From Noise to Knowledge: Processing EEG Data for BCI Applications

July 4, 2023

Introduction

In the realm of Brain-Computer Interfaces (BCIs), signal processing plays a crucial role in transforming raw EEG data into meaningful information. In this article, we delve into the second step of creating a BCI: the cleaning and analysis of EEG data. Building upon our previous article on designing and conducting an EEG experiment, we explore the essential tasks performed by the signal processing team at neuroTUM to unlock the potential of brain activity.

Figure 1: Schematic of a BCI system

Understanding EEG Data

EEG data provides valuable insights into the electrical activity of the brain, allowing us to decode the information encoded within it. EEG signals are obtained by measuring voltage fluctuations from different neuron populations in the brain using electrodes placed on the scalp. These voltage fluctuations manifest as patterns over time and reflect the neural firing activity underlying cognitive tasks.

However, the recorded EEG data is not purely informative. It also contains noise and captures signals we may not be interested in (such as eye blinks), which can obscure the neural patterns that are actually relevant. Signal processing techniques are employed to disentangle the relevant information from the rest, enabling us to extract meaningful insights from the EEG signals.

Figure 2: Example data graphed to show the voltage fluctuations over time. Each channel represents one electrode placed on the scalp. The coloured vertical lines on the plot indicate markers, which tell us when certain things happened in the experiment; for example, the pilot sees the cue to imagine left-hand movement. These markers are very important because they allow us to temporally align what was happening in the brain (which we can figure out from the EEG data) with what was happening on the screen, or during the experiment in general.

The Steps of Signal Processing

Inside the signal processing "box," the data goes through various stages of preprocessing and analysis. The exact steps and algorithms used to process a dataset depend heavily on the experiment and the type of analysis you are interested in. Signal processing procedures also differ based on whether the analysis is performed online (in real time) or offline. Online processing is necessary for the live BCI application, while offline processing produces the training data for the machine learning model.

In offline signal processing, the entire recorded dataset is available for analysis. Data is imported into a Python script, and preprocessing steps, including noise filtering, epoch segmentation, and artifact removal, are applied to clean the data. Power spectral density analysis helps characterize the frequency content, and the resulting clean data is typically used to train deep learning models.

Real-time signal processing, on the other hand, focuses on processing a continuous stream of data as it arrives. Since only a time window of recently recorded data is available, the preprocessing steps are adapted to operate on smaller segments of data. Real-time processing is vital for applications like gaming interfaces, where immediate interaction with the BCI system is required.
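
As an illustration, here is a minimal sketch of chunk-wise band-pass filtering for a continuous stream, assuming SciPy, a 250 Hz sampling rate, and 50-sample chunks; these parameters are placeholders for this example, not a description of our actual online pipeline:

```python
import numpy as np
from scipy.signal import butter, lfilter, lfilter_zi

# Design a causal 1-30 Hz band-pass filter for a 250 Hz stream.
fs = 250.0
b, a = butter(4, [1.0, 30.0], btype="bandpass", fs=fs)
zi = lfilter_zi(b, a)  # filter state carried over between chunks

def process_chunk(chunk, zi):
    """Filter one incoming chunk of samples, preserving filter state."""
    filtered, zi = lfilter(b, a, chunk, zi=zi)
    return filtered, zi

# Simulated stream: 50-sample chunks of single-channel EEG.
for _ in range(10):
    chunk = np.random.randn(50)
    filtered, zi = process_chunk(chunk, zi)
```

Because the filter state is passed from one chunk to the next, the output matches what a causal filter would produce on the full recording, which is what makes window-based processing feasible in real time.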

Let's explore the key steps involved in our current offline processing pipeline:

Data Import and Bandpass Filtering: First, the dataset recorded by the experimental design team is imported into a Python script. We make use of a library called MNE-Python, with which we create an object that contains all the EEG data and relevant information about it (stay tuned for more detailed articles on MNE-Python and how we use it at neuroTUM). We then filter out the frequencies we aren't interested in, applying a bandpass filter from 1 to 30 Hz to all channels.
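
A minimal sketch of these two steps with MNE-Python; the file name and format below are placeholders, since the actual recording may be stored differently:

```python
import mne

# Load the recording into an MNE Raw object (file name is a placeholder).
raw = mne.io.read_raw_fif("motor_imagery_session_raw.fif", preload=True)

# Band-pass filter all EEG channels between 1 and 30 Hz.
raw.filter(l_freq=1.0, h_freq=30.0, picks="eeg")
```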

Epoch Segmentation: For the offline processing, the recorded data is further segmented into smaller chunks of data called epochs using the markers. These markers, synchronized with the experiment, indicate specific events, such as cue presentation or task performance. Epoch segmentation allows us to isolate the data relevant to particular tasks or events for further analysis.
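
Continuing the sketch above, and assuming the markers are stored as annotations on the recording, epochs can be cut around each cue; the time window here is an illustrative choice:

```python
# Convert the markers into an events array and cut epochs around them.
events, event_id = mne.events_from_annotations(raw)

epochs = mne.Epochs(
    raw,
    events,
    event_id=event_id,
    tmin=-0.5,      # 0.5 s before the cue
    tmax=3.0,       # 3 s after the cue
    baseline=None,  # baseline correction handled separately, if at all
    preload=True,
)
```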

Artifact Removal: Artifacts, such as eye blinks or muscle activity, can distort the EEG signals. Techniques like independent component analysis (ICA) are employed to separate these artifacts from the brain-related signals, ensuring the accuracy and reliability of subsequent analyses.
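
A sketch of blink removal with MNE-Python's ICA implementation; the number of components and the excluded component index are placeholders that would normally be chosen after inspecting the components:

```python
from mne.preprocessing import ICA

# Fit ICA on the band-pass-filtered data.
ica = ICA(n_components=15, random_state=42)
ica.fit(raw)

# Inspect the component time courses (and topographies, if channel
# locations are available) to spot blink-like components.
ica.plot_sources(raw)

ica.exclude = [0]  # e.g. component 0 looks like an eye blink (placeholder)
raw_clean = ica.apply(raw.copy())
```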

Power Spectral Density Analysis: To gain insight into the frequency characteristics of the EEG signals, power spectral density (PSD) analysis is performed. This analysis helps identify rhythmic activities, such as alpha, beta, and gamma waves, which are associated with different cognitive processes. Visualizing the power spectral density provides a comprehensive view of the frequency content of the EEG data.
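
With a recent MNE-Python version (which provides the compute_psd API), the PSD of the cleaned recording can be computed and plotted in two lines, for example:

```python
# Compute the PSD over the pass band and plot it per channel,
# making rhythms such as alpha (8-12 Hz) and beta (13-30 Hz) visible.
spectrum = raw_clean.compute_psd(fmin=1.0, fmax=30.0)
spectrum.plot()
```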

Channel Assessment: In some cases, certain EEG channels may exhibit poor signal quality due to technical issues or physiological factors. Identifying and excluding these "bad channels" from further processing is necessary to maintain data integrity and prevent misleading results.
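
A minimal sketch of excluding bad channels in MNE-Python; the channel name below is a placeholder that would be chosen after visual inspection of the traces:

```python
# Mark channels with poor signal quality, then drop them.
raw_clean.info["bads"] = ["T7"]  # placeholder channel name
raw_clean.drop_channels(raw_clean.info["bads"])

# With a montage attached, bad channels could instead be
# interpolated via raw_clean.interpolate_bads().
```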

Conclusion

Signal processing serves as a vital bridge between raw EEG data and meaningful insights in the field of Brain-Computer Interfaces. By applying techniques such as noise filtering, artifact removal, and power spectral density analysis, signal processing teams unlock the hidden potential of brain activity. Whether it is offline analysis for training deep learning models or real-time processing for interactive applications, signal processing plays a pivotal role in advancing the capabilities of Brain-Computer Interfaces and paving the way for new frontiers in neuroscience and human-computer interaction.

Article written by Isabel

Edited by Charlie
