Blog Post #3

February 9, 2023

Assigned Project: Training a Convolutional Neural Net to Detect Epileptic Spikes in EEG

Progress Topic: Utilizing MNE to characterize EEG Data and waveforms of interest

It’s been about two weeks since my last post, and it’s incredible how much one can learn, work on, and troubleshoot in that time. Previously I stated that we’d be working with EEG data, but where to start? I initially tried an EEG plugin for MATLAB, but after talking with our project sponsor, he suggested I use MNE instead. MNE is an open-source Python module that lets us process and visualize EEG waveforms. I also brushed up on how EEG waveforms come to be: electrode placement, the 10-20 configuration, and the montages that can be created from it.

In short, electrodes are placed on the scalp in the internationally established 10-20 configuration, where the numbers refer to the fact that adjacent electrodes are spaced at 10% or 20% of the total distance along a particular axis of the head. You can see the electrode placements below in Figure 1, where the nasion and inion serve as the two reference points. The nasion is the depression above the bridge of your nose and below your brow; the inion is the most prominent bony point at the back of your head. The letters correspond to regions of the head: frontal (F), temporal (T), central (C), and occipital (O). As for the numbers, the farther an electrode sits from the midline (marked zero, or Z), the larger the value. You’ll also notice that odd numbers correspond to the left side of the head and even numbers to the right. For completeness, the “A” stands for auricular, i.e. the ears.

Figure 1: Electrode placement on the scalp according to the 10-20 system
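
To get a feel for this layout, MNE actually ships with the standard 10-20 electrode positions built in. Here is a minimal sketch, assuming only that MNE is installed, that plots those positions on a schematic head (the montage name "standard_1020" is the library's own):

import mne

# Load MNE's built-in 10-20 montage and plot the electrode positions
# on a schematic head, labeling each channel.
montage = mne.channels.make_standard_montage("standard_1020")
montage.plot(kind="topomap", show_names=True)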

A waveform then arises from taking the difference between the electric potentials at two points. Take these differences in a particular order and you get a montage, e.g. a bipolar montage (the difference between adjacent electrodes), an average reference montage (the difference between an electrode and the average potential of all electrodes), etc. Many bodies of EEG data, including our own, are stored in a referential manner so that they can be transformed into other montage styles. I therefore used MNE to take in the raw EEG data and pre-process it: adding filters and taking the differences that correspond to a specific kind of bipolar montage.

Pre-process? Filter? New terms! Time to introduce them. EEG recordings are susceptible to multiple forms of artifact and therefore must first be pre-processed and filtered before any information, such as seizure activity, can be viewed. A common artifact comes from the alternating current powering nearby plugged-in devices, such as computers; these are referred to as AC artifacts. Normally, if two electrodes are in the same vicinity, as they are on the scalp, both pick up approximately the same amount of this noise, so the AC artifact “cancels out” when the difference is taken. If an electrode is poorly placed, however, it picks up more of the interference than its neighbor, and the artifact becomes visible in the resulting waveform. A common remedy in EEG is the notch filter, which, set at 60 Hz, helps remove these artifacts. You can see a common electrical plug displaying the 60 Hz AC input in Figure 2.

Figure 2: Most US electrical adapters run on 60 Hz AC, which can appear on EEG recordings
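
To make those pre-processing steps concrete, here is a rough sketch of how they look in MNE. The file name and the electrode chain are hypothetical placeholders rather than our actual data or montage, but the calls themselves (reading an EDF file, notch-filtering at 60 Hz, and forming a bipolar reference) are standard MNE usage:

import mne

# Read a referential EEG recording (hypothetical file name) into memory.
raw = mne.io.read_raw_edf("subject01.edf", preload=True)

# Notch filter at 60 Hz to suppress AC (mains) interference.
raw.notch_filter(freqs=60)

# Build a short bipolar chain: each new channel is anode minus cathode
# for an adjacent pair of electrodes (channel names here are illustrative).
anodes = ["Fp1", "F3", "C3", "P3"]
cathodes = ["F3", "C3", "P3", "O1"]
raw_bipolar = mne.set_bipolar_reference(raw, anode=anodes, cathode=cathodes)

# Inspect the resulting bipolar waveforms.
raw_bipolar.plot()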

It seems like the best way to prevent this type of artifact would be to ensure that all electrodes are adequately placed, i.e. no loose electrodes, but there is one more source of artifacts that cannot be prevented, and it involves our eyes. Whenever someone closes their eyes, the eyes roll upward according to Bell’s phenomenon. It turns out that each eyeball is actually a dipole, meaning the two ends of the eye carry opposite electrical charges: the cornea, at the front of the eye, is positively charged, and the retina, at the back of the eye, is negatively charged. These slight electrical differences are picked up by the frontal electrodes and can appear as low-frequency deflections on an EEG; see Figure 3.

Figure 3: EEG waveforms with an arrow pointing at deflections that result from eye movement. EEG courtesy of eegpedia (Link: http://www.eegpedia.org/index.php?title=Eye_movements_artifact)
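
MNE also has helpers for spotting these ocular deflections. As a hedged sketch, assuming the `raw` object from the earlier snippet and that a frontal channel such as Fp1 picks up the ocular dipole, you could do something like this:

import mne

# Detect eye-movement events using a frontal channel as a proxy EOG channel,
# build epochs around those events, and plot the average deflection.
eog_epochs = mne.preprocessing.create_eog_epochs(raw, ch_name="Fp1")
eog_epochs.average().plot()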

There are specific waveforms we are looking for in our project that correspond to epileptic activity. Unfortunately, I am not able to show exact examples of what I am working on in order to adhere to the non-disclosure agreement I signed, but working with our data set in MNE has allowed me to visually present to my group what we are looking for. I will continue to work with MNE in Python to refine the filtering parameters, since this is the data we will be training our recognition pipeline on; a generic example of that kind of filtering is sketched below.
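
As an illustration only (the cutoff values here are placeholders, not the parameters we will ultimately use, and `raw` is again the object from the earlier snippet), a band-pass filter in MNE looks like this:

# Band-pass filter a copy of the recording between 1 and 70 Hz
# and browse the filtered traces in 10-second windows.
raw_filtered = raw.copy().filter(l_freq=1.0, h_freq=70.0)
raw_filtered.plot(duration=10)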

This is a collaborative project, so in the same way that I learned how EEG waveforms come to be, I created a PowerPoint and presented it to my team to keep us all on the same page. I am very appreciative of my team: just as I share this knowledge, they share the knowledge they have collected over the past few weeks. We all do our best to stay aligned, and our project mentor does his part to keep us on track.

What’s on deck: I have learned a great deal about EEG waveforms, and I’m excited to learn how to train our machine learning model next. More to come, so stay tuned!