Blog Post #2

January 26, 2023

Assigned Project: Training a Convolutional Neural Net to Detect Epileptic Spikes in EEG

We’re only two weeks into the capstone project and I can already tell it’s going to be an awesome quarter. In this capstone course, we got to choose from a variety of projects and fill out a survey on why we should get the opportunity to work on said project. There were a lot of options, and even though the augmented reality (AR) proposals sparked an interest, as soon as I came across my current project, it was an immediate match. Even though I initially did not know what a convolutional neural network was, I did know about EEGs, short for electroencephalograms. Electroencephalography is a medical technique for recording brain waveforms that represent electrical activity in the brain. As mentioned in the About Me section, I work in the field of anesthesia, and we use EEGs on a daily basis to determine depth of anesthesia, so one can only imagine my excitement when the opportunity arose to combine my passion for medicine with my passion for programming and computer science.

Figure 1: BIS monitoring collects raw EEG data in order to calculate a numerical value that corresponds to a patient’s level of consciousness and depth of anesthesia

So perfect: I had an idea of what an EEG is, but I wasn’t too familiar with the convolutional neural network portion. Now I have a better idea, though it may still be a tad bit of a convolution. A convolutional neural network (CNN) is a deep learning architecture that learns to recognize patterns in data, often images, in order to classify it. EEGs are of particular importance here because they may contain a visual representation of seizures. It’s important for medical professionals to be able to record this data and then process it in order to make the appropriate diagnosis and assign timely treatment. EEGs are traditionally read manually, a task that can be very time consuming since a recording may span several hours to several days. Deep learning is a subcategory of machine learning, which in turn is a subcategory of artificial intelligence.

Figure 2: Diagram giving a physical representation of Deep Learning in relation to Machine Learning and Artificial Intelligence
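To give a feel for the convolution a CNN is named after, here is a toy sketch (this is not our actual model, and the spike shape and numbers below are made up for illustration): sliding a small kernel along a noisy signal, which is the basic operation a CNN layer repeats many times with kernels it learns from data.

```python
import numpy as np

rng = np.random.default_rng(0)

signal = rng.normal(0, 0.1, 200)             # background "EEG" noise
spike = np.array([0.2, 1.0, 2.0, 1.0, 0.2])  # hypothetical spike shape
signal[100:105] += spike                     # bury one spike at t = 100

# Slide the spike template over the signal (reversing it turns
# np.convolve into a cross-correlation with the template itself).
response = np.convolve(signal, spike[::-1], mode="same")
detected = int(np.argmax(response))
print(detected)  # the response peaks near sample 102, the spike's center
```

A CNN doesn’t use one hand-picked template like this; it learns a whole bank of kernels, stacks them in layers, and lets later layers combine the simple patterns the early layers find.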

The goal of this project is to advance a deep learning algorithm that takes raw EEG and flags blocks of time on the waveform that have a high probability of containing spikes and/or sharp waves, the waveforms associated with epileptic seizures. Many researchers have attempted to create programs that analyze raw EEG, but they have historically had poor accuracy when identifying these characteristic spikes. This project involves collaborating with a software engineer from Bell Laboratories as they take on this problem within their company; I will refer to him as our client. Our client has been with the company for six years and has working experience with machine learning. To get us up to speed, he shared with us an article from 2019: Automatic Analysis of EEGs Using Big Data and Hybrid Deep Learning Architectures (Golmohammadi, et al., 2019). As the title suggests, we will be using big data, which benefits our deep learning model. Big data refers to data that is large, fast, and variable, and which cannot be handled with traditional data analysis tools. EEGs fit all three: recordings quickly add up in storage since they can span hours or days, identification of spikes of interest should happen quickly, and the waveforms themselves are highly variable.

We will feed our model a large amount of training data consisting of physician-annotated EEG waveforms and then evaluate its performance on new, unseen waveforms. What makes this task especially challenging is that most of the training data, which comes from a collection at Temple University, is background. A person with epilepsy is fortunately not always having a seizure; in fact, the article mentioned above states that approximately 1% of the data represents seizure activity, while the rest is artifact or background noise that is not clinically significant.
This project therefore holds a lot of promise, should it work. It will take a lot of effort, but we are willing to give it our best attempt. That’s right: I’m not on my own on this mission.
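That 1% figure makes class imbalance a defining challenge: a lazy model could predict "background" for every window and still be 99% accurate. One common countermeasure (a sketch, not necessarily what our final model will use) is to weight each class by the inverse of its frequency, so mistakes on rare spike windows cost more during training. The label counts below are made up to mimic the corpus imbalance:

```python
import numpy as np

# Hypothetical labels: ~1% spike (1), ~99% background (0).
labels = np.zeros(1000, dtype=int)
labels[:10] = 1                      # 10 spike windows out of 1000

# Inverse-frequency class weights: n_samples / (n_classes * count).
counts = np.bincount(labels)          # [990, 10]
weights = len(labels) / (len(counts) * counts)
print(dict(enumerate(weights)))       # background ~0.505, spike = 50.0
```

Most deep learning frameworks accept weights like these directly in their loss functions, so a misclassified spike window contributes roughly a hundred times the gradient of a misclassified background window.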

Figure 3: EEG waveforms can be altered by background noise, artifacts, and even facial movements (e.g. eye movements). Image courtesy of

This project has brought together two more classmates whom I am fortunate to have on my team. Both of my teammates have proven to be reliable and are willing to share their wealth of knowledge and experience. We all bring strengths that complement each other, and I am looking forward to seeing our final outcome. The project may seem rather daunting at first, but we are not on our own: as mentioned above, we also have our client. Even though he was on vacation last week, he is back this week and seems willing to dedicate a good amount of time to fostering our growth. We have set up weekly team meetings as well as a live call with our client, during which he will be available to answer questions as they come up. Keep an eye on this blog; I’m pretty sure we’ll all be amazed by what we accomplish by the end of the quarter.

What’s on deck: For the next meeting, a week from today, I have been assigned to look at a few EDF files, the format in which the EEGs are stored in the Temple University Events Corpus, and to characterize what spikes and sharp waves look like. It seems like a rather simple task, but with only 1% of the data representing spikes, even with the annotations, I’ll have to fine-tune my EEG reading ability in order to best refine the data used to train our model.
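Once I can read the annotations, turning them into training labels is conceptually simple. Here is a toy sketch of that step (not the real corpus reader; the interval format and window size are my own assumptions): given physician annotations as (start, end) spike intervals in seconds, mark each fixed one-second window of the recording as spike or background.

```python
# Label fixed-length windows of a recording from annotated spike intervals.
def label_windows(duration_sec, spike_intervals, window_sec=1.0):
    labels = []
    t = 0.0
    while t < duration_sec:
        # A window is a "spike" window if it overlaps any annotated interval.
        in_spike = any(s < t + window_sec and e > t for s, e in spike_intervals)
        labels.append(1 if in_spike else 0)
        t += window_sec
    return labels

# 100-second toy recording with one annotated spike around t = 42.3 s.
labels = label_windows(100, [(42.3, 42.7)])
print(sum(labels), len(labels))  # 1 spike window out of 100: ~1% of the data
```

Even this toy example reproduces the imbalance problem: one labeled window in a hundred, which is why reading the raw waveforms carefully matters so much.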

Figure 4: Convolutional neural networks take an input image, perform feature extraction by identifying its key components through convolutional layers, then pass the result through fully connected layers that together produce a probabilistic determination about the initial image. Diagram courtesy of
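To put a number on that final "probabilistic determination" step: classification networks typically end with a softmax, which turns the raw scores from the last layer into probabilities that sum to one. The scores below are made up purely for illustration:

```python
import math

def softmax(logits):
    # Exponentiate each score, then normalize so the outputs sum to 1.
    exps = [math.exp(x) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical raw scores for [spike, background] on one EEG window.
probs = softmax([2.0, 0.1])
print(probs)  # roughly [0.87, 0.13]: the net would call this window a spike
```

In practice the network outputs one such probability pair per block of time, which is exactly the "blocks of time with a high probability of spikes" behavior this project is after.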