Category Archives: Electrical Engineering and Computer Science

Our Energy System in Transition: Pushing The Grid Towards Zero Emissions

Our climate in the next thirty years will not look the same as it does today, and that’s exactly why our energy systems will also soon look completely different. Energy systems are the big umbrella covering how and where we generate electricity, how we transport that electricity, and how we use it. We’re discussing the past and the future of our energy environment with Emily Richardson, a Master of Engineering student in the Energy Systems Program.

Emily holding up a multi-colored sign with the words "FOR THE WATER WE DRINK".
Emily Richardson preparing for some good trouble

When our energy infrastructure was originally built, energy generation, transport, and usage formed a one-way street. Utility companies made or acquired the electricity and built poles and wires to transport it to the homes and businesses where it was used. Although that infrastructure was only made to last 50 years, much of it is pushing 100 years of operation.

“If it ain’t broke, don’t fix it,” some might say, but we’re not living in the same energy reality as when the infrastructure was originally built. For in-depth visuals of our energy generation and usage, we recommend the energy flow charts from Lawrence Livermore National Laboratory. Now we have a different energy portfolio (e.g. wind and solar), and a two-way street of electricity movement is also required. Rooftop solar helps power individual homes, but when little to no energy is being used in the house and it’s sunny outside, the excess generation on your rooftop moves back upstream and can fulfill energy needs in other places. A two-way street is quickly being paved. It’s worth remembering that electricity is generated on demand, meaning we only make exactly as much as is being used at any moment. Excess generation that is highly distributed (i.e. home solar panels) adds another level of complexity to our energy systems because there is no “overflow” valve for electricity.

Imagine if your toilet, which slowly moves water in one direction, was suddenly expected to move water in the other direction, back and forth, as quickly as the speed of light? Yikes indeed. City-wide plumbing infrastructure was built to accommodate the most extreme events like the Super Bowl flush (when everyone in the city/state/country runs to the bathroom at halftime). While it’s an extreme circumstance, the infrastructure was built to prepare for it, and it works! But our energy systems were hardly made for this kind of reverse movement of energy, especially on a large scale as more people install rooftop solar.

Beyond the two-way street, there’s also rush hour to worry about. The UK is known for its tea; at a specific time after a popular TV show ends, roughly one million tea kettles get switched on almost simultaneously. Without planning and foresight this would lead to an electricity shortage and people losing power. But UK grid operators bring in an extra 200-600 megawatts of power, sometimes coming from a hydroelectric dam and/or nuclear plants, to accommodate the hot tea requirements. It’s surprisingly complicated to move this much power all at once, but with strategic planning there are solutions!
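To put the “million tea kettles” in perspective, here is a rough back-of-envelope calculation in Python. The per-kettle wattage and the fraction of kettles switching on at the exact same moment are assumed numbers for illustration, not figures from the episode.

```python
# Rough back-of-envelope estimate of the UK "TV pickup" surge.
# Assumed values (not from the episode): a typical electric kettle
# draws roughly 2-3 kW, and only a fraction of the ~1 million kettles
# switch on in the exact same instant.

N_KETTLES = 1_000_000          # kettles turned on around the ad break
KETTLE_POWER_KW = 2.5          # assumed draw per kettle, in kilowatts
SIMULTANEITY = 0.1             # assumed fraction on at the same moment

surge_mw = N_KETTLES * KETTLE_POWER_KW * SIMULTANEITY / 1_000  # kW -> MW
print(f"Estimated extra demand: {surge_mw:.0f} MW")
# With these assumptions the surge lands in the hundreds of megawatts,
# the same order of magnitude as the 200-600 MW mentioned above.
```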

Everything in the energy world is physically connected. Even if the poles, wires, and outlets are hidden behind walls, there’s an immense amount of planning and design that you will never see, because when infrastructure is working well you can easily forget it exists. When it fails, it can fail catastrophically. The 2020 Holiday Farm Fire in Oregon was started by downed powerlines, and the 2018 Camp Fire in California, which destroyed the town of Paradise, was also started by malfunctioning powerlines. There are a multitude of reasons why those fires were especially damaging (location of ignition, exceptionally dry fuels, extreme wind events, drought- and insect-stressed trees, too many trees per acre, etc.), and why wildfires will get worse in the future (rising temperatures and changing precipitation patterns).

But our collective future requires a lot of energy to be efficiently distributed and stored, and that requires a radical shift in our hardware, software, and maybe even our philosophy of energy usage. You don’t want to miss the discussion with Emily, who will give us a deep dive on how we arrived at our energy reality and what our energy future will need to look like. This conversation is happening at 7pm on KBVR 88.7 FM, but you can also listen via the podcast feed.

Emily at the edge of a lake ready to begin kayaking
Emily Richardson preparing for some adventures on the kayak

Additional Notes
On air we mentioned a few resources that provide deeper dives! The first is the Energy Gang Podcast, which focuses on energy, clean technology, and the environment. The Big Switch Podcast is a five-part series on how the power grid works and how upcoming changes to the grid can help society. The Volts Podcast is an interview-based show untangling our messy climate future and hopeful energy transitions. Emily also mentioned a presentation titled Imagining a Zero Emissions Energy System.

The rigamarole of RNA, ribosomes, and machine learning

Biology and computer science are probably not the most intuitive pairing of scientific disciplines, at least not as intuitive as, say, biology and chemistry (often combined as biochem). However, for Joseph Valencia, a third-year PhD student at OSU, the bridge between these two disciplines is a view of life at the molecular scale as a computational process in which cells store, transmit, and interpret the information necessary for survival.

Think back to your 9th or 10th grade biology class and you will (probably? maybe?) vaguely remember learning about DNA, RNA, proteins, ribosomes, and much more. In case your memory is a little foggy, here is a short (and very simplified) recap of the basic biology. DNA is the information storage component of cells. RNA, which is the focus of Joseph’s research, is the messenger that carries information from DNA to control the synthesis of proteins. This process is called translation, and ribosomes are required to carry it out. Ribosomes are complex molecular machines, and each of our cells contains many of them. Their job is to interpret the RNA: they attach themselves to the RNA, read the information it contains, and produce a protein. The protein folds into a specific 3D shape, and that shape determines the protein’s function. What do proteins do? Basically control everything in our bodies! Proteins include the enzymes that control everything from muscle repair to eye twitching. The amazing thing about this process is that it is not specific to humans; it is a fundamental part of basic biology that occurs in essentially every living thing!

An open reading frame (ORF) is a stretch of nucleotides beginning with a start codon and ending with a stop codon. Ribosomes bind to RNA transcripts and translate certain ORFs into proteins. The Kozak sequence (bottom right, from Wikipedia) depicts the nucleotides that commonly occur around the start codons of translated ORFs.
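To make the ORF idea concrete, here is a toy Python sketch that finds the first AUG start codon in an RNA string, reads codon by codon until it hits a stop codon, and translates with a deliberately tiny codon table. It is purely illustrative and is not a tool from Joseph’s research.

```python
# Toy illustration of an open reading frame (ORF): scan for a start
# codon (AUG), then read three letters at a time until a stop codon.
# The codon table below is a tiny subset, just enough for the example.

CODON_TABLE = {
    "AUG": "M",  # start codon, also codes for methionine
    "GCU": "A", "GAA": "E", "AAA": "K", "UUU": "F",
    "UAA": "*", "UAG": "*", "UGA": "*",  # stop codons
}

def translate_first_orf(rna: str) -> str:
    """Return the amino-acid string for the first ORF found in `rna`."""
    start = rna.find("AUG")
    if start == -1:
        return ""  # no start codon, nothing gets translated
    protein = []
    for i in range(start, len(rna) - 2, 3):
        codon = rna[i:i + 3]
        amino_acid = CODON_TABLE.get(codon, "?")  # "?" = not in the toy table
        if amino_acid == "*":  # stop codon: the ribosome releases the protein
            break
        protein.append(amino_acid)
    return "".join(protein)

print(translate_first_orf("GGAUGGCUGAAAAAUUUUAAGG"))  # -> "MAEKF"
```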

So now that you are refreshed on your high school biology, let us tie all of these ‘basics’ to what Joseph does for his research. Joseph’s research focuses on RNA, which can be broken down into two main groups: messenger RNA (mRNA) and non-coding RNA. mRNA is what a ribosome translates into a protein, whereas long non-coding RNA is not translated. While we are able to distinguish between the two types of RNA, we do not fully understand how a ribosome decides to translate one RNA (the mRNA) into a protein and not another (the long non-coding RNA). That’s where Joseph and computer science come in – Joseph is building a machine learning model to try to better understand this ribosomal decision-making process.

Machine learning, a field within artificial intelligence, can be defined as any approach that creates an algorithm or model by using data rather than programmer-specified rules. Lots of data. Modern machine learning models tend to keep learning and improving as more data is fed to them. While there are many different types of machine-learning approaches, Joseph is interested in one called natural language processing. You are probably pretty familiar with an example of natural language processing at work – Google Translate! The idea behind the model Joseph is building is in fact not too dissimilar from Google Translate; except that instead of taking English and translating it into Spanish, Joseph’s model takes RNA and translates (or doesn’t translate) it into a protein. In Joseph’s own words, “We’re going through this whole rigamarole [aka his PhD] to understand how the ins [RNA & ribosomes] create the outs [proteins].”
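For a concrete (and heavily simplified) picture of what “a model that reads RNA” can look like, here is a minimal PyTorch sketch of a sequence classifier that takes a transcript and outputs a probability that it is protein-coding. This is an assumed toy architecture for illustration only, not Joseph’s actual model (his real architecture is summarized in the diagram below).

```python
# Minimal sketch (PyTorch) of a sequence classifier that reads an RNA
# transcript and predicts whether it is protein-coding. A toy stand-in
# for the general idea, not Joseph's model.

import torch
import torch.nn as nn

VOCAB = {"A": 0, "C": 1, "G": 2, "U": 3}

def encode(transcript: str) -> torch.Tensor:
    """Map an RNA string to a tensor of nucleotide indices."""
    return torch.tensor([[VOCAB[nt] for nt in transcript]])  # shape (1, length)

class ToyTranscriptClassifier(nn.Module):
    def __init__(self, embed_dim: int = 16, hidden_dim: int = 32):
        super().__init__()
        self.embed = nn.Embedding(len(VOCAB), embed_dim)   # one vector per nucleotide
        self.rnn = nn.GRU(embed_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, 1)               # coding vs. non-coding score

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        emb = self.embed(x)                  # (batch, length, embed_dim)
        _, last_hidden = self.rnn(emb)       # summary of the whole transcript
        return torch.sigmoid(self.head(last_hidden[-1]))  # probability of "coding"

model = ToyTranscriptClassifier()
prob = model(encode("GGAUGGCUGAAAAAUUUUAAGG"))
print(f"P(coding) from the untrained toy model: {prob.item():.2f}")
```

A real model would be trained on many labeled transcripts; this sketch only shows the input-to-prediction plumbing.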

A high-level diagram of Joseph’s deep learning model architecture.

But it is not as easy as it sounds. There are a lot of complexities to the work: the very complexity that makes machine learning models so powerful also makes it hard to interpret why a model is doing what it is doing. Even a highly performing machine learning model may not capture the exact biological rules that govern translation, but successfully interpreting its learned patterns can help in formulating testable hypotheses about this fundamental life process.

To hear more about how Joseph is building this model, how it is going, and what brought him to OSU, listen to the podcast episode! Also, you can check out Joseph’s personal website to learn more about him & his work!

Mini-Molecules and Mighty Ideas

This week we have on the show Dr. Bo Wu – he recently graduated from Oregon State University with a Ph.D. from the Electrical Engineering department, where he developed new sensors to monitor three different neurotransmitters that are correlated with our stress, mood, and happiness. Even though so many of our bodily functions rely on these neurotransmitters (cortisol, serotonin, dopamine), there are currently no commercial or rapid techniques to monitor these tiny molecules. Since the majority of innovations in university settings never get beyond the walls of the Ivory Tower, Bo wanted to design sensors with functionality and scalability in mind. Those basic principles are why Bo was attracted to joining the lab of Dr. Larry Cheng; instead of letting innovations sit on university shelves, their innovations must be designed to be brought to market. Using nano-fabrication technology, Bo developed sensors about the size of a thumbnail that provide rapid and accurate measurements of different neurotransmitters outside the hospital setting. The promise of having these mini-molecules measured as a point-of-care diagnostic (i.e. measured by the patient) is an exciting advancement in the medical field.

This innovation is not the only one coming from Bo; with the help of a colleague, he designed a product for researchers to easily reformat academic papers for submission to other journals. If you didn’t know, submitting manuscripts to different journals takes an immense amount of time because of the formatting changes required. These changes are tedious and can take a week or longer, time that could be spent on crucial research experiments. While this service was originally designed for engineering publications, the COVID-19 pandemic showed them there was a greater and more immediate need. With so many people losing their jobs, they re-designed the software to help people create and re-imagine their resumes for job applications. Their website, WiseDoc.net, is now geared toward helping job seekers build stronger resumes, but Bo and his team expect to return to the original idea of re-formatting papers for academic publication, expanding beyond just engineering journals. Thanks to Oregon State’s Advantage Accelerator Program, Bo and his co-founder were able to refine their product and acquire seed money to get the website off the ground, which now employs a small international team to maintain and improve its services. If you have questions for Bo about starting your own business, being an international student, or the Advantage Accelerator program, you can contact him by email at wubo[at]oregonstate[dot]edu.

Did you miss the show on Sunday? You can listen to Bo’s episode on Apple Podcasts!

Learning without a brain

Instructions for how to win a soccer game:

Score more goals than your opponent.

Sounds simple, but these instructions don’t begin to explain the complexity of soccer and are useless without knowledge of the rules of soccer or how a “goal” is “scored.” Cataloging the numerous variables and situations to win at soccer is impossible and even having all that information will not guarantee a win. Soccer takes teamwork and practice.

Researchers in robotics are trying to figure out how to make a robot learn behaviors in games such as soccer, which require collaborative and/or competitive behaviors.

How then would you teach a group of robots to play soccer? Robots don’t have “bodies,” and instructions based on human body movement are irrelevant. Robots can’t watch a game and later try some fancy footwork. Robots can’t understand English unless they are designed to. How would the robots communicate with each other on the field? If a robot team did win a soccer game, how would they know?

Multiple robot systems are already a reality in automated warehouses.

Although this is merely an illustrative example, these are the types of challenges encountered by folks working to design robots to accomplish specific tasks. The main tool for teaching a robot to do anything is machine learning. With machine learning, a roboticist can give a robot limited instructions for a task, the robot can attempt the task many times, and the roboticist can reward the robot when the task is performed successfully. This allows the robot to learn how to accomplish the task and use that experience to further improve. In our soccer example, the robot team is rewarded when they score a goal, and they can get better at scoring goals and winning games.
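To make that reward loop concrete, here is a toy Python sketch of trial-and-error learning: a single “robot” repeatedly picks a kicking action, gets a reward of 1 when it scores, and gradually favors the action that scores most often. The actions and scoring probabilities are invented for illustration and are not from a real robot soccer setup.

```python
# Toy sketch of reward-driven learning: a "robot" repeatedly tries
# actions, gets a reward when it scores, and shifts toward the actions
# that have worked best so far. The scoring probabilities are made up.

import random

SCORE_PROB = {"shoot_left": 0.1, "shoot_center": 0.3, "shoot_right": 0.6}
value = {action: 0.0 for action in SCORE_PROB}   # learned estimate per action
counts = {action: 0 for action in SCORE_PROB}
EPSILON = 0.1                                    # how often to explore randomly

for trial in range(5_000):
    if random.random() < EPSILON:
        action = random.choice(list(SCORE_PROB))           # explore
    else:
        action = max(value, key=value.get)                 # exploit best so far
    reward = 1.0 if random.random() < SCORE_PROB[action] else 0.0  # goal or not
    counts[action] += 1
    value[action] += (reward - value[action]) / counts[action]     # running average

print(value)  # after many trials, "shoot_right" should have the highest value
```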

Programming machines to automatically learn collaborative skills is very hard because the outcome depends not only on what one robot did, but on what all the other robots did; thus it is hard to learn who contributed the most and in what way.
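One well-known way researchers attack this credit-assignment problem (offered here as general background, not necessarily the method Yathartha uses) is the “difference reward”: instead of handing every robot the raw team score, each robot is rewarded by how much the team score changes when its own contribution is removed. In equation form,

D_i(z) = G(z) - G(z_{-i}),

where G(z) is the team’s score for the joint action z and z_{-i} is the same joint action with robot i’s contribution removed or replaced by a do-nothing default. A robot whose removal barely changes the score gets little credit; a robot whose removal tanks the score gets a lot.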

Our guest this week, Yathartha Tuladhar, a PhD student studying Robotics in the College of Engineering, is focused on improving multi-robot coordination. He is investigating both how to effectively reward robots and how robot-to-robot communication can increase success. Fun fact: robots don’t use human language communication. Roboticists define a limited vocabulary of numbers or letters that can become words and allow the robots to learn their own language. Not even the roboticist will be able to decode the communication!

 

Human-Robot collaborative teams will play a crucial role in the future of search and rescue.

Yathartha is from Nepal and became interested in electrical engineering as a career that would aid infrastructure development in his country. After getting a scholarship to study electrical engineering in the US at the University of Texas at Arlington, he learned that electrical engineering is more than developing networks and helping buildings run on electricity. He found that electrical engineering is about discovery, creation, trial, and error. Ultimately, it was an experience volunteering in a robotics lab as an undergraduate that led him to where he is today.

Tune in on Sunday at 7pm and be ready for some mind-blowing information about robots and machine learning. Listen locally to 88.7FM, stream the show live, or check out our podcast.

Don’t just dream big, dream bigger

If you’ve purchased a device with a display (e.g. television, computer, mobile phone, handheld game console) in the last couple of decades, you may be familiar with at least some of the following acronyms: LCD, LED, OLED, Quantum LED – no, I did not make that up. Personally, I find it all a bit overwhelming and difficult to keep up with, as display technology is evolving so rapidly. But until a display can replicate an image that is indistinguishable from what we see in nature, there will always be a desire to make the picture more lifelike. The limiting factor in making displays appear realistic is the number of colors used to make the image; currently, not all color wavelengths are used.

Akash conducting research on nanoparticles.

This week’s guest, Akash Kannegulla, studies how light interacts with nanostructured metals for applications in advancing display technology as well as biosensing. Akash is a PhD candidate in the Electrical Engineering and Computer Science program with a focus in Materials and Devices in the Cheng Lab. By exploiting the physical and chemical properties of nanoparticles, Akash is able to work toward the advancement of display and biosensing technologies.

When light shines on these metals, electrons and photons interact and oscillate to create a surface plasmon, or “electron cloud”. Under specific conditions, when fluorescent dye near the surface plasmon is excited with UV light, electrons move to higher energy levels. When the electrons return to lower energy levels, energy is released in the form of light. This light is 10-100X brighter than it would be without the plasmonic enhancement. With this light magnification, less voltage is needed to produce a comparable brightness level. This has two main benefits: first, consumer products can use less energy to produce the same visual experience, so we can significantly decrease our carbon footprint. Second, these unique conditions can be exploited at the nano-scale, which means smaller pixels and more colors can be produced, so our TV screens will look more and more like the real world around us. These new nano-scale advancements have extremely tight tolerances in order to work; however, in this case, not working can also provide some incredible information.

This technology can be applied in biosensing to detect mismatches in DNA sequences. A ‘mismatch’ in a DNA sequence has a slightly different chemical bond: the distance between the atoms is ever so slightly different than expected, and that tiny difference can be detected by how intense the light is. Again, the nanoscale is frustratingly finicky about how precise the conditions must be in order to get the expected response – in this case, light intensity. So when we get a ‘dim’ spot, it can be indicative of a mismatched DNA segment! Akash predicts that in just a few years, this nanotechnology will make single-nucleotide differences detectable with sensing technology on a small chip or with a phone camera, rather than a machine half the size of a MINI Cooper.

Akash, the entrepreneur, with his winning certificate for the WIN Shark Tank 2018 competition.

In addition to Akash’s research, he has spent a significant portion of his graduate career investing in an award-winning start-up company, Wisedoc. This project was inspired by the frustration Akash felt (and probably all graduate students and researchers feel) when trying to publish his own work and finding himself spending too much time formatting and re-formatting rather than conducting research. With Wisedoc, you input your article content into the program and select a journal of interest. The program then formats your content to the journal’s specifications, which are approved by the respective journal’s editors, making publishing academic articles seamless. If you want to submit to another journal, it only takes a click to update the formatting. Follow this link for a short video on how Wisedoc works. And for those of us with dissertations to format, no worries – Wisedoc will have an option for that, too. Akash notes that Wisedoc would not have been possible without the help of OSU’s Advantage Accelerator program, which guides students, faculty, staff, and the broader community through the start-up process. Akash’s team won the Willamette Innovators Network 2018 Shark Tank competition, which earned them an entry into the Willamette Angel Conference, where Wisedoc won the Speed Pitch competition. If you are as eager as I am to check out Wisedoc, the launch is only a few months away in December 2018!

The soon-to-be Dr. Akash Kannegulla – his defense is only a month away – is the first person in decades from his small town on the outskirts of Hyderabad, India, to attend graduate school. Akash’s start in engineering was inspired by his uncle, an accomplished instrumentation scientist. Not knowing where to start, Akash adopted his uncle’s career choice as an engineer, but took the time to thoroughly explore his specialty options as an undergraduate. A robotics workshop at his undergraduate institution, Amrita School of Engineering in Bangalore, India, sparked Akash’s interest because of the hands-on nature of the science. Akash explored undergraduate research opportunities in the United States, landing a Nano Undergraduate Research Fellowship at the University of Notre Dame. During the summer of 2013, Akash studied photo-induced reconfigurable THz circuits and devices under the guidance of Dr. Larry Cheng and Dr. Lei Liu. Remarkably, Akash’s research resulted in a publication after only a four-week fellowship. After graduating with a Bachelor of Technology in Instrumentation, Akash decided to come to Oregon State University to continue working with Dr. Cheng as a PhD student.

After defending, Akash will be working at Intel in Hillsboro, as well as preparing for the launch of Wisedoc in December. And if that doesn’t sound like enough to keep him busy, Akash has plans for two more start-ups in the works.

Join us on Sunday, July 22 at 7 PM on KBVR Corvallis 88.7 FM or stream live to learn more about Akash’s nanotechnology research, start-up company, and to get inspired by this go-getter.