I, Roboethicist

This week we have Colin Shea-Blymyer, a PhD student from OSU’s new AI program in the departments of Electrical Engineering and Computer Science, joining us to talk about coding computer ethics. Advancements in artificial intelligence (AI) are exploding, and while many of us are excited for a world where our Roombas evolve into Rosies (à la The Jetsons) – some of these technological advancements require grappling with ethical dilemmas. How these AI technologies should make their decisions is a question with no settled answer, best left to be debated by the spirits of John Stuart Mill and Immanuel Kant. However, as a society, we are in dire need of a way to communicate ethics in a language that machines can understand – and this is exactly what Colin is developing.

Making An Impact: why coding computer ethics matters

A lot of AI is developed through machine learning – a process where software improves at a task by learning from data rather than being explicitly programmed. One example is image recognition software. Fed more and more photos of cats, an algorithm gets better at recognizing what is and isn’t a cat. However, these algorithms are not perfect. How will the program treat a stuffed animal of a cat? How will it categorize the image of a cat on a t-shirt? When the stakes are low, like in image recognition, these errors may not matter much. But for some technology, being correct most of the time isn’t sufficient. We would simply not accept a pacemaker that operates correctly most of the time, or a plane that avoids the mountains with just 95% certainty. Technologies that demand this level of safety require a different approach to software development, and many applications of AI – such as self-driving cars or nursing robots – will require high safety standards. This means society needs a language for communicating ethics to AI in a way it can understand precisely – with 100% accuracy, not 95%.
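To make the contrast concrete, here is a minimal sketch (emphatically not Colin’s actual system) of the difference between learned behavior and formally specified rules: an explicit, machine-readable constraint is checked against every candidate action before anything else happens, so the guarantee holds by construction rather than by statistics. The Action fields and the rules themselves are hypothetical stand-ins.

```python
from dataclasses import dataclass

@dataclass
class Action:
    name: str
    harms_human: bool
    obeys_order: bool

def permissible(action):
    # The hard constraint: an action that harms a human is never allowed.
    return not action.harms_human

def choose(actions):
    # Filter on the rule *before* any preference ordering, so the
    # guarantee holds with certainty, not with 95% confidence.
    allowed = [a for a in actions if permissible(a)]
    if not allowed:
        return None  # refuse to act rather than violate the rule
    # Among permissible actions, prefer one that obeys its orders.
    obedient = [a for a in allowed if a.obeys_order]
    return (obedient or allowed)[0]

options = [
    Action("continue ahead", harms_human=True, obeys_order=True),
    Action("swerve to the shoulder", harms_human=False, obeys_order=False),
]
print(choose(options).name)  # -> "swerve to the shoulder"
```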
The Trolley Problem is a famous ethical dilemma that asks: if you are driving a trolley and see that it is going to hit and kill five pedestrians, but you could pull a lever to reroute the trolley so it instead hits and kills one pedestrian – would you do it? While it seems obvious that we want our self-driving cars to not hit pedestrians, what is less obvious is what the car should do when its only choices are to hit and kill a pedestrian or to drive off a cliff, killing the driver. Although Colin isn’t tackling the impossible feat of solving these ethical dilemmas, he is developing the language we need to communicate ethics to AI with a precision that machine learning can’t achieve. So who does get to decide how these robots will respond to ethical quandaries? While not part of Colin’s research, he believes this question is best answered by the communities the technologies will serve.

Colin doing a logical proof on a whiteboard with a 1/10 scale autonomous vehicle in the foreground.

The ArchIve: a (brief) history of AI

AI had its first wave in the ’70s, when it was thought that logic systems (a way of communicating directly with computers) would run AI. Researchers also created perceptrons, which try to mimic a neuron in the brain by sorting data into binary classes – but, more importantly, have a very cool name. Perceptron! It sounds like a Spider-Man villain. However, logic and perceptrons turned out not to be particularly effective. There is a seemingly infinite number of possibilities and variables in the world, making it challenging to write a comprehensive set of rules. Further, when an AI’s rules are incomplete, it can wander into a situation it doesn’t know could even exist – and then it EXPLODES! Kind of. It runs into a contradiction, and by a rule of classical logic known as the Principle of Explosion, everything becomes provable and chaos ensues. These challenges with using logic to develop AI led to the first “AI winter”. A highly relatable moment in history, given the number of times I stop working and take a nap because a problem is too challenging.
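For the curious, here is a minimal sketch of the classic perceptron (a generic textbook version, not any specific historical implementation): a weighted sum of inputs pushed through a threshold, with the weights nudged whenever the prediction is wrong. It can learn the AND function below, but a single perceptron famously cannot learn XOR – one of the limitations that helped bring on that first AI winter.

```python
def train_perceptron(data, epochs=20, lr=0.1):
    # data: list of (inputs, label) pairs, with labels in {0, 1}
    n = len(data[0][0])
    w, b = [0.0] * n, 0.0
    for _ in range(epochs):
        for x, y in data:
            # Fire (output 1) if the weighted sum crosses the threshold.
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = y - pred  # -1, 0, or +1
            # Nudge the weights toward the correct answer.
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

# Learn AND: output 1 only when both inputs are 1.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(data)
for x, y in data:
    pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
    print(x, "->", pred)  # matches y for all four inputs
```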

The second wave of AI blew up in the ’80s and ’90s with the development of machine learning methods, and in the mid-2000s it really took off thanks to software that can handle matrix operations rapidly. (And if that doesn’t mean anything to you, that’s okay. Just know that it basically means speedy, complicated math could be achieved via computers.) Additionally, high computational power meant revisiting the methods of the ’70s: perceptrons could now be strung together to form a neural network – moving from binary categorization to complex recognition.

A bIography: Colin’s road to coding computer ethics

During his undergrad at Virginia Tech studying computer science, Colin ran into an ArachnId that left him bitten by a philosophy bug. This led to one of many philosophical dilemmas he’d enjoy grappling with: whether to focus his studies on computer science or philosophy? After reading I, Robot, he answered that question with a “yes”, finding a kindred spirit in the novel’s robopsychologist. He has been combining computer science with philosophy and ethics ever since: from his Master’s program, where he wove computer science into his philosophy lab’s research, to his current project developing a language to communicate ethics to machines with his advisor Houssam Abbas. Throughout his journey, however, Colin has become less of a robopsychologist and more of a roboethicist.

Want more information on coding computer ethics? Us too. Be sure to listen live on Sunday, April 17th at 7PM on 88.7FM, or download the podcast if you missed it. Want to stay up to date with the world of roboethics? Find more from Colin at https://web.engr.oregonstate.edu/~sheablyc/.

Colin Shea-Blymyer: PhD student of computer science and artificial intelligence at Oregon State University

This post was written by Bryan Lynn.

The rigamarole of RNA, ribosomes, and machine learning

Basic biology and computer science is probably not an intuitive pairing of scientific disciplines – not as intuitive as, say, biology and chemistry (often referred to as biochem). However, for Joseph Valencia, a third year PhD student at OSU, the bridge between these two disciplines is a view of life at the molecular scale as a computational process, in which cells store, transmit, and interpret the information necessary for survival.

Think back to your 9th or 10th grade biology class and you will (probably? maybe?) vaguely remember learning about DNA, RNA, proteins, and ribosomes. In case your memory is a little foggy, here is a short (and very simplified) recap of the basic biology. DNA is the information storage component of cells. RNA, which is the focus of Joseph’s research, is the messenger that carries information from DNA to control the synthesis of proteins. This process is called translation, and ribosomes are required to carry it out. Ribosomes are complex molecular machines, and each of our cells contains many of them. Their job is to interpret the RNA: they attach themselves to the RNA, read the transcript of information it contains, and produce a protein. The protein folds into a specific 3D shape, and that shape determines its function. What do proteins do? Basically control everything in our bodies! Many proteins are enzymes, which control everything from muscle repair to eye twitching. The amazing thing about this process is that it is not specific to humans – it is a fundamental part of basic biology that occurs in basically every living thing!
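If it helps to see the idea in miniature, here is a toy sketch of what a ribosome does during translation: read the transcript three letters (one codon) at a time and map each codon to an amino acid. Only a handful of real codons are included here, and the example transcript is made up.

```python
# A few real codon-to-amino-acid assignments (the full table has 64 entries).
CODON_TABLE = {
    "AUG": "Met",  # also the start codon
    "UUU": "Phe", "GGC": "Gly", "UGG": "Trp",
    "UAA": "STOP", "UAG": "STOP", "UGA": "STOP",
}

def translate(rna):
    protein = []
    # Step through the transcript codon by codon, like a ribosome.
    for i in range(0, len(rna) - 2, 3):
        amino = CODON_TABLE.get(rna[i:i + 3], "???")
        if amino == "STOP":
            break  # the ribosome releases the finished protein here
        protein.append(amino)
    return protein

print(translate("AUGUUUGGCUGGUAA"))  # ['Met', 'Phe', 'Gly', 'Trp']
```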

An open reading frame (ORF) is a stretch of nucleotides beginning with a start codon and ending with a stop codon. Ribosomes bind to RNA transcripts and translate certain ORFs into proteins. The Kozak sequence (bottom right, from Wikipedia) depicts the nucleotides that commonly occur around the start codons of translated ORFs.

So now that you are refreshed on your high school biology, let us tie all of these ‘basics’ to what Joseph does for his research. Joseph’s research focuses on RNA, which can be broken down into two main groups: messenger RNA (mRNA) and non-coding RNA. mRNA is what a ribosome translates into a protein, whereas with long non-coding RNA, the ribosome decides not to translate it. While we are able to distinguish between the two types of RNA, we do not fully understand how a ribosome decides to translate one RNA (the mRNA) and not another (the long non-coding RNA). That’s where Joseph and computer science come in – Joseph is building a machine learning model to try and better understand this ribosomal decision-making process.

Machine learning, a field within artificial intelligence, can be defined as any approach that creates an algorithm or model by using data rather than programmer-specified rules. Lots of data. Modern machine learning models tend to keep learning and improving as more data is fed to them. While there are many different types of machine-learning approaches, Joseph is interested in one called natural language processing. You are probably pretty familiar with an example of natural language processing at work – Google Translate! The model that Joseph is building is in fact not too dissimilar from Google Translate, or at least the idea behind it; except that instead of taking English and translating it into Spanish, Joseph’s model takes RNA and translates (or doesn’t translate) it into a protein. In Joseph’s own words, “We’re going through this whole rigamarole [aka his PhD] to understand how the ins [RNA & ribosomes] create the outs [proteins].”
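To give a flavor of the analogy (this is an illustration, not Joseph’s actual model): before a sequence model can “read” RNA, the transcript has to be chopped into word-like tokens. One common trick is overlapping k-mers, which play the role for RNA that words play for Google Translate.

```python
def kmer_tokenize(rna, k=3):
    # Slide a window of length k across the sequence, one base at a time,
    # turning the transcript into a list of overlapping "words".
    return [rna[i:i + k] for i in range(len(rna) - k + 1)]

tokens = kmer_tokenize("AUGGCUUAA")
print(tokens)  # ['AUG', 'UGG', 'GGC', 'GCU', 'CUU', 'UUA', 'UAA']
# A sequence model can consume these tokens the same way a translation
# model consumes the words of an English sentence.
```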

A high-level diagram of Joseph’s deep learning model architecture.

But it is not as easy as it sounds. The work is full of complexities, because the very complexity that gives machine learning models their power also makes it hard to interpret why a model is doing what it is doing. Even a high-performing machine learning model may not capture the exact biological rules that govern translation, but successfully interpreting its learned patterns can help in formulating testable hypotheses about this fundamental life process.

To hear more about how Joseph is building this model, how it is going, and what brought him to OSU, listen to the podcast episode! Also, you can check out Joseph’s personal website to learn more about him & his work!

Microbial and biochemical community dynamics in low-oxygen Oregon waters

Much like Oregon’s forests experience wildfire seasons, the waters off the Oregon coast experience what are called “hypoxia seasons”. During these periods, which occur in the summer, winds from the north drive upwelling of nutrient-rich water along the eastern boundary current off the Oregon coast. While that might sound like a good thing, the upwelling feeds a bloom of microscopic organisms such as phytoplankton, which consume these nutrients and then die off. As they die off, they sink and are decomposed by marine microorganisms. This decomposition removes oxygen from the water, creating what’s called an oxygen minimum zone, or OMZ. These OMZs can span thousands of square miles. While mobile organisms such as fish can escape these areas and relocate, place-bound creatures such as crabs and bottom-dwelling fish can perish in these low-oxygen zones. While hypoxia seasons can occur due to natural phenomena, stratification of the water column due to other factors, such as climate change, can increase their frequency and severity.

2021 was one of the worst years on record for hypoxic waters off the western coast of the United States. A major contributing factor was the extremely early start to the upwelling, triggered by strong winds. Dissolved oxygen was low enough, and ocean acidity high enough, to be consistent with conditions that can lead to dead zones – and that is exactly what happened. Massive die-offs of crabs are concerning, as the harvesting of Dungeness crab is one of the most lucrative fishing industries in the state. Other species and organisms move into shallower waters, disturbing the delicate balance of coastal ecosystems. From the smallest microbe to the largest whale, almost every part of the coast can be affected by hypoxia season.

Our guest this week is Sarah Wolf, a fourth year PhD candidate in the Department of Microbiology here at Oregon State. Sarah, who is co-advised by Dr. Steve Giovannoni and Dr. Francis Chan, studies how microbes operate in these OMZs. Her work centers on microbial physiology and enzyme kinetics, and how these change over time and at varying oxygen concentrations. To do this, she spent her second year developing a mesocosm – a closed experimental system for studying a natural environment – that replicates conditions found in low-oxygen waters.

Sarah Wolf, a fourth year PhD Candidate in the department of Microbiology, in her lab

Her experiments involve hauling hundreds of liters of ocean water from the Oregon coast back to her lab in Nash Hall, where she filters and portions it into different jugs hooked up to a controlled gas delivery system that allows her to precisely control the concentration of oxygen in the mesocosm. Over a period of four months, Sarah samples the water in these jugs to look at microbial composition, carbon levels, oxygen respiration rates, cell counts, and other measures of the biological and chemical dynamics occurring in low oxygen. Organic matter gets transformed by different microorganisms that “eat” different pieces of it using enzymes – but many of the enzymes that break down large, complex molecules require oxygen, so in low-oxygen conditions organic matter can go un-degraded and accumulate. This is the kind of phenomenon Sarah is studying in these mesocosms, which her lab affectionately refers to as the “Data Machine”.

Sarah’s journey into science has been a little nontraditional. A first generation college student, she started out her education as a political science major at Montana State before moving to the University of the Virgin Islands for a semester abroad. At the time she wasn’t really sure how to get into research or science as a career. During this semester her interest in microbiology was sparked during an environmental science course which led to her first research experience, studying water quality in St. Thomas. This experience resulted in an award-winning poster at a conference, and prompted Sarah to change her major to Microbiology and transfer to California State University Los Angeles. Her second research experience was very different – an internship at NASA’s Jet Propulsion Laboratory studying cleanroom microbiology, which resulted in a publication identifying two novel species of Bacillus isolated from the Kennedy Space Center. Ultimately Sarah’s journey brought her here to Oregon State, which she was drawn to because of its strong marine microbiology research program.

Sarah works on the “Data Machine”

But Sarah’s passion for science doesn’t stop at the lab: during the Covid-19 pandemic, she began creating and teaching lessons for children stuck at home. During this time she taught over 60 kids remotely, with lessons about microbes ranging from marine microbiology to astrobiology and even how to create your own sourdough starter at home. Eventually she compiled these lessons onto her website where parents and teachers alike can download them for use in classrooms and at home. She also began reviewing children’s science books on her Instagram page (@scientist.sarahwolf), and inviting experts in different fields to participate in livestreams about books relating to their topics. A practicing Catholic, she also shares thoughts and resources about religion and science, especially topics surrounding climate science. With around 12k followers, Sarah’s outreach on Instagram has certainly found its audience, and will only continue to grow. 

If you’re curious about microbes in low oxygen conditions, what it’s like to be a science educator and social media influencer, or want to hear more about Sarah’s journey in her own words, tune in on March 13th to catch the live episode at 7 PM PST on 88.7 FM Corvallis, online at https://kbvrfm.orangemedianetwork.com – or you can catch this episode after the show airs wherever you get your podcasts!

Imaging nuclear fallout with a camera and a scintillating crystal

Our guest this week, Dr. Ari Foley, is a recent (July 2021) OSU graduate from the School of Nuclear Science and Engineering. For her PhD research, she developed a rapid imaging method for post-detonation nuclear forensics. While methods for this work already exist, many of them are time- and material-intensive. The goal of Ari’s work was therefore to develop a method that could inform optimized destructive analysis of samples after the detonation of a nuclear weapon, with a particular focus on reducing the amount of imaging time required. Not only did Ari accomplish this task, but the system she developed can capture the spatial distribution of radiation emitted from an object in the same exposure as a traditional photograph of the object being analyzed (see image below). How in the world did Ari do this? Read below for a short synopsis, or even better, listen to the episode here!

A core component of Ari’s system is an electron-multiplying charge-coupled device, also known as an EMCCD. The CCD part is essentially a normal camera sensor, while the EM part multiplies the signal collected from whatever the camera is pointed at. Ari rigged an inorganic scintillation crystal to the EMCCD; it sits in a 3D-printed holder just in front of the camera. When the crystal is held in close proximity to radioactive fallout material from a detonation, the radiation interacts with the crystal, causing it to emit light. This light is proportional to the amount of energy imparted within the crystal. The EM part of the EMCCD kicks in as the image is taken, amplifying the faint light from the crystal into a high-intensity image. The process needs to occur in a light-tight box, but the system is mobile, meaning it can easily be taken into the field and used directly at a nuclear detonation site to measure the intensity of radiation from fallout material.
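As a back-of-the-envelope illustration of the principle (with made-up numbers – this is not Ari’s calibration), the proportionality between scintillation light and deposited energy is what lets a calibrated camera image double as an energy map:

```python
# Hypothetical calibration constants for an imagined scintillator setup.
LIGHT_YIELD = 38.0     # photons emitted per keV of deposited energy
COLLECTION_EFF = 0.20  # fraction of emitted photons the camera collects

def pixel_to_energy_kev(photon_count):
    # Invert the chain: photons counted -> photons emitted -> energy deposited.
    return photon_count / (LIGHT_YIELD * COLLECTION_EFF)

# A tiny "image" of photon counts per pixel (made up for illustration).
image = [[760, 0], [1520, 76]]
energy_map = [[pixel_to_energy_kev(px) for px in row] for row in image]
print(energy_map)  # [[100.0, 0.0], [200.0, 10.0]]
```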

Ari spent the last three years of her PhD in Idaho at the Idaho National Laboratory (INL), one of the leading nuclear research labs in the USA, which has close ties with OSU. In fact, Ari was one of two students in the inaugural class of INL Graduate Fellows, which enabled her to conduct this work while working full-time at the lab. Ari’s career could have gone down a very different path, though: she had always wanted to be an arts student or pursue a career in human rights. But during a summer school experience in her high school years, Ari attended a class on Indigenous Peoples and the United Nations. During this class, the students took a trip to the United Nations General Assembly Building in New York, which hosts a statue recovered from Nagasaki. The statue is of a woman holding a lamb, which from the front looks completely normal. However, when you walk around to the back, the statue is completely charred and scarred – a consequence of the atomic bomb. The same class presented case studies of radiation contamination on tribal reservations in the USA. Seeing and learning these things really riled Ari up at the time: while radiation had intrigued her in chemistry class, she was suddenly confronted by the fact that radiation contamination was an actual, ongoing world issue.

Listen to the podcast episode here to learn more about the nitty-gritty of how Ari developed her nuclear forensic system, how she protected herself from radiation exposure in the lab, and her road to OSU!

How to help rusty plants

Plants can get rusty. No joke! There is a fungal pathogen called rust which can cover the leaves of plants. This is problematic given that leaves are the site where photosynthesis occurs, the process whereby plants use sunlight to synthesize foods from carbon dioxide and water in order to grow. While a plant may still be able to photosynthesize if the leaves contain just a little bit of rust, the more and more rust spreads across the leaves, the less and less surface area there is for photosynthesis to occur. When you get rust on a metallic item, there are several home remedies you can try to remove the rust, such as baking soda or a vinegar bath. But do plants have rust-removal options too? Possibly…and it’s what our guest this week, PhD student Maria-Jose Romero-Jimenez (or Majo), is trying to figure out.

Majo, who is in her second year of graduate school in the Department of Botany and Plant Pathology here at OSU, is using black cottonwood as her study species. Black cottonwood is a Pacific Northwest native with many uses, including in Oregon’s paper industry. Recently, the Department of Energy has also listed black cottonwood as a plant of interest for its potential use as a biofuel. As you can imagine, with this much large-scale interest in black cottonwood, there is also huge interest in understanding how it is affected by disease and pathogens, and what can potentially be done to prevent pathogens, such as rust, from spreading.

Yeast diversity panel

Fortunately, it seems like black cottonwood has a natural ally that helps it fend off rust – yeast! Majo’s main research goal is to figure out which yeast colonies are able to prevent rust infestation of black cottonwood plants. While this task may sound relatively straightforward, it sure isn’t. Majo’s work involves both field and lab work, and it started in the fall of 2020, when Majo isolated yeast colonies from a bunch of leaves collected in the PNW (primarily Washington and Oregon). This work resulted in almost 400 yeast colonies, from which Majo had to select a subset to grow in the lab. Meanwhile, baby black cottonwood plants needed to be propagated, potted, and cared for to ensure that Majo would have enough grown plants for her first round of greenhouse experiments. These experiments involved a series of treatments with different combinations of yeast colonies applied to the black cottonwood plants before they were sprayed with rust, to see how plants in the different yeast treatments would fare.

Curious to know what the results of Majo’s first round of experiments were, and what the next steps are? You can download the episode anywhere you get your podcasts! Also, check out Majo’s Instagram (@fungibrush) for some educational videos on how she conducts her research (as well as a lesson in Spanish!).

Trusting Your Gut: Lessons in molecular neuroscience and mental health

The bacteria in your gut can talk to your brain.

No, really.

It might sound like science fiction, but you’ve probably heard the phrase “gut-brain axis” used in recent years to describe this phenomenon. What we call the “gut” actually refers to the small and large intestines, where a collection of microorganisms known as the gut microbiome resides. In addition to the microbes that inhabit it, your gut contains around 500 million neurons, which connect to your brain through bidirectional nerves – the biggest of which is the vagus nerve. Bacteria might be able to interact with specialized sensory cells within the gut lining and trigger neuronal firing from the gut to the brain.

Our guest this week is Caroline Hernández, a PhD student in the Maude David Lab in the Department of Microbiology, and she is studying exactly this phenomenon. While the idea that the gut and the brain are connected is not exactly new (ever heard the phrase “a gut feeling” or felt “butterflies” in your gut when you’re nervous?), there still isn’t much known about how exactly this works on a molecular level. This is what Caroline’s work aims to untangle, using an in vitro approach (which means outside of a living organism – in this case, cells in a petri dish): if you could grow both the sensory gut cells and neurons in the same petri dish, and then expose them to gut bacteria, what could you observe about their interactions?

Caroline Hernández in her lab at Oregon State, using a stereo microscope to identify anatomical structures in a mouse before dissecting out a nerve bundle

The answer to this question could tell us a lot about how the gut-brain axis works on a molecular level, and could help researchers understand the mechanisms by which the gut microbiome can possibly modulate behavior, mood, learning, and cognition. This could have important implications down the line for how we conceptualize and potentially treat mood and behavioral disorders. Some mouse studies have already shown that mice treated with the probiotic Lactobacillus rhamnosus display reduced anxiety-like and depressive behaviors, for example – but exactly how this works isn’t really clear.

The challenges of in vitro research

Before these mechanisms can really be untangled, there are several challenges that Caroline is working on solving. The biggest one is just getting the cells to grow at all: Caroline and her team must first carefully extract specific gut sensory tissue and a specific ganglion (which is a blob of neurons) from mice, a delicate process that requires the use of specialized tools and equipment. Once they’ve verified that they have the correct anatomy, the tissues are moved into media, a liquid that contains specialized nutrients to help provide the cells with the growth factors they need to stay alive. Because this is very cutting-edge research, Caroline’s team is among the first in the world to attempt this technique – meaning there is a lot of trial and error and not a great amount of resources out there to help. There have been a number of hurdles along the way, but Caroline is no stranger to meeting challenges head-on and overcoming them with incredible resilience.

From art interactions to microbial interactions

Her journey into science started in a somewhat unexpected way: Caroline began her undergraduate career as a studio art major in community college. Her art focused on interactivity, and she was especially interested in how the person perceiving the art could interact with and explore it. Eventually she decided that while she was quite skilled at it, art was not the career path she wanted to pursue, so she switched into science, beginning her Bachelor of Science in molecular and cellular biology at the University of Illinois Urbana-Champaign.

During her undergraduate degree, a mental health crisis prompted Caroline to file for a medical withdrawal from her program. The break was much needed and allowed her to focus on taking care of herself and her health before returning to the rigorous and intense program three years later. Caroline is now a strong supporter of mental health resource awareness – in this episode of Inspiration Dissemination she will describe some of the challenges and barriers she faced when returning to finish her degree, and some of the pushback she faced when deciding to pursue a PhD. 

“Not everyone was supportive,” she says. “I didn’t receive great encouragement from some of my advisors.”

Where she did find support and community was in her undergraduate research lab. Her work in this lab on the effects of diet and the microbiome on human health gave her the confidence to pursue graduate school, demonstrating that she was more than capable of engaging in independent research. In particular Caroline recalls her mentor Leila Shinn, a PhD student at the time in that lab, who had a profound impact on her decision to apply to graduate programs.

Tune in on Feb 27th to hear the rest of Caroline’s story and what brought her to Oregon State in particular. You can listen live at 7 PM PST on 88.7 FM Corvallis, online at https://kbvrfm.orangemedianetwork.com, or you can catch the episode after the show airs wherever you get your podcasts. 

If you are an undergraduate student or graduate student at Oregon State University and are experiencing mental health struggles, you’re not alone and there are resources to help. CAPS offers crisis counseling services as well as individual therapy and support and skill-building groups. 

Home Economics as a Science 

Milam, ca. 1919. Courtesy of Oregon Digital.

At OSU there is a building called Milam Hall. It sits across the quad from the Memorial Union and houses many departments, including the School of History, Philosophy, and Religion, where our guest this week, History and Philosophy of Science M.A. student Kathleen McHugh, is housed. The building is certainly showing its age, with a perpetually leaky roof and well-worn stairwells. But despite this, embedded in some of its classrooms are hints of its former glory. It was once the location of the School of Home Economics, and was posthumously named after its longstanding dean, Ava B. Milam. While no books have been written about Milam, aside from her own autobiography, her story is one worth telling, and McHugh is doing just that with her M.A. thesis, in which she explores Milam’s deliberate actions to make home economics a legitimate scientific field.

Home Economics students cooking. Courtesy of Industrial-Arts Magazine.

During Milam’s tenure, home economics was a place where women could get an education and, most importantly, where they would not interfere with men’s scientific pursuits. It necessarily othered women and excluded them from science. But McHugh argues that Milam actively worked to shape home economics so that it was perceived as a legitimate science rather than a field of educational placation. And, as McHugh demonstrates through her research, it is in part due to Milam’s work that women are able to study science today without prejudice (well, for the most part – obviously there is still a long way to go before there is full equality).

But exactly how Milam legitimized a field that – let’s be honest – probably gives readers flashbacks to baking a cake in middle school or learning how to darn a sock is what McHugh explores in her thesis. Through meticulous archival research, and despite COVID hurdles, McHugh has created a compelling and persuasive narrative of Milam’s efforts to transform home economics into a science.

Guests waiting outside a tearoom at the 1915 San Francisco World’s Fair run by Home Economics students. Courtesy of Industrial-Arts Magazine.

Listen this week and learn how a cafe at the 1915 San Francisco World’s Fair and a house near campus that ran a nearly 50 year adoption service relate to Milam and her pioneering work. If you missed the live show, listen to this episode wherever you get your podcasts.

Mighty (a)morphin’ power metals

This week we have a PhD candidate from the materials science program, Jaskaran Saini, joining us to discuss his work on the development of novel metallic glasses. But first, what exactly is a metallic glass, you may ask? Metallic glasses are metals or alloys with an amorphous structure: they lack the crystal lattices and crystal defects commonly found in standard crystalline metals. Forming a metallic glass requires extremely high cooling rates. Well, how high? A thousand to a million Kelvin per second! That high.

The idea here is that the speed of cooling affects the atomic structure – and this idea is neither new nor limited to metals! For example, the rocks granite, basalt, pumice, and obsidian all have a similar composition, but different cooling times. Rapid cooling is what gives obsidian its amorphous structure, which means we could probably just start referring to it as rocky glass. But the uses of metallic glass extend far beyond those of rocks.

(Left) Melting the raw materials inside the arc-melter to make the alloy. The bright light visible in the image is the plasma arc, which reaches up to 3500 °C. The ring the arc is focused on is the molten alloy.
(Right) A metallic glass sample as it comes out of the arc-melter; the arc-melter can be seen in the background.
Close-ups of metallic glass buttons.

Why should we care about metallic glass? 

Metallic glasses are fundamentally cool, but in case that isn’t enough to pique your interest, they also have superpowers that’d make Magneto drool. They have 2-3x the strength of steel, are incredibly elastic, have very high corrosion and wear resistance, and have a mirror-like surface finish. So how can we apply these super metals to science? Well, NASA is already on it and is beginning to use metallic glasses as a gear material for motors. While the Curiosity rover expends 30% of its energy and 3 hours heating and lubricating its steel gears before it can operate, Curiosity Jr. won’t have to worry about that with metallic glass gears. NASA isn’t the only one hopping onto the metallic glass train. Apple is trying to use these scratch-proof materials in iPhones, the US Army is using high-density hafnium-based metallic glasses for armor-penetrating military applications, and some professional tennis and golf players have even used these materials in their rackets and golf clubs. But it took a long time to get metallic glasses to the point where they’re now being used in rovers and tennis rackets.

Metallic glass: a history

Metallic glasses first appeared in the 1960s, when Jaskaran’s academic great-grandfather (that is, his advisor’s advisor’s advisor), Pol Duwez, made them at Caltech. To achieve this special amorphous structure, a droplet of a gold-silicon alloy was cooled at a rate of over a million Kelvin per second, with the end result being an approximately quarter-sized foil of metallic glass thinner than a strand of hair. Fast forward to the ’80s, and researchers began producing larger metallic glasses. By the late ’90s and early 2000s, the thickness of the biggest metallic glass produced had already exceeded 1000x the original foil. However, with great size comes greater difficulty: if the metallic glass is too thick, it can’t cool fast enough to achieve an amorphous structure! Creating larger pieces of metallic glass has proven extremely challenging – and is therefore a great goal for graduate students and PIs interested in taking on this challenge.

Currently, the largest pieces of metallic glass are around 80 mm thick; however, they are based on precious or scarce metals such as palladium, silver, gold, platinum, and beryllium. This makes them impractical for multiple reasons. First is the more obvious one: cost. Second, given the detrimental impact of mining such scarce metals, efforts to minimize dependence on them can have a great positive impact on the environment.

World records you probably didn’t know existed until now

As part of Prof. Donghua Xu’s lab, Jaskaran is working on developing large-sized metallic glasses from cheaper metals, such as copper, nickel, aluminum, zirconium, and hafnium. It’s worth noting that although Jaskaran’s metallic glasses typically consist of at least three metallic elements, his research mainly focuses on producing metallic glasses based on copper and hafnium (these two metals make up the majority of each alloy). Not only has Jaskaran been wildly successful in creating glassy alloys from these elements, but he has also set TWO WORLD RECORDS. The previous world record for a copper-based metallic glass was 25 mm, which he surpassed with the creation of a 28.5 mm metallic glass. As for hafnium, the previous world record was 10 mm, which Jaskaran almost doubled with a casting diameter of 18 mm. And mind you, these alloys contain no rare-earth or precious metals, so they are cost-effective, have incredible properties, and are completely benign to the environment!

The biggest copper-based metallic glass ever produced (world record sample).

Excited for more metallic glass content? Us too. Be sure to listen live on Sunday February 6th at 7PM on 88.7FM, or download the podcast if you missed it. Want to stay up to date with the world of metallic glass? Follow Jaskaran on Twitter, Instagram, or Google Scholar. We also learned that he produces his own music, and we listened to “Sephora”. You can find him on SoundCloud under his artist name, JSKRN.

Jaskaran Saini: PhD candidate from the materials science program at Oregon State University.

This post was written by Bryan Lynn and edited by Adrian Gallo and Jaskaran Saini.

Nuclear: the history, present, and future of the solution to the energy crisis

In August of 2015, the Animas River in Colorado turned yellow almost overnight. Approximately three million gallons of toxic waste water were released into the watershed following the breaching of a tailings dam at the Gold King Mine. The acidic drainage led to heavy metal contamination in the river reaching hundreds of times the safe limits allowed for domestic water, having devastating effects on aquatic life as well as the ecosystems and communities surrounding the Silverton and Durango area. 

Our guest this week, Nuclear Science and Engineering PhD student Dusty Mangus, counts this close-to-home environmental disaster as a critical moment in inspiring what would become his pursuit of an education and career in engineering. “I became interested in the ways that engineering could be used to develop solutions to remediate such disasters,” he recalls.

Following his BS in Engineering from Fort Lewis College in Durango, Colorado, Dusty moved to the Pacific Northwest to pursue his PhD in Nuclear Engineering here at Oregon State, where he works with Dr. Samuel Briggs. His research applies engineering to one of the biggest problems of our age: energy – more specifically, nuclear energy. Dusty’s primary focus is on liquid sodium as an alternative coolant for nuclear reactors, and on the longevity of the materials used to construct vessels for such reactors. But before we can get into what that means, we should define a few things: what is nuclear energy? Why is it a promising alternative to fossil fuels? And why does it have such an undeserved bad rap?

Going Nuclear

Nuclear energy comes from breaking apart the nuclei of atoms. The nucleus is the core of the atom and holds an enormous amount of energy. Breaking nuclei apart, a process called fission, can be used to generate electricity. Nuclear reactors are machines designed to control nuclear fission and use the heat it generates to power turbines and generators, which create electricity. Nuclear reactors typically use the element uranium as the fuel source for fission, though other elements such as thorium could also be used. The heat created by fission warms the coolant surrounding the reaction – typically water – which then produces steam. The United States alone has more than 100 nuclear reactors, which produce around 20% of the nation’s electricity; however, the majority of the electricity produced in the US still comes from fossil fuels. This extremely potent energy source almost fully powers some nations, including France and Lithuania.

One of the benefits of nuclear energy is that, unlike fossil fuels, nuclear reactors do not produce carbon emissions that contribute to the accumulation of greenhouse gases in the atmosphere. In addition, unlike other alternative energy sources, nuclear plants can support the grid 24/7: extreme weather or lack of sunshine does not shut them down. They also have a smaller footprint than, say, wind farms.

However, despite its benefits and usefulness, nuclear energy has a bit of a sordid history, which has led to a persistent – albeit fading in recent years – negative reputation. While atomic radiation and nuclear fission were researched and developed starting in the late 1800s, many of the advancements in the technology were made between 1939 and 1945, when development was focused on the atomic bomb. First-generation nuclear reactors were developed in the 1950s and ’60s, and several of these reactors ran for close to 50 years before being decommissioned. It was in 1986 that the infamous Chernobyl nuclear disaster occurred: a flawed reactor design led to a steam explosion and fires that released radioactive material into the environment, killing several workers in the days and weeks following the accident as a result of acute radiation exposure. The incident has had a decades-long impact on the perception of nuclear reactor safety, despite its significant effect on improving reactor safety design.

Nuclear Reactor Safety

Despite the perception formed by the events of Chernobyl and other nuclear reactor meltdowns, such as the 2011 disaster in Fukushima, Japan, nuclear energy is actually one of the safest energy sources available to mankind, according to a 2012 Forbes article that ranked energy sources by deaths per unit of energy produced. Perhaps unsurprisingly, coal tops the list, with a global average of 100,000 deaths per trillion kilowatt-hours. Nuclear energy is at the bottom of the list with only about 0.1 deaths per trillion kilowatt-hours, making it even safer by this metric than natural gas (4,000 deaths), hydro (1,400 deaths), and wind (150 deaths). Modern nuclear reactors are built with passive, redundant safety systems that help to avoid the disasters of their predecessors.

Dusty’s research helps to address one of the issues surrounding nuclear reactor safety: coolant material. Typical reactors use water as a coolant: water absorbs the heat from the reaction and it then turns to steam. Once water turns to steam at 100 degrees Celsius, the heat transfer is much less efficient – the workaround to this is putting the water under high pressure, which raises the boiling point. However, this comes with an increased safety risk and a manufacturing challenge: water under high pressure requires large, thick metal vessels to contain it.

Sodium, infamous for its role in the inorganic compound known as salt, is actually a metal. In its liquid phase it resembles mercury: a silvery, flowing metallic liquid. Liquid sodium can be used as a low-pressure, safer coolant that transfers heat efficiently and can keep a reactor core cool without requiring external power. The boiling point of liquid sodium is around 900 degrees Celsius, whereas a nuclear reactor operates in the range of around 300-500 degrees Celsius – meaning that sodium-cooled reactors can operate within a much safer range of temperatures at atmospheric pressure compared to reactors that use conventional water cooling systems.

Dusty’s research is helping to push nuclear reactor efficiency and safety into the future. Nuclear energy promises a safer, greener solution to the energy crisis, providing a potent alternative to current fuel sources that generate greenhouse gas emissions. Utilized efficiently, nuclear energy could even power the sequestration of carbon dioxide from the atmosphere, leading to a cleaner, greener future.

Did we hook you on nuclear energy yet? Tune in to the show or catch the podcast to learn more about the history, present and future of this potent and promising energy source!  Be sure to listen live on Sunday January 30th at 7PM on 88.7FM or download the podcast if you missed it.

Water Woes of the West

Water resources in the western United States are at a turning point. Droughts are becoming more common, and as temperatures rise due to climate change, more water will be needed to sustain the current landscape. The ongoing issues in the Klamath River Basin, a watershed crossing southern Oregon and northern California, are a case study of how the West will handle future water scarcity. Beyond the limited supply of water itself, deciding how to manage this dwindling resource is no easy feat. Too much water has been promised to too many stakeholder groups, resulting in interpersonal conflict, distrust, and litigation. Our guest this week is Hannah Whitley, a PhD Candidate in Rural Sociology at Pennsylvania State University and a Visiting Scholar in the School of Public Policy at Oregon State University. Hannah grew up on a beef ranch in a small southwestern Oregon town, so she knows some of these issues all too well. Hannah is investigating how governance organizations work together to allocate water in the Upper Klamath Basin, and how to tell the story of what water means to different stakeholder groups. By observing countless hours of public meetings, having one-on-one conversations with community members, and incorporating a novel research method called photovoice, she hopes to understand what can make water governance processes successful – because the current situation is untenable for everyone involved.

Klamath Project Canal B looking southeast toward Merrill and Malin, Oregon. The Canal, which is typically full, moves water from Upper Klamath Lake to farms and ranchers who are part of the Klamath Project. The canal has been dry since October 2020. Taken September 2021.

How we got here

Prior to the 1800s-era Manifest Destiny movement, the area known today as the Upper Klamath Basin was solely inhabited by the Klamath Tribes (including the Klamath, Modoc, and Yahooskin-Paiute people). At the time, Upper Klamath Lake was at least four times its current size, and c’waam (Lost River suckers) and koptu (shortnose suckers) thrived in abundance. The 1864 Klamath Treaty, ratified in 1870, officially recognized the Klamath Tribes as sovereigns in the eyes of the federal government. Treaties are especially powerful arrangements with the federal government, akin to international agreements between nations. These agreements are generally considered to be permanent laws – or at least that’s what the tribes were told.

As part of the conditions of the Klamath Tribes Treaty, the tribes retained hunting, fishing, and water rights on 1.5 million acres of land, but ceded control of 22 million acres to the federal government. Those expropriated lands were given to westward settlers who took advantage of the 1862 Homestead Act. The 1906 Reclamation Project drained much of Upper Klamath Lake, leaving behind soils that are nutrient-dense and thus highly valuable. An additional homesteading program associated with the 1902 Reclamation Act prioritized the allocation of reclaimed federal land to veterans following World War I (there is ongoing litigation over whether these settlers hold water rights as well, or just land rights). These land deeds have been passed down through families over time, though many mid-twentieth-century homesteaders opted to sell their land during the 1980s Farm Crisis.

The Klamath Tribes’ unceded lands were not contested during the intervening years. In the mid-1950s, however, the U.S. government used the 1954 Termination Act to nullify the Klamath Tribes’ 1864 Treaty. Although the Klamath Tribes were “one of the strongest and wealthiest tribal nations in the US,” one result was the loss of the tribes’ remaining land and management rights. In 1986 their status as a federally recognized tribe was restored; however, no land was returned. Soon after the Klamath Tribes were federally recognized (again), two species of fish that only spawn in the Upper Klamath Lake area were listed as endangered species. This gave both the c’waam and koptu new legal protections, though the fish have always held deep cultural significance for the Klamath Tribes.

Sump 1B at Tule Lake National Wildlife Refuge is a 3,500-acre wetland and an important nesting, brood rearing, and molting area for a large number of waterfowl. Sump 1B has been dry since October 2020.

Where are we now

The Klamath River Basin is said to be one of the most complicated areas in the world due to the watershed’s transboundary location and the more than 60 different parties who have some interest in the Basin’s water allocation, including federal agencies, the states of California and Oregon, counties, irrigation districts, small farmers, large farmers, ranchers, and tribal communities. The Klamath Tribes play an active role in the management of water Basin-wide, although final governance decisions are made by state and federal agencies including the Bureau of Reclamation, Fish and Wildlife Service, and state departments of environmental quality.

Currently, the Upper Klamath Basin is occupied by multi-generation farmers and ranchers on lands that are exceedingly favorable for agricultural production. Some families have accumulated significant portions of land since the 1900s, while others remain small-acreage farmers. Through farm consolidation driven by twentieth-century economic distress, a handful of families succeeded in purchasing adjacent and nearby land parcels as they were sold over the last hundred years. The result is that these few well-resourced families have disproportionate control of the area’s agricultural and natural resources compared to smaller-scale farms.

A variety of crops are under production, such as potatoes for Frito-Lay, Kettle Foods, and In-N-Out Burger, as well as peppermint for European teas and alfalfa used to feed cattle in China and the Willamette Valley. Regardless of the crop, as temperatures have risen and drought conditions have worsened, Basin farmers and ranchers need more water each and every season. And it’s typically the more established farms that have a bigger say in how Upper Basin water (or the lack thereof) and drought support programs are managed, regularly leaving smaller farms frustrated with decision-making processes. In addition to the seasonal droughts keeping lake levels low, stagnant water, summer sunshine, and nutrient runoff contribute to algae proliferation in the Upper Basin that decreases the survival rate of the endangered fish. Unfortunately, there is simply not enough water to continue with the status quo.

Near Tulelake, California. September 2021.

How are we moving forward

How do you balance all* of these competing interests through a collaborative governance model? (*We haven’t even mentioned the dams, the downstream Yurok and Karuk tribes relying on water for salmon populations, the Ammon Bundy connection, or the State of Jefferson connection – read the multi-part series in The Herald for a deeper dive.) There needs to be a process where everyone is able to contribute and understand how these decisions will be made, so that the decisions will be accepted in the future. Unfortunately, little research has been done in this area, even though new climate adaptation policies are increasingly in demand.

This is the ongoing work Hannah Whitley is conducting for her dissertation: how are stakeholders engaged in water governance? What are the effects of these processes on factors like interpersonal trust, perceptions of power, and participation in state-led programs? The theory of the case is that if everyone’s voice is heard, and their concerns are addressed as well as limited resources allow, the final agreement may not completely satisfy all parties, but it will be an arrangement that is workable across all stakeholders.

Hannah has been conducting fieldwork since September, including observing public meetings, interviewing stakeholders, and diving into archives. She also attended an in-person farm tour in September, where, during lunch, one Upper Basin stakeholder inquired about the feasibility of conducting a photovoice project similar to the one Hannah did for her Master’s thesis with a group of women farmers and gardeners in Pittsburgh, Pennsylvania. Photovoice allows individuals to tell their own stories through provided cameras, with further input gathered through collaborative focus groups. We will talk about this and so much more. Be sure to listen live on Sunday January 23rd at 7PM on 88.7FM or download the podcast if you missed it! Follow along with Hannah’s fieldwork on Instagram at @myrsocdissertation or visit her website.

This post was written by Adrian Gallo and edited by Hannah Whitley

Hannah Whitley completed her undergraduate degrees at Oregon State in 2017. Now a PhD Candidate at Penn State, Hannah will be a Visiting Scholar in the OSU School of Public Policy while she completes her dissertation fieldwork.