Look out the window of a car driving along Oregon's coastal Highway 101 and you will see many roadside signs marking the boundaries of the coastal tsunami zone. Natural phenomena can cause massive amounts of damage to the communities that happen to lie in their paths, and the effects can last days, months, even years beyond the actual event. In the wake of disaster, people are often unable to access necessary resources or are left stranded waiting for help. Community design and engineering don't always take into account the lived needs of the people within a community, so a human-centered approach is needed to complement quantitative disaster projections and more accurately identify the most effective plan for community recovery.
This is where engineering and social science collide. Our guest this week is Amina Meselhe, a second-year PhD student in the School of Civil and Construction Engineering. Amina's current PhD project involves simulating damage and recovery trajectories for the Cascadia Subduction Zone in the event of a tsunami. Her self-described research goal is to "create tangible outcomes for people to hold onto". Her acute awareness of disaster preparedness comes from growing up in Louisiana, where she saw the damage hurricanes can cause and the lasting effects they have on people.
Tune in to KBVR 88.7 FM at 7:00 pm PST on March 8th to hear Amina talk about what she is doing to improve the preparedness of communities along the Oregon coast, the research methods engineers use to incorporate social science into their designs, and why "natural disaster" is a taboo term in the field.
Machines take me by surprise with great frequency. – Alan Turing
This week we have Nima Azbijari, a PhD student in the College of Engineering advised by Dr. Maude David in Microbiology, joining us to discuss how he uses machine learning to better understand biology. Before we dig into the research, let's dig into what exactly machine learning is, and how it differs from artificial intelligence (AI). Both AI and machine learning learn patterns from the data they are fed, but AI is typically developed to be interacted with and to make decisions in real time. If you've ever lost a game of chess to a computer, that was AI playing against you. But don't worry: even the world champion at an even more complex game, Go, was beaten by AI. AI utilizes machine learning, but not all machine learning is AI. Kind of like how a square is a rectangle, but not all rectangles are squares. The goal of machine learning is to improve at tasks using the data it is fed.
So how exactly does a machine, one of the least biological things on this planet, help us understand biology?
Ten years ago it was big news that a computer was able to recognize images of cats, but now photo recognition is quite common. Similarly, Nima uses machine learning with large sets of genomic (genes/DNA), proteomic (proteins), and even gut microbiomic data (symbiotic microbes in the digestive tract) to see if the computer can predict varying patient outcomes. Computational power makes it possible to analyze larger data sets, and the relationships between the different kinds of data, far more quickly. This is great both for understanding the biological world in which we live and for the potential future of patient care.
How exactly do you teach an old machine a new trick?
First, it's important to note that he's using a machine, not magic, and it can be massively time-consuming (even for a computer) to run any kind of analysis on every element of a massive data set: potentially millions of computations, or even more. So to isolate only the data that matters, Nima uses graph neural networks to extract the important pieces. Imagine if you had a data set about your home, and you counted both the number of windows and the number of blinds and found that they were the same. Then you might conclude that you only need to count windows, and that counting blinds doesn't tell you anything new. The same idea works for reducing data to only the components that add meaning.
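Here's a minimal sketch of that windows-and-blinds idea in Python, dropping any column that is almost perfectly correlated with one we already kept. This is purely illustrative; it is not Nima's actual pipeline, and the data and the 0.95 cutoff are made up:

```python
import pandas as pd

# Toy "house" data set: windows and blinds are perfectly correlated,
# so one of the two columns carries no new information.
df = pd.DataFrame({
    "windows": [2, 4, 1, 3],
    "blinds":  [2, 4, 1, 3],
    "doors":   [1, 2, 1, 1],
})

# Keep a feature only if it is not (nearly) perfectly correlated
# with a feature we already kept.
corr = df.corr().abs()
keep = []
for col in corr.columns:
    if all(corr.loc[col, kept] < 0.95 for kept in keep):
        keep.append(col)

print(keep)  # ['windows', 'doors'] -- 'blinds' adds nothing new
```

Graph neural networks do something far more sophisticated, but the spirit is the same: keep only what adds information.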
The phrase 'neural network' can evoke imagery of a massive computer-brain made of wires, but what does this neural network look like, exactly? The 1999 movie The Matrix borrowed its name from a mathematical object that contains columns and rows of data, much like the iconic green columns of data from the movie posters. These matrices are useful for storing and computing data sets since they can be arranged much like an Excel sheet, with columns for each patient and rows for each type of recorded data. Nima (or the computer?) can then work with that matrix to develop the neural network graph. The neural network then determines which data is relevant and can also illustrate connections between the different pieces of data. It's much like how you might be connected to friends, coworkers, and family on a social network, except in this case each profile is a compound or molecule and the connections can be any kind of relationship, such as a common reaction between the pair. However, unlike a social network, no one cares how many degrees from Kevin Bacon they are. The goal here isn't to connect one molecule to another but to identify unknown relationships. Perhaps that makes it more like 23andMe than Facebook.
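To make the social-network analogy concrete, here's a toy "molecular network" sketch. The molecules and relationships are hypothetical examples, and the networkx library is our choice for illustration, not necessarily what Nima uses:

```python
import networkx as nx

# Toy "molecular social network": nodes are compounds, edges are
# known relationships (hypothetical examples only).
g = nx.Graph()
g.add_edge("serotonin", "tryptophan", relation="synthesized_from")
g.add_edge("dopamine", "tyrosine", relation="synthesized_from")
g.add_edge("serotonin", "MAO-A", relation="degraded_by")
g.add_edge("dopamine", "MAO-A", relation="degraded_by")

# A graph model can hint at unknown links: serotonin and dopamine
# share a neighbor (MAO-A) even though no edge connects them directly.
print(list(nx.common_neighbors(g, "serotonin", "dopamine")))  # ['MAO-A']
```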
TLDR
Nima is using machine learning to discover previously unknown relationships between various kinds of human biological data such as genes and the gut microbiome. Now, that’s a machine you don’t need to rage against.
Excited to learn more about machine learning? Us too. Be sure to listen live on Sunday, November 13th at 7PM on 88.7FM, or download the podcast if you missed it. And if you want to stay up to date on Nima's research, you can follow him on Twitter.
Overlap between environmental science and social justice is rare, but it has been around since at least the early 1990s and is becoming more well-known today. The framework of Environmental Justice was popularized by Robert Bullard when his wife, a lawyer, asked him to help her with a case by mapping all the landfills in the state of Texas and cross-referencing the demographics of the people who lived near them. Landfills are not the most pleasant places to live next to, especially if you never had the opportunity to choose otherwise. Bullard found that even though Houston has a 75% white population, every single city-owned landfill was built in predominantly black neighborhoods. The environmental hazards of landfills, their emissions and contaminated effluent, were systematically placed in communities whose residents had been, and continue to be, disenfranchised and lacking political power. Black people were forced to endure a disproportionate burden of the environmental hazards, and procedural justice was lacking in the decision-making process that created these realities. Unfortunately, this situation is not unique to Houston, or to Texas, and the pattern continues today.
Environmental justice is an umbrella term that we cannot fully unpack in a blog post or a single podcast, but it is fundamentally about the injustices of environmental hazards being forced upon disadvantaged communities who had little to no role in creating those hazards. This is not a United States-specific issue, although we do focus on stateside issues in this episode. In fact, some of the most egregious examples occur in smaller and lesser-known countries (see our episode with Michael Johnson, who pursued marine sciences in graduate school because the islands of Micronesia where he grew up are literally being submerged by the rising seas of global warming). The issues we discuss are multifaceted and can seem impossible to fix. But before we can fix them, we need to really understand the socio-political-economic ecosystem that has placed us exactly where we are today.
To begin to discuss all of this, we have Chris Hughbanks, a graduate student at Oregon State, one of the Vice Presidents of the local Linn-Benton NAACP branch, and a member of its Environmental and Climate Justice committee (disclaimer: Adrian is also a branch member and part of the committee). We begin the discussion with a flood in Chris' hometown of Detroit. Chris describes how the city never really had floods: when precipitation occurs, it's usually either not that much rain or cold enough to snow instead. Because it hardly rains that much, very few people have flood insurance. But that pesky climate change is making temperatures warmer and precipitation events more intense than ever before, causing flooding in 2014, 2016, 2019, and 2020. As you might guess, the effects of this natural disaster were not equally shared by all citizens of Detroit. We discuss the overlap between housing discrimination and flood areas, and how the recovery effort left so many out to [not] dry.
We end the episode with ways to get involved at the local level. First, consider learning more about the Linn-Benton NAACP branch and the initiatives they focus on to empower local communities. Vote, vote, vote, and vote. Make sure you're registered, and that everyone else you know is registered to vote. And recognize that these problems are generations in the making, and it will take just as long to fully rectify them. Finally, I am reminded of an episode interviewing millennial writers about what it means to be born when global warming was a niche research topic, but to come of age when climate change has become a global catastrophe. They rightly point out that there is a myriad of possibilities for human salvation and sacrifice for every tenth of a degree between the 1.5 and 3.0°C of warming predicted by the IPCC's most recent Sixth Assessment Report. As grim as our future seems, what an awesome task for our generations to embark upon: to try to "create a polity and economy that actually treats everybody with dignity, I cannot think of a more meaningful way to spend a human life."
If you missed the show, you can listen to this episode on the podcast feed!
Additional Reading & Podcast Notes
The Detroit Flood – We mentioned the NPR article reporting that 40% of people living in Detroit experienced flooding, that black neighborhoods were at higher risk of flooding, and that renters (who are disproportionately black) were nearly twice as likely to experience flooding as those who owned their homes. We also mentioned a map of Detroit showing which areas are more at risk of flooding. Another local article described how abnormal that summer in Detroit and the surrounding areas was compared to other years.
We listed a number of Environmental Justice links that include:
Dumping in Dixie, the 1990 book by Robert Bullard that is considered essential reading for many law school courses on environmental justice.
The organizing principles of the modern environmental justice movement, first codified in 1991 at the First National People of Color Environmental Leadership Summit.
A story near Los Angeles, where mixed-use city zoning laws allowed industrial businesses to operate near residential areas, causing soil lead pollution that was unknown until Yvette Cabrera wrote her own grant to study the issue. Read her story in Grist, Ghost of Polluter's Past, which describes the immense efforts she and researchers went through to map soil lead contamination, and how the community has used that information to generate positive change.
Environmental [in]justice afflicts the global south as well, where a majority of forest loss since the 1960s has occurred in the tropical regions of the world.
Adrian mentioned a number of podcasts for further listening:
Two past Inspiration Dissemination episodes: Holly Horan on maternal-infant stress in Puerto Rico and her experience conducting research after Hurricane Maria, and Michael Johnson, whose motivation to go to graduate school came in part from the fact that where he grew up – Micronesia – has been feeling the rising seas of climate change long before other countries.
A deep investigative journalism podcast called Floodlines about the events leading up to Hurricane Katrina in 2005 and what happened after (or, what should have happened).
If all this hurricane and flooding talk has got you down, consider that heat kills more people in the US than floods, hurricanes, or tornadoes according to the National Weather Service.
We also discussed the 2021 heat dome in the Pacific Northwest, which led to Oregon passing some of the strongest heat protections for farmworkers (and others working outside). Consider reading a summary of wildfire effects on outdoor workers, and a new proposal in Oregon to pay farmworkers overtime (this proposal passed in March 2022). Related to farmworkers, Adrian mentioned the Southern Poverty Law Center's 2013 analysis of guest visa worker programs titled Close to Slavery: Guestworker Programs in the United States.
We returned to the fact that housing has been central to so many injustices for generations. The Color of Law: A Forgotten History of How Our Government Segregated America by Richard Rothstein is a historical analysis of the laws and policies that shaped today's housing patterns. One example Rothstein often cites is the construction of freeways purposefully routed through black communities; recently one developer accidentally said the quiet part out loud in explaining that a gas pipeline was routed along "the path of least resistance". We also mentioned that in both 2019 and 2020, about 37% of Corvallis residents were rent burdened (meaning their households spend more than 50% of their income on rent), the worst of any city in the state in both years. You can also read about a California Delta assessment that focuses on agricultural shifts in the region due to land erosion and flooding, and that mentions how current flood risk is tied to historical redlining.
Our climate in the next thirty years will not look the same as today's, and that's exactly why our energy systems will also soon look completely different. Energy systems are the big umbrella of how and where we create electricity, how we transport that electricity, and how we use it. We're discussing the past and the future of our energy environment with Emily Richardson, a Master of Engineering student in the Energy Systems Program.
Emily Richardson preparing for some good trouble
When our energy infrastructure was originally built, energy generation, transport, and usage formed a one-way street. Utility companies made or acquired the electricity and built the poles and wires to transport it to homes and businesses. Although that infrastructure was only made to last 50 years, much of it is pushing 100 years of operation.
“If it ain’t broke, don’t fix it,” some might say, but we’re not living in the same energy reality as when the infrastructure was originally built. For in-depth visuals of our energy generation and usage, we recommend viewing the Lawrence Livermore National Laboratory energy flow charts. Now we have a different energy portfolio (e.g., wind and solar), and a two-way street of electricity movement is required. Rooftop solar helps power individual homes, but when little to no energy is being used in-house and it’s sunny outside, the excess generation on your rooftop moves back upstream and can fulfill energy needs in other places. A two-way street is quickly being paved. It’s worth remembering that electricity is generated on demand, meaning we make exactly as much as is being used at any moment. Excess generation in a highly distributed form (i.e., home solar panels) adds another level of complexity to our energy systems because there is no “overflow” valve for electricity.
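A tiny net-load calculation shows how that two-way street works in practice. The numbers below are invented for illustration, not from the episode:

```python
# Toy net-load calculation for one home (illustrative numbers only).
# Positive net load = the home draws from the grid;
# negative = it pushes excess rooftop-solar generation back upstream.
hours    = ["6am", "noon", "6pm"]
load_kw  = [0.8, 0.5, 2.0]   # household consumption
solar_kw = [0.0, 3.0, 0.2]   # rooftop generation

for hour, load, solar in zip(hours, load_kw, solar_kw):
    net = load - solar
    flow = "importing from grid" if net > 0 else "exporting to grid"
    print(f"{hour}: net {net:+.1f} kW ({flow})")
```

At noon the home exports 2.5 kW upstream, and the grid has to absorb it somewhere, which is exactly the complexity described above.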
Imagine if your toilet, which slowly moves water in one direction, were suddenly expected to move water in the other direction, back and forth, as quickly as the speed of light. Yikes indeed. City-wide plumbing infrastructure was built to accommodate the most extreme events, like the Super Bowl flush (when everyone in the city/state/country runs to the bathroom at halftime). While it's an extreme circumstance, the infrastructure was built to prepare for it, and it works! But our energy systems were hardly made for this kind of reverse movement of energy, especially on a large scale as more people install rooftop solar.
Beyond the two-way street, there's also rush hour to worry about. The UK is known for its tea; at a specific time after a popular TV show ends, about one million kettles get turned on simultaneously. Without planning and foresight this would lead to an electricity shortage and people losing power. But the UK imports 200-600 megawatts of power, sometimes from a hydroelectric dam and/or nuclear energy, to accommodate the hot tea requirements. It's surprisingly complicated to move this much power all at once, but with strategic planning there are solutions!
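Some back-of-the-envelope arithmetic shows why this matters (the kettle wattage and stagger fractions below are assumed typical values, not figures from the episode):

```python
# Back-of-the-envelope "TV pickup" arithmetic (assumed values).
kettle_kw = 3.0        # a typical electric kettle draws about 3 kW
kettles   = 1_000_000  # kettles switched on after the show

# If every kettle ran at the exact same instant:
worst_case_mw = kettle_kw * kettles / 1000
print(worst_case_mw)  # 3000.0 MW -- a 3 GW surge

# In practice the switch-ons are staggered over several minutes,
# so the observed surge lands closer to the 200-600 MW range above.
print(0.1 * worst_case_mw, 0.2 * worst_case_mw)  # 300.0 600.0
```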
Everything in the energy world is physically connected. Even if the poles and wires and outlets are hidden behind walls, there's an immense amount of planning and design you will never see, because when infrastructure is working well, you can easily forget it exists. When it fails, it can fail catastrophically. The 2020 Holiday Farm Fire in Oregon was initiated by downed powerlines, and the 2018 Camp Fire that destroyed Paradise, California was also initiated by malfunctioning powerlines. There are a multitude of reasons why those fires were especially damaging (location of ignition, exceptionally dry fuels, extreme wind events, drought- and insect-stressed trees, too many trees per acre, etc.), and why wildfires will get worse in the future (rising temperatures and changing precipitation patterns).
But our collective future requires energy, a lot of it, to be efficiently distributed and stored, which requires a radical shift in our hardware, software, and maybe even our philosophy of energy usage. You don't want to miss the discussion with Emily, who gives us a deep dive into how we arrived at our energy reality and what our energy future will need to look like. This conversation is happening at 7pm on KBVR 88.7 FM, but you can also listen via the podcast feed.
Emily Richardson preparing for some adventures on the kayak
Additional Notes
On air we mentioned a few resources that can provide deeper dives! The first is the Energy Gang Podcast, which focuses on energy, clean technology, and the environment. The Big Switch Podcast is a five-part series on how the power grid works and how upcoming changes to the grid can help society. The Volts Podcast is an interview-based show untangling our messy climate future and hopeful energy transitions. Emily mentioned a presentation titled Imagining a Zero Emissions Energy System.
When you think about artificial intelligence or robots in the everyday household, your first thought might be that it sounds like science fiction – like something out of the 1999 cult classic film "Smart House". But it's likely you have some of this technology in your home already – if you own a Google Home, Amazon Alexa, Roomba, smart watch, or even just a smartphone, you're already plugged into this network of AI in the home. This technology can provide great benefits to its users, from simply asking Google to set an alarm to wake you up the next day, to wearable smart devices that collect health data such as heart rate. AI is also currently being used to improve assistive technology, or technology that is used to improve the lives of disabled or elderly individuals. However, the rapid explosion in development and popularity of this tech also brings risks to consumers: there isn't great legislation yet about the privacy of, say, healthcare data collected by such devices. Further, as we discussed with another guest a few weeks ago, there is the issue of coding ethics into AI – how can we as humans program robots in such a way that they learn to operate in an ethical manner? Who defines what that is? And on the human side – how do we ensure that human users of such technology can actually trust it, especially if it will be used in a way that could benefit the user's health and wellness?
Anna Nickelson, a fourth-year PhD student in Kagan Tumer's lab in the Collaborative Robotics and Intelligent Systems (CoRIS) Institute in the Department of Mechanical, Industrial and Manufacturing Engineering, joins us this week to discuss her research, which touches on several of these aspects of using technology as part of healthcare. Also a former Brookings Institution intern, Anna incorporates not just the coding of robots but far-reaching policy and legislation goals into her work. Her research is driven by a very high-level goal: how do we create AI that benefits humans and humanity?
Anna Nickelson, fourth year PhD student in the Collaborative Robotics and Intelligent Systems Institute.
AI for social good
When we think about how to create technology that is beneficial, Anna says that there are four major considerations in play. First is the creation of the technology itself – the hardware, the software; how technology is coded, how it’s built. The second is technologists and the technology industry – how do we think about and create technologies beyond the capitalist mindset of what will make the most money? Third is considering the general public’s role: what is the best way to educate people about things like privacy, the limitations and benefits of AI, and how to protect themselves from harm? Finally, she says we must also consider policy and legislation surrounding beneficial tech at all levels, from local ordinances to international guidelines.
Anna's current research with Dr. Tumer is funded by the NSF AI Institute for Collaborative Assistance and Responsive Interaction for Networked Groups (AI-CARING), an institute through the National Science Foundation that focuses on "personalized, longitudinal, collaborative AI, enabling the development of AI systems that learn personalized models of user behavior…and integrate that knowledge to support people and AIs working together", as per their website. The institute is a collaboration between five universities, including Oregon State University and OHSU. What this looks like for Anna is lots of code writing and simulations studying how AI systems make trade-offs between different objectives. For this she looks at machine learning for decision making in robots, and how multiple robots or AIs can work together towards a specific task without necessarily having to communicate with each other directly. Each robot or AI may have different considerations that factor into how it accomplishes its objective, so part of her goal is to develop a framework for the different individuals to make decisions as part of a group.
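As a loose illustration of what trading off between objectives can look like in code, here is a toy example that scores candidate actions on two competing objectives and picks a compromise with a weighted sum. This is our simplification with made-up numbers, not Anna's actual framework:

```python
# Toy multi-objective trade-off (illustrative only).
# Each candidate action is scored on two objectives, and a
# weighted sum picks the compromise between them.
actions = {
    "fast_route":  {"speed": 0.9, "safety": 0.4},
    "safe_route":  {"speed": 0.5, "safety": 0.9},
    "mixed_route": {"speed": 0.7, "safety": 0.7},
}

weights = {"speed": 0.4, "safety": 0.6}  # assumed priorities

def score(objectives):
    return sum(weights[k] * v for k, v in objectives.items())

best = max(actions, key=lambda a: score(actions[a]))
print(best)  # 'safe_route' under these weights
```

Change the weights and a different action wins, which is the crux of the problem: someone, or some framework, has to decide how the objectives are balanced.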
With an undergraduate degree in math, a background in project management in the tech industry, engineering and coding skills, and experience working with a think tank in DC on tech-related policy, Anna is uniquely situated to address the major questions about developing technology for social good in a way that mitigates risk. She came to graduate school at Oregon State with this interdisciplinary goal in mind. Her personal life goal is to get experience in each sector so she can bring in a wide range of perspectives and ideas. "There are quite a few people working on tech policy right now, but very few people have the breadth of perspective on it from the low level to the high level," she says.
If you are interested in hearing more about Anna’s life goals and the intersection of artificial intelligence, healthcare, and policy, join us live at 7 PM on Sunday, May 7th on https://kbvrfm.orangemedianetwork.com/, or after the show wherever you find your podcasts.
Basic biology and computer science are probably not an intuitive pairing of scientific disciplines. Not as intuitive as, say, biology and chemistry (often referred to as biochem). However, for Joseph Valencia, a third-year PhD student at OSU, the bridge between these two disciplines is a view of life at the molecular scale as a computational process in which cells store, transmit, and interpret the information necessary for survival.
Think back to your 9th or 10th grade biology class and you will (probably? maybe?) vaguely remember learning about DNA, RNA, proteins, ribosomes, and much more. In case your memory is a little foggy, here is a short (and very simplified) recap of the basic biology. DNA is the information storage component of cells. RNA, the focus of Joseph's research, is the messenger that carries information from DNA to control the synthesis of proteins. This process is called translation, and ribosomes are required to carry it out. Ribosomes are complex molecular machines, and many of them can be found in each of our cells. Their job is to interpret the RNA: they attach themselves to it, read the transcript of information it contains, and produce a protein. The protein folds into a specific 3D shape, and that shape determines its function. What do proteins do? Basically control everything in our bodies! Proteins include the enzymes that control everything from muscle repair to eye twitching. The amazing thing about this process is that it is not specific to humans; it is a fundamental part of basic biology that occurs in essentially every living thing!
An open reading frame (ORF) is a stretch of nucleotides beginning with a start codon and ending with a stop codon. Ribosomes bind to RNA transcripts and translate certain ORFs into proteins. The Kozak sequence (bottom right, from Wikipedia) depicts the nucleotides that commonly occur around the start codons of translated ORFs.
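To make the figure's idea concrete, here is a toy ORF scan in Python: find the start codon, read three letters at a time, and translate until a stop codon appears. It is a drastically simplified sketch with a tiny hand-picked codon table, not Joseph's model:

```python
# Toy open-reading-frame scan and translation (simplified sketch).
CODONS = {  # tiny subset of the real 64-codon table
    "AUG": "M", "GCU": "A", "UUU": "F", "GGA": "G",
    "UAA": "*", "UAG": "*", "UGA": "*",  # stop codons
}

def find_orf(rna):
    start = rna.find("AUG")   # first start codon
    if start == -1:
        return ""             # no start codon, no protein
    protein = []
    for i in range(start, len(rna) - 2, 3):
        amino = CODONS.get(rna[i:i+3], "?")
        if amino == "*":      # stop codon: translation ends
            return "".join(protein)
        protein.append(amino)
    return "".join(protein)

print(find_orf("GGAUGGCUUUUUAAGG"))  # 'MAF'
```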
So now that you are refreshed on your high school biology, let us tie all of these 'basics' to what Joseph does for his research. Joseph's research focuses on RNA, which can be broken down into two main groups: messenger RNA (mRNA) and non-coding RNA. mRNA is translated into a protein by a ribosome, whereas long non-coding RNA is not. While we are able to distinguish between the two types of RNA, we do not fully understand how a ribosome decides to turn one RNA (the mRNA) into a protein and not another (the long non-coding RNA). That's where Joseph and computer science come in: Joseph is building a machine learning model to better understand this ribosomal decision-making process.
Machine learning, a field within artificial intelligence, can be defined as any approach that creates an algorithm or model by using data rather than programmer-specified rules. Lots of data. Modern machine learning models tend to keep learning and improving as more data is fed to them. While there are many different types of machine-learning approaches, Joseph is interested in one called natural language processing. You are probably familiar with an example of natural language processing at work: Google Translate! The model that Joseph is building is in fact not too dissimilar from Google Translate, or at least the idea behind it; except that instead of taking English and translating it into Spanish, Joseph's model takes RNA and translates (or doesn't translate) it into a protein. In Joseph's own words, "We're going through this whole rigamarole [aka his PhD] to understand how the ins [RNA & ribosomes] create the outs [proteins]."
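One common way to treat RNA like language is to split it into codon-sized "words" before feeding it to a model. This is a generic sketch of that tokenization idea, not necessarily Joseph's exact approach:

```python
# RNA as a "sentence": split a transcript into codon-sized tokens,
# the way a language model splits text into words.
def tokenize(rna, k=3):
    return [rna[i:i+k] for i in range(0, len(rna) - k + 1, k)]

tokens = tokenize("AUGGCUUUUUAA")
print(tokens)  # ['AUG', 'GCU', 'UUU', 'UAA'] -- a four-word 'sentence'

# A model then maps each token to an integer ID, just as a translation
# model maps words to IDs before translating between languages.
vocab = {tok: i for i, tok in enumerate(sorted(set(tokens)))}
print([vocab[t] for t in tokens])  # [0, 1, 3, 2]
```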
A high-level diagram of Joseph’s deep learning model architecture.
But it is not as easy as it sounds. There are a lot of complexities to the work: the very complexity that gives these models their power also makes it hard to interpret why a model is doing what it is doing. Even a highly performing machine learning model may not capture the exact biological rules that govern translation, but successfully interpreting its learned patterns can help in formulating testable hypotheses about this fundamental life process.
To hear more about how Joseph is building this model, how it is going, and what brought him to OSU, listen to the podcast episode! Also, you can check out Joseph’s personal website to learn more about him & his work!
Our guest this week, Dr. Ari Foley, is a recent (July 2021) OSU graduate from the School of Nuclear Science and Engineering. For her PhD research, she developed a rapid imaging method for post-detonation nuclear forensics. While methods to do this work already exist, many of them are time- and material-intensive. The goal of Ari's work was therefore to develop a method that could inform optimized destructive analysis of samples after a detonation event of a nuclear weapon, with a particular focus on reducing the amount of imaging time required. Not only was Ari able to accomplish this task, but the system she developed can take an image of the spatial distribution of radiation emitted from an object in the same exposure as a traditional photograph of the object being analyzed (see image below). How in the world did Ari do this? Read below for a short synopsis, or even better, listen to the episode here!
A core component of Ari's system was an electron-multiplying charge-coupled device, also known as an EMCCD. The CCD part is essentially a normal camera, but the EM part magnifies the signal collected from whatever the camera is pointed at. Ari rigged an inorganic scintillation crystal to the EMCCD; it sits in a 3D-printed holder just in front of the camera. When the crystal is held in close proximity to radioactive fallout material from a detonation, the radiation interacts with the crystal, which leads to the emission of light. This light is proportional to the amount of energy imparted within the crystal. The EM part of the EMCCD kicks in as the image is taken, allowing a high-intensity image to be made that magnifies the light emitted from the crystal interacting with the radiation. This process needs to occur in a light-tight box; however, the system is mobile, meaning it can easily be taken into the field and used directly at a nuclear detonation site to measure the intensity of radiation from fallout material.
Ari spent the last three years of her PhD in Idaho at the Idaho National Laboratory (INL), which is one of the leading nuclear research labs in the USA and has close ties with OSU. In fact, Ari was one of two students in the inaugural class of INL Graduate Fellows, which enabled her to conduct this work while working full-time at the lab. Ari's career might have gone down a very different path, though, because she had always wanted to be an arts student or pursue a career in human rights. But during a summer school experience in her high school years, Ari attended a class on Indigenous Peoples and the United Nations. During this class, the students took a trip to the United Nations General Assembly Building in New York, which hosts a statue from Nagasaki. The statue is of a woman holding a lamb, which from the front looks completely normal. However, when you walk around to the back, the statue is completely charred and scarred – a consequence of the atomic bomb. The same class presented case studies of radiation contamination on tribal reservations in the USA. Seeing and learning these things really riled Ari up at the time: while she had been interested in radiation in chemistry class, she was suddenly confronted by the fact that radiation contamination was an actual, ongoing world issue.
Listen to the podcast episode here to learn more about the nitty-gritty of how Ari developed her nuclear forensic system, how she avoided radiation exposure in the lab, and her road to OSU!
This week we have a PhD candidate from the materials science program, Jaskaran Saini, joining us to discuss his work on the development of novel metallic glasses. But first, what exactly is a metallic glass, you may ask? Metallic glasses are metals or alloys with an amorphous structure: they lack the crystal lattices and crystal defects commonly found in standard crystalline metals. Forming a metallic glass requires extremely high cooling rates. Well, how high? A thousand to a million kelvin per second! That high.
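For a sense of scale, a quick back-of-the-envelope calculation (with an assumed, order-of-magnitude melting temperature, not a figure from the episode) shows just how fast that quench has to be:

```python
# How long does a million-kelvin-per-second quench actually take?
melt_temp_k  = 1300.0  # assumed, plausible alloy melting temperature
room_temp_k  = 300.0
cooling_rate = 1e6     # kelvin per second

quench_time = (melt_temp_k - room_temp_k) / cooling_rate
print(quench_time)  # 0.001 s -- the melt must freeze in about a millisecond
```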
The idea here is that the speed of cooling impacts the atomic structure – and this idea is not new or limited to just metals! For example, the rocks granite, basalt, pumice, and obsidian all have a similar composition but different cooling times. Rapid cooling even gives obsidian an amorphous structure, which means we could probably just start referring to it as rocky glass. But the uses of metallic glass extend far beyond those of rocks.
(Left) Melting the raw materials inside the arc-melter to make the alloy. The bright light visible in the image is the plasma arc, which reaches up to 3500°C. The ring that the arc is focusing on is the molten alloy. (Right) Metallic glass sample as it comes out of the arc-melter; the arc-melter can be seen in the background.
Close-ups of metallic glass buttons.
Why should we care about metallic glass?
Metallic glasses are fundamentally cool, but in case that isn't enough to pique your attention, they also have superpowers that'd make Magneto drool. They have 2-3x the strength of steel, are incredibly elastic, have very high corrosion and wear resistance, and have a mirror-like surface finish. So how can we apply these super metals to science? Well, NASA is already on it and is beginning to use metallic glasses as gear material for motors. While the Curiosity rover expends 30% of its energy and 3 hours heating and lubricating its steel gears to operate, Curiosity Jr. won't have to worry about that with metallic glass gears. NASA isn't the only one hopping onto the metallic glass train. Apple is trying to use these scratch-proof materials in iPhones, the US Army is using high-density hafnium-based metallic glasses for armor-penetrating military applications, and some professional tennis and golf players have even used these materials in their rackets and golf clubs. But it took a long time to get metallic glasses to the point where they're now being used in rovers and tennis rackets.
Metallic glass: a history
Metallic glasses first appeared in the 1960s when Jaskaran's academic great-grandfather (that is, his advisor's advisor's advisor), Pol Duwez, made them at Caltech. To achieve this special amorphous structure, a droplet of a gold-silicon alloy was cooled at a rate of over a million kelvin per second; the end result was a foil of metallic glass roughly the size of a quarter and thinner than a strand of hair. Fast forward to the '80s, and researchers began producing larger metallic glasses. By the late '90s and early 2000s, the thickness of the biggest metallic glass produced had already exceeded 1000x the original foil thickness. However, with great size comes greater difficulty! If the metallic glass is too thick, it can't cool fast enough to achieve an amorphous structure. Creating larger pieces of metallic glass has proven extremely challenging – and is therefore a great goal for graduate students and PIs interested in taking on this challenge.
Currently, the largest pieces of metallic glass are around 80 mm thick; however, they are based on precious metals such as palladium, silver, gold, and platinum, along with beryllium. This makes them impractical for multiple reasons. First is the obvious cost. Second, given the detrimental impact of mining rare-earth and precious metals, efforts to minimize dependence on them can have a great positive impact on the environment.
World records you probably didn’t know existed until now
As part of Prof. Donghua Xu's lab, Jaskaran is working on developing large-sized metallic glasses from cheaper metals such as copper, nickel, aluminum, zirconium, and hafnium. It's worth noting that although Jaskaran's metallic glasses typically consist of at least three metal elements, his research is mainly focused on producing metallic glasses based on copper and hafnium (these two metals make up the majority). Not only has Jaskaran been wildly successful in creating glassy alloys from these elements, but he has also set TWO WORLD RECORDS. The previous world record for a copper-based metallic glass was 25 mm, which he surpassed with the creation of a 28.5 mm metallic glass. As for hafnium, the previous world record was 10 mm, which Jaskaran almost doubled with a casting diameter of 18 mm. And mind you, these alloys do not contain any rare-earth or precious metals, so they are cost-effective, have incredible properties, and are completely benign to the environment!
The biggest copper-based metallic glass ever produced (world record sample).
Excited for more metallic glass content? Us too. Be sure to listen live on Sunday, February 6th at 7PM on 88.7FM, or download the podcast if you missed it. Want to stay up to date with the world of metallic glass? Follow Jaskaran on Twitter, Instagram, or Google Scholar. We also learned that he produces his own music, and we listened to "Sephora." You can find him on SoundCloud under his artist name, JSKRN.
Jaskaran Saini: PhD candidate from the materials science program at Oregon State University.
This post was written by Bryan Lynn and edited by Adrian Gallo and Jaskaran Saini.
This week we have on the show Dr. Bo Wu. He recently graduated from Oregon State University with a Ph.D. from the Electrical Engineering department, where he developed new sensors to monitor three molecules that are correlated with our stress, mood, and happiness: cortisol, serotonin, and dopamine. Even though so many of our bodily functions rely on these molecules, there are currently no commercial or rapid techniques for monitoring them. Since the majority of innovations in university settings never get beyond the walls of the Ivory Tower, Bo wanted to design sensors with functionality and scalability in mind. Those basic principles are why Bo was attracted to the lab of Dr. Larry Cheng, where innovations must be designed to be brought to market rather than sit on university shelves. Using nanofabrication technology, Bo developed sensors about the size of a thumbnail that provide rapid and accurate measurements to be used outside the hospital setting. The promise of having these mini-molecules measured as a point-of-care diagnostic (i.e., measured by the patient) is an exciting advancement in the medical field.
This innovation is not the only one coming from Bo; with the help of a colleague, he designed a product for researchers to easily reformat academic research papers for submission to other journals. If you didn't know, submitting manuscripts to different journals takes an immense amount of time because of the formatting changes required. These changes are tedious and can take a week or longer, time that could be spent on crucial research experiments. While this service was originally designed for engineering publications, the COVID-19 pandemic showed them there was a greater and more immediate need. With so many people losing their jobs, they redesigned the software to help people create and reimagine their resumes for job applications. Their website, WiseDoc.net, is now geared toward helping job seekers build stronger resumes, but Bo and his team expect to return to the original idea of reformatting papers for academic publication, expanding beyond just engineering journals. Thanks to Oregon State's Advantage Accelerator Program, Bo and his co-founder were able to refine their product and acquire seed money to get the website off the ground; it now employs a small international team to maintain and improve its services. If you have questions for Bo about starting your own business, being an international student, or the Advantage Accelerator program, you can contact him by email at wubo[at]oregonstate[dot]edu.
Did you miss the show on Sunday? You can listen to Bo's episode on Apple Podcasts!
Hospitals can provide a wide variety of lab tests to better understand our ailments. But have you ever wondered what happens to the sample after it's in your doctor's test tube but before you get results? The answer is usually complicated and slow lab work, requiring lots of individual little steps to isolate and measure some specific molecule in your body (think of PCR-based COVID-19 tests). But not all tests require lab work.
You're probably familiar with some paper-based diagnostic tools, like checking the chlorine or pH level of your swimming pool. These are "dipsticks" of special paper suitable for large-volume samples. But what if you only have a couple drops to spare? A diabetic, for example, monitors their blood glucose with only a few drops of blood on special paper, which is then added to a measuring device. But you still need that small electronic device to know your blood glucose levels! This device requirement makes testing and diagnosis less accessible to people around the world. What if you could make a paper-based diagnostic tool that works with tiny volumes but doesn't need any other equipment, or fancy software, or a trip to the hospital to get your answer? This is exactly why researchers are excited about paper-based microfluidic devices.
Pregnancy tests are one of the best examples (see Figure 2.4) of how researchers have automated a complex laboratory test onto a single device someone can purchase from any local pharmacy, at a relatively low cost, to get an answer within minutes, inside their own home. These tests actually measure a specific hormone, but the result is presented as a color indicator. Inside the device is porous media, to help move the sample, and a few different reagents in a specific order that generate the chemical reactions, so you can see your test result as an easy-to-interpret color. No extra fancy machines, no hospital visit, rapid results, and relatively affordable disposable devices make pregnancy tests a success story. But this was commercialized in 1988, and urine samples are generally thought of as larger-volume samples. There are still many more potential uses of paper-based diagnostic tools, using small-volume blood samples, yet to be developed.
This evening we have Lael Wentland, a PhD candidate in the College of Engineering, who is discussing her ongoing research on developing paper-based microfluidic tests for rare diseases. A central pillar of her work is to make healthcare more sustainable and accessible for a greater number of people, especially those in more remote settings. The World Health Organization has ASSURED criteria for the development of more paper-based diagnostics to help guide researchers. The ASSURED criteria require that a device be Affordable, Sensitive, Specific, User-friendly, Rapid and robust, Equipment-free, and Deliverable to end users.
Using this framework, Lael has already developed one tool to monitor a metabolic disorder and continues to work on another rare biomolecule. She started her research at OSU on phenylketonuria, a metabolic disorder where your body cannot break down a key amino acid (phenylalanine) found in foods. If you get too little of this amino acid, your body can't make all the proteins it needs for growth, repair, or maintenance. Too much of it can cause seizures and developmental delays. Keeping close tabs on phenylalanine is necessary for people with this disorder because you can alter your diet to suit your body and remain healthy. But the current tests to monitor this amino acid are not as readily available as one may need. That is why Lael worked to make a paper-based microfluidic device adhering to the ASSURED criteria, making monitoring more accessible for anyone. Lael was well past the proof-of-concept stage of her device, and was already recruiting subjects to test their blood using it, when COVID-19 became prominent in March 2020. That's one reason she pivoted to monitoring another rare disorder using similar principles.
We’ll get into that, and so much more, Sunday 7pm on 88.7FM KBVR.
Did you miss the show Sunday night? You can listen to Lael’s episode on Apple Podcasts!