
Lean, Mean, Bioinformatics Machine

Machines take me by surprise with great frequency. – Alan Turing

This week we have Nima Azbijari, a PhD student from the College of Engineering advised by Dr. Maude David in Microbiology, joining us to discuss how he uses machine learning to better understand biology. Before we dig into the research, let’s dig into what exactly machine learning is, and how it differs from artificial intelligence (AI). Both learn patterns from the data they are fed, but AI is typically built to be interacted with and to make decisions in real time. If you’ve ever lost a game of chess to a computer, that was AI playing against you. But don’t worry: the world champion of an even more complex game, Go, was also beaten by AI. AI utilizes machine learning, but not all machine learning is AI. Kind of like how a square is a rectangle, but not all rectangles are squares. The goal of machine learning is simply to get better at a task using the data it is fed.

So how exactly does a machine, one of the least biological things on this planet, help us understand biology? 

Ten years ago it was big news that a computer was able to recognize images of cats, but now photo recognition is quite common. Similarly, Nima uses machine learning with large sets of genomic (genes/DNA), proteomic (proteins), and even gut microbiomic data (symbiotic microbes in the digestive tract) to see whether the computer can predict patient outcomes. Computational power lets larger data sets, and the relationships between the different kinds of data, be analyzed much more quickly. This is great both for understanding the biological world in which we live, and for the potential future of patient care.

How exactly do you teach an old machine a new trick?

First, it’s important to note that he’s using a machine, not magic, and it can be massively time consuming (even for a computer) to run any kind of analysis on every element of a massive data set: potentially millions of computations, or even more. So to isolate only the data that matters, Nima uses graph neural networks to extract the important pieces. Imagine you had a data set about your home, and you counted both the number of windows and the number of blinds and found that they were the same. You might conclude that you only need to count windows, and that counting blinds doesn’t tell you anything new. The same idea applies to reducing a data set to only the components that add meaning.
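To make the windows-and-blinds idea concrete, here is a minimal sketch of that redundancy check (not Nima’s actual pipeline, which uses graph neural networks; the toy columns and the 0.99 threshold are invented for illustration):

```python
import pandas as pd

# Toy data set about a home: "blinds" always equals "windows",
# so one of the two columns carries no new information.
home = pd.DataFrame({
    "windows": [4, 6, 8, 10],
    "blinds":  [4, 6, 8, 10],
    "rooms":   [2, 3, 3, 5],
})

# Find pairs of columns whose correlation is (near) perfect
# and keep only one column from each redundant pair.
corr = home.corr().abs()
redundant = set()
cols = list(home.columns)
for i, a in enumerate(cols):
    for b in cols[i + 1:]:
        if corr.loc[a, b] > 0.99 and b not in redundant:
            redundant.add(b)

reduced = home.drop(columns=sorted(redundant))
print(reduced.columns.tolist())  # ['windows', 'rooms']
```

Counting blinds gets dropped; counting windows stays.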

The phrase ‘neural network’ can evoke imagery of a massive computer-brain made of wires, but what does this neural network look like, exactly? The 1999 movie The Matrix borrowed its name from a mathematical object that contains columns and rows of data, much like the iconic green columns of data from the movie posters. These matrices are useful for storing and computing data sets since they can be arranged much like an Excel sheet, with a column for each patient and a row for each type of recorded data. Nima (or the computer?) can then work with that matrix to build the neural network graph. The neural network determines which data is relevant and can also illustrate connections between the different pieces of data. It works much like how you might be connected to friends, coworkers, and family on a social network, except in this case each profile is a compound or molecule, and the connections can be any kind of relationship, such as a common reaction between the pair. However, unlike a social network, no one cares how many degrees from Kevin Bacon they are. The goal here isn’t to connect one molecule to another but to identify unknown relationships. Perhaps that makes it more like 23andMe than Facebook.
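As a toy illustration of those two structures (the feature names, values, and relationships below are invented, not Nima’s actual data), here is how patient measurements can live in a matrix while molecular relationships live in a graph:

```python
import numpy as np

# A matrix: one column per patient, one row per measured feature,
# just like columns and rows in a spreadsheet.
#                  patient_1  patient_2  patient_3
features = np.array([
    [0.2, 0.9, 0.4],   # expression of gene A
    [1.3, 0.1, 0.8],   # abundance of protein B
    [0.0, 0.5, 0.7],   # abundance of gut microbe C
])

# A graph: each node is a compound or molecule, each edge is a
# known relationship (here, hypothetical shared reactions).
edges = {
    ("gene_A", "protein_B"): "A encodes B",
    ("protein_B", "microbe_C"): "B is metabolized by C",
}

# A graph neural network learns from both pieces: the matrix says
# what was measured, and the edges say how the entities relate.
print(features.shape)  # (3, 3): 3 features x 3 patients
for (a, b), relation in edges.items():
    print(f"{a} -- {b}: {relation}")
```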

TLDR

Nima is using machine learning to discover previously unknown relationships between various kinds of human biological data such as genes and the gut microbiome. Now, that’s a machine you don’t need to rage against.

Excited to learn more about machine learning?
Us too. Be sure to listen live on Sunday, November 13th at 7PM on 88.7FM, or download the podcast if you missed it. And if you want to stay up to date on Nima’s research, you can follow him on Twitter.

I, Roboethicist

This week we have Colin Shea-Blymyer, a PhD student from OSU’s new AI program in the departments of Electrical Engineering and Computer Science, joining us to talk about coding computer ethics. Advancements in artificial intelligence (AI) are exploding, and while many of us are excited for a world where our Roombas evolve into Rosies (à la The Jetsons), some of these technological advancements require grappling with ethical dilemmas. How these AI technologies should make their decisions is a question with no settled answer, one best left to be debated by the spirits of John Stuart Mill and Immanuel Kant. However, as a society, we are in dire need of a way to communicate ethics in a language that machines can understand – and this is exactly what Colin is developing.

Making An Impact: why coding computer ethics matters

A lot of AI is developed through machine learning – a process by which software gets better at a task without being explicitly programmed for it. One example is image recognition software: fed more and more photos of cats, an algorithm gets better at recognizing what is and isn’t a cat. However, these algorithms are not perfect. How will the program treat a stuffed animal of a cat? How will it categorize the image of a cat on a t-shirt? When the stakes are low, as in image recognition, these errors may not matter much. But for some technology, being correct most of the time isn’t sufficient. We would simply not accept a pacemaker that operates correctly most of the time, or a plane that avoids the mountains with just 95% certainty. Technologies that demand this level of safety require a different approach to software development, and many applications of AI, such as self-driving cars or nursing robots, will be held to those high safety standards. This means society needs a language for communicating ethics to AI precisely – with 100% accuracy.

The Trolley Problem is a famous ethical dilemma that asks: if you are driving a trolley and see that it is going to hit and kill five pedestrians, but you could pull a lever to reroute the trolley so it instead hits and kills one pedestrian – would you do it? While it seems obvious that we want our self-driving cars to avoid pedestrians, it is less obvious what the car should do when it has no choice but to hit and kill a pedestrian or to drive off a cliff, killing the driver. Colin isn’t tackling the impossible feat of solving these ethical dilemmas; he is developing the language we need to communicate ethics to AI with the precision that machine learning can’t deliver. So who does decide how these robots will respond to ethical quandaries? While answering that isn’t part of Colin’s research, he believes it is best left to the communities the technologies will serve.
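To give a flavor of what such a language can look like (a toy example, not Colin’s actual formalism), a safety requirement for a car can be written in linear temporal logic, where G means “always” and X means “at the next step”:

\[
\mathbf{G}\,\neg\mathit{collision} \;\land\; \mathbf{G}\,\big(\mathit{obstacle\_ahead} \rightarrow \mathbf{X}\,\mathit{brake}\big)
\]

The first part says the car never collides; the second says that whenever an obstacle appears ahead, the very next action is to brake. Unlike a trained model’s 95%, a formula like this either provably holds for a system or it doesn’t.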

Colin doing a logical proof on a whiteboard with a 1/10 scale autonomous vehicle in the foreground.

The ArchIve: a (brief) history of AI

AI had its first wave in the decades leading up to the ’70s, when it was thought that logic systems (a way of communicating directly with computers) would run AI. Researchers also created perceptrons, which try to mimic a neuron in the brain by sorting data into two classes but, more importantly, have a very cool name. Perceptron! It sounds like a Spider-Man villain. However, logic and perceptrons turned out not to be particularly effective. There is a seemingly infinite number of possibilities and variables in the world, making it challenging to write comprehensive rules. Further, when an AI’s rules are incomplete, it can wander into a world it doesn’t know could even exist – and then it EXPLODES! Kind of. It hits a state governed by the Principle of Explosion, where, from a contradiction, everything becomes provable and chaos ensues. These challenges with using logic to develop AI led to the first “AI winter”. A highly relatable moment in history, given the number of times I stop working and take a nap because a problem is too challenging.
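For the curious, a perceptron really is that simple. Here is a minimal sketch of one making a binary call (the weights and data are chosen by hand for illustration, not learned):

```python
import numpy as np

def perceptron(x, w, b):
    """A single artificial 'neuron': weigh the inputs, add a bias,
    and output one of two classes (1 or 0)."""
    return 1 if np.dot(w, x) + b > 0 else 0

# Toy example: classify points by whether they sit above the
# line x1 + x2 = 1.
w = np.array([1.0, 1.0])
b = -1.0
print(perceptron(np.array([0.9, 0.8]), w, b))  # 1: above the line
print(perceptron(np.array([0.1, 0.2]), w, b))  # 0: below the line
```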

The second wave of AI blew up in the ’80s and ’90s with the development of machine learning methods, and in the mid-2000s it really took off thanks to software that can handle matrix computations rapidly. (And if that doesn’t mean anything to you, that’s okay. Just know that it basically means computers could do speedy, complicated math.) Additionally, high computational power meant revisiting the methods of the first wave: perceptrons could now be strung together to form a neural network, moving from binary categorization to complex recognition.
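“Stringing perceptrons together” really just means stacking those weighted sums into matrices. A minimal sketch (the layer sizes and random weights are invented for illustration):

```python
import numpy as np

# Layer weights: each row of W1 is one perceptron's weights.
rng = np.random.default_rng(0)
W1 = rng.normal(size=(4, 3))   # 3 inputs -> 4 hidden neurons
W2 = rng.normal(size=(2, 4))   # 4 hidden -> 2 outputs

def forward(x):
    """Two layers of perceptron-like units; the smooth tanh
    replaces the perceptron's hard 0/1 step."""
    hidden = np.tanh(W1 @ x)
    return np.tanh(W2 @ hidden)

print(forward(np.array([0.5, -1.0, 2.0])))  # 2 output values
```

This is why fast matrix math mattered so much: a whole layer of neurons is just one matrix multiplication.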

A bIography: Colin’s road to coding computer ethics

During his undergrad at Virginia Tech studying computer science, Colin ran into an ArachnId that left him bitten by a philosophy bug. This led to one of many philosophical dilemmas he’d enjoy grappling with: should he focus his studies on computer science or philosophy? After reading I, Robot, he answered that question with a “yes”, finding a kindred spirit in the novel’s robopsychologist. This began a career of combining computer science with philosophy and ethics: from his Master’s program, where he wove computer science into his philosophy lab’s research, to his current project developing a language to communicate ethics to machines with his advisor Houssam Abbas. However, throughout his journey, Colin has become less of a robopsychologist and more of a roboethicist.

Want more information on coding computer ethics? Us too. Be sure to listen live on Sunday, April 17th at 7PM on 88.7FM, or download the podcast if you missed it. Want to stay up to date with the world of roboethics? Find more from Colin at https://web.engr.oregonstate.edu/~sheablyc/.

Colin Shea-Blymyer: PhD student of computer science and artificial intelligence at Oregon State University

This post was written by Bryan Lynn.

Mighty (a)morphin’ power metals

This week we have a PhD candidate from the materials science program, Jaskaran Saini, joining us to discuss his work on the development of novel metallic glasses. But first, what exactly is a metallic glass, you may ask? Metallic glasses are metals or alloys with an amorphous structure: they lack the crystal lattices and crystal defects commonly found in standard crystalline metals. Forming a metallic glass requires extremely high cooling rates. Well, how high? A thousand to a million Kelvin per second! That high.
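To put that rate in perspective, a quick back-of-the-envelope calculation (with round, illustrative numbers rather than any specific alloy): cooling a melt by about 1000 K at a million Kelvin per second takes

\[
t = \frac{\Delta T}{\dot{T}} \approx \frac{1000\ \mathrm{K}}{10^{6}\ \mathrm{K/s}} = 1\ \mathrm{ms},
\]

so the atoms have roughly a millisecond to be frozen in place before they can organize into a crystal.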

The idea here is that the speed of cooling impacts the atomic structure – and this idea is not new, or limited to metals! For example, rocks such as granite, pumice, and obsidian can share a similar composition but have very different cooling times. The fastest cooling gives obsidian an amorphous structure, which means we could probably just start referring to it as rocky glass. But the uses of metallic glass extend far beyond those of rocks.

(Left) Melting the raw materials inside the arc-melter to make the alloy. The bright light visible in the image is the plasma arc, which reaches up to 3500 °C. The ring the arc is focused on is the molten alloy.
(Right) A metallic glass sample as it comes out of the arc-melter; the arc-melter can be seen in the background.
Close-ups of metallic glass buttons.

Why should we care about metallic glass? 

Metallic glasses are fundamentally cool, but in case that isn’t enough to pique your interest, they also have superpowers that’d make Magneto drool. They have two to three times the strength of steel, are incredibly elastic, have very high corrosion and wear resistance, and have a mirror-like surface finish. So how can we apply these super metals to science? Well, NASA is already on it and is beginning to use metallic glasses as gear material for motors. While the Curiosity rover expends 30% of its energy and three hours heating and lubricating its steel gears before it can operate, Curiosity Jr. won’t have to worry about that with metallic glass gears. NASA isn’t the only one hopping onto the metallic glass train. Apple is exploring these scratch-proof materials for iPhones, the US Army is using high-density hafnium-based metallic glasses for armor-penetrating military applications, and some professional tennis and golf players have even used these materials in their rackets and clubs. But it took a long time to get metallic glasses to the point where they’re being used in rovers and tennis rackets.

Metallic glass: a history

Metallic glasses first appeared in the 1960s, when Jaskaran’s academic great-grandfather (that is, his advisor’s advisor’s advisor), Pol Duwez, made them at Caltech. To achieve this special amorphous structure, a droplet of a gold-silicon alloy was cooled at a rate of over a million Kelvin per second; the end result was a roughly quarter-sized foil of metallic glass, thinner than a strand of hair. Fast forward to the ’80s, and researchers began producing larger metallic glasses. By the late ’90s and early 2000s, the thickness of the biggest metallic glass produced had already exceeded 1,000 times that of the original foil. However, with great size comes greater difficulty! If a piece is too thick, its interior can’t cool fast enough to achieve an amorphous structure. Creating larger pieces of metallic glass has proven extremely challenging – which makes it a great goal for graduate students and PIs willing to take it on.

Currently, the largest pieces of metallic glass are around 80 mm thick; however, they are based on precious metals such as palladium, silver, gold, and platinum, along with beryllium. This makes them impractical for two reasons. First, the obvious one: cost. Second, given the detrimental impact of mining precious and rare-earth metals, minimizing dependence on them can have a great positive impact on the environment.

World records you probably didn’t know existed until now

As part of Prof. Donghua Xu’s lab, Jaskaran is working on developing large metallic glasses from cheaper metals such as copper, nickel, aluminum, zirconium, and hafnium. Although Jaskaran’s metallic glasses typically contain at least three metallic elements, his research focuses on glasses based on copper and hafnium (those two metals make up the majority of the alloy). Not only has Jaskaran been wildly successful in creating glassy alloys from these elements, but he has also set TWO WORLD RECORDS. The previous world record for a copper-based metallic glass was 25 mm, which he broke with a 28.5 mm casting. As for hafnium, the previous record was 10 mm, which Jaskaran nearly doubled with a casting diameter of 18 mm. And mind you, these alloys contain no rare-earth or precious metals, so they are cost-effective, have incredible properties, and are far kinder to the environment!

The biggest copper-based metallic glass ever produced (world record sample).

Excited for more metallic glass content? Us too. Be sure to listen live on Sunday, February 6th at 7PM on 88.7FM, or download the podcast if you missed it. Want to stay up to date with the world of metallic glass? Follow Jaskaran on Twitter, Instagram or Google Scholar. We also learned that he produces his own music, and we listened to his track “Sephora”. You can find him on SoundCloud under his artist name, JSKRN.

Jaskaran Saini: PhD candidate from the materials science program at Oregon State University.

This post was written by Bryan Lynn and edited by Adrian Gallo and Jaskaran Saini.