Category Archives: Artificial Intelligence

Global swarming: getting robot swarms to perform intelligently

This week we have a robotics PhD student, Everardo Gonzalez, joining us to discuss his research on coordinating robots with artificial intelligence (AI). That doesn’t mean he dresses them up in matching bow ties (sadly); instead, he works on how to get a large collective of robots, also called a swarm, to work together towards a shared goal.

Why should we care about swarming robots? 

Aside from the potential for apocalyptic robot world domination, there are actually many applications for this technology. Some are just as terrifying. It could be applied to fully automated warfare – reducing accountability when no one is to blame for pulling the trigger (literally).

However, it could also be used to coordinate robots in healthcare and to organize fleets of autonomous vehicles, potentially making our lives, and our streets, safer. In the case of the fish-inspired Blue Bots, this kind of coordinated robot system can also help us gather information about our oceans as we work to address climate change.

Depiction of how the fish-inspired Blue Bots can observe their surroundings in a shared aquatic space, then send that information and receive feedback from the computer system. Driving the Blue Bots’ behavior is a network model, as depicted in the Agent A square.

#Influencer

Getting a group of robots to behave intelligently sounds like it’s just a problem of quantity; however, it’s not that simple. These bots can suffer from there being “too many cooks in the kitchen”: if all bots in the swarm are intelligent, they can start to hinder each other’s progress. Instead, the swarm needs a few leader bots, which are intelligent and capable of learning and trying new things, along with follower bots, which learn from their leaders. Essentially, the bots play a game of “Follow the Leaders”.

All robots receive feedback with respect to a shared objective, which is typical of AI training and allows the bots to infer which behaviors are effective. In this case, the leaders also get additional feedback on how well they are influencing their followers.

Unlike on social media, an influencer with too many followers is a bad thing – the bots can become ineffective. There’s a famous social experiment in which actors on a busy New York City street stopped to stare at a window to see if strangers would do the same. If there are not enough actors staring at the window, strangers are unlikely to respond; but as the number of actors increases, the likelihood of a stranger stopping to look also increases. Bot swarms likewise have an optimal number of leaders needed to exert the largest influence on their followers. Perhaps we’re much more like robots than the Turing test would have us believe.

Dot to dot

We’re a long way from intelligent robot swarms, though: Everardo is using simplified 2D particle simulations to begin tackling this problem. In this case particles replace the robots – they’re essentially just dots (rodots?) in a two-dimensional shared environment. The objectives, or points of interest, for these dot bots are more dots! Despite these simplifications, translating system feedback into a performance review for the leaders is still a challenging computational problem. Everardo starts by asking “what if the leader had not been there?”, but then you have to ask “what if the followers that followed that leader did something else?” – and then you’ve opened a can of worms reminiscent of Smash Mouth, where the “what if”s start coming and they don’t stop coming.
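To make that counterfactual question concrete, here’s a toy sketch of a “difference reward”: the team’s score minus the score the team would have earned had one agent not been there. Everything here – the coverage-counting objective, the coordinates, the radius – is made up for illustration; it is not Everardo’s actual code, just the shape of the idea.

```python
# Toy sketch of the "what if the leader had not been there?" question
# in a 2D dot world. Hypothetical objective and numbers, not research code.

def global_reward(agent_positions, points_of_interest, radius=1.0):
    """Count points of interest observed by at least one agent."""
    observed = 0
    for px, py in points_of_interest:
        if any(abs(ax - px) <= radius and abs(ay - py) <= radius
               for ax, ay in agent_positions):
            observed += 1
    return observed

def difference_reward(agent_index, agent_positions, points_of_interest):
    """Global reward minus the reward the team would earn without this agent."""
    without = [p for i, p in enumerate(agent_positions) if i != agent_index]
    return (global_reward(agent_positions, points_of_interest)
            - global_reward(without, points_of_interest))

agents = [(0.0, 0.0), (5.0, 5.0), (5.2, 5.2)]  # agent 0 alone covers POI A
pois = [(0.5, 0.5), (5.1, 5.1)]
print(difference_reward(0, agents, pois))  # 1: POI A goes unobserved without agent 0
print(difference_reward(2, agents, pois))  # 0: agent 1 still covers POI B
```

One wrinkle this naive version ignores – and the can of worms Everardo describes – is that in a real swarm, removing a leader also changes what the followers would have done.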

Everardo Gonzalez

What if you wanted to know more about swarming robots? Be sure to listen live on Sunday February 26th at 7PM on 88.7FM, or download the podcast if you missed it. To learn a bit more about Everardo’s work with swarms and all things robotics, check out his portfolio at everardog.github.io

AI that benefits humans and humanity

When you think about artificial intelligence or robots in the everyday household, your first thought might be that it sounds like science fiction – like something out of the 1999 cult classic film “Smart House”. But it’s likely you have some of this technology in your home already – if you own a Google Home, Amazon Alexa, Roomba, smart watch, or even just a smartphone, you’re already plugged into this network of AI in the home. This technology can offer great benefits to its users, from simply asking Google to set an alarm to wake you up the next day, to wearable smart devices that collect health data such as heart rate. AI is also being used to improve assistive technology – technology that improves the lives of disabled or elderly individuals. However, the rapid explosion in the development and popularity of this tech also brings risks to consumers: there isn’t great legislation yet about the privacy of, say, healthcare data collected by such devices. Further, as we discussed with another guest a few weeks ago, there is the issue of coding ethics into AI – how can we as humans program robots in such a way that they learn to operate in an ethical manner? Who defines what that is? And on the human side – how do we ensure that human users of such technology can actually trust it, especially if it will be used in a way that could benefit the user’s health and wellness?

Anna Nickelson, a fourth-year PhD student in Kagan Tumer’s lab in the Collaborative Robotics and Intelligent Systems (CoRIS) Institute in the Department of Mechanical, Industrial and Manufacturing Engineering, joins us this week to discuss her research, which touches on several of these aspects of using technology as part of healthcare. Also a former Brookings Institution intern, Anna incorporates not just the coding of robots but far-reaching policy and legislation goals into her work. Her research is driven by a high-level goal: how do we create AI that benefits humans and humanity?

Anna Nickelson, fourth year PhD student in the Collaborative Robotics and Intelligent Systems Institute.

AI for social good

When we think about how to create technology that is beneficial, Anna says that there are four major considerations in play. First is the creation of the technology itself – the hardware, the software; how technology is coded, how it’s built. The second is technologists and the technology industry – how do we think about and create technologies beyond the capitalist mindset of what will make the most money? Third is considering the general public’s role: what is the best way to educate people about things like privacy, the limitations and benefits of AI, and how to protect themselves from harm? Finally, she says we must also consider policy and legislation surrounding beneficial tech at all levels, from local ordinances to international guidelines. 

Anna’s current research with Dr. Tumer is funded by the NSF AI Institute for Collaborative Assistance and Responsive Interaction for Networked Groups (AI-CARING), which focuses on “personalized, longitudinal, collaborative AI, enabling the development of AI systems that learn personalized models of user behavior…and integrate that knowledge to support people and AIs working together”, as per their website. The institute is a collaboration between five universities, including Oregon State University and OHSU. What this looks like for Anna is lots of code writing and simulations studying how AI systems make trade-offs between different objectives. For this she looks at machine learning for decision making in robots, and at how multiple robots or AIs can work together towards a specific task without necessarily having to communicate with each other directly. Each robot or AI may have different considerations that factor into how it accomplishes the objective, so part of her goal is to develop a framework for the different individuals to make decisions as part of a group.

With an undergraduate degree in math, a background in project management in the tech industry, engineering and coding skills, and experience working with a think tank in DC on tech-related policy, Anna is uniquely situated to address the major questions about developing technology for social good in a way that mitigates risk. She came to graduate school at Oregon State with this interdisciplinary goal in mind. Her personal life goal is to get experience in each sector so she can bring in a wide range of perspectives and ideas. “There are quite a few people working on tech policy right now, but very few people have the breadth of perspective on it from the low level to the high level,” she says.

If you are interested in hearing more about Anna’s life goals and the intersection of artificial intelligence, healthcare, and policy, join us live at 7 PM on Sunday, May 7th on https://kbvrfm.orangemedianetwork.com/, or after the show wherever you find your podcasts. 

I, Roboethicist

This week we have Colin Shea-Blymyer, a PhD student from OSU’s new AI program in the School of Electrical Engineering and Computer Science, joining us to talk about coding computer ethics. Advancements in artificial intelligence (AI) are exploding, and while many of us are excited for a world where our Roombas evolve into Rosies (à la The Jetsons), some of these technological advancements require grappling with ethical dilemmas. How these AI technologies should make their decisions is a question with no simple answer, best left to be debated by the spirits of John Stuart Mill and Immanuel Kant. However, as a society, we are in dire need of a way to communicate ethics in a language that machines can understand – and this is exactly what Colin is developing.

Making An Impact: why coding computer ethics matters

A lot of AI is developed through machine learning – a process in which software becomes more accurate without being explicitly programmed to do so. One example is image recognition software: feed an algorithm more and more photos of cats, and it gets better at recognizing what is and isn’t a cat. However, these algorithms are not perfect. How will the program treat a stuffed-animal cat? How will it categorize the image of a cat on a t-shirt? When the stakes are low, as in image recognition, these errors may not matter much. But for some technology, being correct most of the time isn’t sufficient. We would simply not accept a pacemaker that operates correctly most of the time, or a plane that avoids crashing into the mountains with just 95% certainty. Technologies that require higher precision for safety also require a different approach to developing that software, and many applications of AI – such as self-driving cars or nursing robots – will demand high safety standards. This means society needs a language that can communicate ethics to AI precisely, with no room for misinterpretation.
The Trolley Problem is a famous ethical dilemma that asks: if you are driving a trolley and see that it is going to hit and kill five pedestrians, but you could pull a lever to reroute the trolley so it instead hits and kills one pedestrian – would you do it? While it seems obvious that we want our self-driving cars to avoid hitting pedestrians, it is less obvious what the car should do when its only options are to hit and kill a pedestrian or to drive off a cliff, killing the driver. Although Colin isn’t tackling the impossible feat of solving these ethical dilemmas, he is developing the language we need to communicate ethics to AI with an exactness that machine learning can’t provide. So who does decide how these robots will respond to ethical quandaries? While it’s not part of Colin’s research, he believes the question is best answered by the communities the technologies will serve.

Colin doing a logical proof on a whiteboard with a 1/10 scale autonomous vehicle in the foreground.

The ArchIve: a (brief) history of AI

AI had its first wave in the 1970s, when it was thought that logic systems (a way of communicating directly with computers) would run AI. Researchers also created perceptrons, which try to mimic a neuron in the brain by sorting data into two binary classes – but more importantly, which have a very cool name. Perceptron! It sounds like a Spider-Man villain. However, logic and perceptrons turned out not to be particularly effective. There are a seemingly infinite number of possibilities and variables in the world, making it challenging to write a comprehensive set of rules. Further, when an AI’s rules are incomplete, it can run into a situation it doesn’t know could even exist – and then it EXPLODES! Kind of. It hits a contradiction, and by the principle of explosion everything becomes provable and chaos ensues. These challenges with using logic to develop AI led to the first “AI winter”. A highly relatable moment in history, given the number of times I stop working and take a nap because a problem is too challenging.
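For the curious, a perceptron really is this small. Here’s a minimal sketch of one learning the logical AND function – illustrative Python only, since the first-wave originals were closer to hardware than code:

```python
# A minimal perceptron: a weighted sum pushed through a threshold,
# sorting inputs into two binary classes. Illustration, not history.

def predict(weights, bias, x):
    """Fire (1) if the weighted sum of inputs clears the threshold."""
    return 1 if sum(w * xi for w, xi in zip(weights, x)) + bias > 0 else 0

def train(samples, labels, epochs=10):
    """Classic perceptron learning rule: nudge weights toward each mistake."""
    weights, bias = [0, 0], 0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            error = y - predict(weights, bias, x)  # -1, 0, or +1
            weights = [w + error * xi for w, xi in zip(weights, x)]
            bias += error
    return weights, bias

# Learn logical AND -- linearly separable, so a lone perceptron can manage.
# (XOR famously isn't, one reason the first wave stalled.)
X = [(0, 0), (0, 1), (1, 0), (1, 1)]
y = [0, 0, 0, 1]
w, b = train(X, y)
print([predict(w, b, x) for x in X])  # [0, 0, 0, 1]
```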

The second wave of AI blew up in the 80s and 90s with the development of machine learning methods, and in the mid-2000s it really took off thanks to software that can handle matrix computations rapidly. (If that doesn’t mean anything to you, that’s okay. Just know that it basically means computers could do speedy, complicated math.) That computational power also meant the first-wave methods of the 70s could be revisited: perceptrons could be strung together to form a neural network, moving from binary categorization to complex recognition.

A bIography: Colin’s road to coding computer ethics

During his undergrad at Virginia Tech studying computer science, Colin ran into an ArachnId that left him bitten by a philosophy bug. This led to one of many philosophical dilemmas he’d enjoy grappling with: whether to focus his studies on computer science or philosophy. After reading I, Robot, he answered that question with a “yes”, finding a kindred spirit in the novel’s robopsychologist. What followed was a future combining computer science with philosophy and ethics: from his Master’s program, where he wove computer science into his philosophy lab’s research, to his current project developing a language to communicate ethics to machines with his advisor Houssam Abbas. Throughout the journey, though, Colin has become less of a robopsychologist and more of a roboethicist.

Want more information on coding computer ethics? Us too. Be sure to listen live on Sunday, April 17th at 7PM on 88.7FM, or download the podcast if you missed it. Want to stay up to date with the world of roboethics? Find more from Colin at https://web.engr.oregonstate.edu/~sheablyc/.

Colin Shea-Blymyer: PhD student of computer science and artificial intelligence at Oregon State University

This post was written by Bryan Lynn.

The rigamarole of RNA, ribosomes, and machine learning

Basic biology and computer science is probably not an intuitive pairing of scientific disciplines – not as intuitive as, say, biology and chemistry (often combined as biochem). But for Joseph Valencia, a third-year PhD student at OSU, the bridge between these two disciplines is a view of life at the molecular scale as a computational process, in which cells store, transmit, and interpret the information necessary for survival.

Think back to your 9th or 10th grade biology class and you will (probably? maybe?) vaguely remember learning about DNA, RNA, proteins, ribosomes, and much more. In case your memory is a little foggy, here is a short (and very simplified) recap of the basic biology. DNA is the information storage component of cells. RNA, the focus of Joseph’s research, is the messenger that carries information from DNA to direct the synthesis of proteins. This process is called translation, and ribosomes are required to carry it out. Ribosomes are complex molecular machines, and many of them can be found in each of our cells. Their job is to interpret the RNA: they attach themselves to the RNA, read the transcript of information it contains, and produce a protein. The protein folds into a specific 3D shape, and that shape determines its function. What do proteins do? Basically control everything in our bodies! Proteins include the enzymes that control everything from muscle repair to eye twitching. The amazing thing about this process is that it is not specific to humans – it is a fundamental part of basic biology that occurs in basically every living thing!
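To make the recap concrete, here’s a toy sketch of the ribosome’s job: read an RNA transcript three letters (one codon) at a time and emit amino acids until a stop codon. The codon table here is deliberately tiny – the real genetic code has 64 codons – and this is an illustration, not research code:

```python
# Toy ribosome: scan for the start codon AUG, then translate codon by
# codon until a stop codon. Only a handful of codons are mapped here.

CODON_TABLE = {
    "AUG": "Met",  # start codon; also codes for methionine
    "UUU": "Phe", "GGC": "Gly", "GCU": "Ala",
    "UAA": "STOP", "UAG": "STOP", "UGA": "STOP",
}

def translate(rna):
    """Translate an RNA string starting at the first AUG."""
    start = rna.find("AUG")
    if start == -1:
        return []          # no start codon: nothing is translated
    protein = []
    for i in range(start, len(rna) - 2, 3):
        amino = CODON_TABLE.get(rna[i:i + 3], "???")
        if amino == "STOP":
            break
        protein.append(amino)
    return protein

print(translate("GGAUGUUUGGCUAA"))  # ['Met', 'Phe', 'Gly']
```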

An open reading frame (ORF) is a stretch of nucleotides beginning with a start codon and ending with a stop codon. Ribosomes bind to RNA transcripts and translate certain ORFs into proteins. The Kozak sequence (bottom right, from Wikipedia) depicts the nucleotides that commonly occur around the start codons of translated ORFs.

So now that you are refreshed on your high school biology, let us tie all of these ‘basics’ to what Joseph does for his research. Joseph’s research focuses on RNA, which can be broken down into two main groups: messenger RNA (mRNA) and non-coding RNA. mRNA is what ends up turning into a protein following translation by a ribosome, whereas with long non-coding RNA, the ribosome decides not to turn it into a protein. While we are able to distinguish between the two types of RNA, we do not fully understand how a ribosome decides to turn one RNA (mRNA) into a protein and not another (long non-coding RNA). That’s where Joseph and computer science come in – Joseph is building a machine learning model to try to better understand this ribosomal decision-making process.

Machine learning, a field within artificial intelligence, can be defined as any approach that creates an algorithm or model from data rather than programmer-specified rules. Lots of data. Modern machine learning models tend to keep learning and improving as more data is fed to them. While there are many different types of machine-learning approaches, Joseph is interested in one called natural language processing. You are probably familiar with an example of natural language processing at work – Google Translate! The model Joseph is building is in fact not too dissimilar from Google Translate, at least in the idea behind it; except that instead of taking English and translating it into Spanish, Joseph’s model takes RNA and translates (or doesn’t translate) it into a protein. In Joseph’s own words, “We’re going through this whole rigamarole [aka his PhD] to understand how the ins [RNA & ribosomes] create the outs [proteins].”

A high-level diagram of Joseph’s deep learning model architecture.

But it is not as easy as it sounds. The very complexity that makes machine learning models so powerful also makes it hard to interpret why a model is doing what it is doing. Even a highly performing machine learning model may not capture the exact biological rules that govern translation, but successfully interpreting its learned patterns can help in formulating testable hypotheses about this fundamental life process.

To hear more about how Joseph is building this model, how it is going, and what brought him to OSU, listen to the podcast episode! Also, you can check out Joseph’s personal website to learn more about him & his work!

How many robots does it take to screw in a light bulb?

As technology continues to improve over the coming years, we are beginning to see increased integration of robotics into our daily lives. Imagine if these robots were capable of receiving general instructions for a task, and could learn, work, and communicate as a team to complete that task with no additional guidance. Our guest this week on Inspiration Dissemination, Connor Yates, a robotics PhD student in the College of Engineering, studies artificial intelligence and machine learning and wants to make that hypothetical scenario a reality. Connor and other members of the Autonomous Agents and Distributed Intelligence Laboratory are keenly interested in distributed reinforcement learning, optimization, and control in large, complex robotic systems. Applications include multi-robot coordination, mobile robot navigation, transportation systems, and intelligent energy management.

Connor Yates.

A longtime Beaver and native Oregonian, Connor grew up on the eastern side of the state. His father was a botanist, which naturally translated to a lot of time spent in the woods during his childhood. This, however, did not deter his aspirations of becoming a mechanical engineer building rockets for NASA. Fast forward to his first term of undergraduate study here at Oregon State University – while taking his first mechanical engineering course, he realized rocket science wasn’t the field he wanted to pursue. After taking numerous different courses, one piqued his interest: computer science. He went on to flourish in the computer science program, eventually meeting his current PhD advisor, Dr. Kagan Tumer. Connor worked with Dr. Tumer for two of his undergraduate years and completed an undergraduate honors thesis investigating how to better gauge the intent of multiple robots working together in one system.

Connor taking in a view at Glacier National Park 2017.

Currently, Connor is working on improving machines’ ability to learn by implementing a reward system; think of a “good robot” / “bad robot” system. Using computer simulations, a robot can be assigned a general task. Robots usually begin learning a task with many failed attempts, but through the reward system, good behaviors can be reinforced and behaviors unrelated to the assigned task discouraged. Over thousands of trials, the robot eventually learns what to do and completes the task. Simple, right? However, this becomes far more complex when a team of robots must learn a task together. Connor focuses on rewarding not just successful completion of an assigned task, but also progress toward completing it. For example, say you have a table that requires six robots to move. When two robots attempt the task and fail, rather than just registering a failed task, the robots can learn that two robots are not enough and recruit more until the task is completed. This is a stepwise progression toward success rather than an all-or-nothing situation. It is Connor’s hope that one day a robot team could not only complete a task but also report the reasons behind the decisions it made along the way.
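The difference between all-or-nothing feedback and stepwise progress can be sketched in a few lines. The numbers and functions below are hypothetical, purely to show the reward-shaping idea – they are not Connor’s actual reward functions:

```python
# Toy reward shaping for the table-moving example: instead of paying out
# only on success, also give partial credit for recruiting robots toward
# the six the task needs. Hypothetical numbers, for illustration only.

ROBOTS_NEEDED = 6

def all_or_nothing_reward(robots_at_table):
    """Pay out only when enough robots show up to move the table."""
    return 1.0 if robots_at_table >= ROBOTS_NEEDED else 0.0

def stepwise_reward(robots_at_table):
    """Partial credit proportional to progress toward the needed team size."""
    return min(robots_at_table, ROBOTS_NEEDED) / ROBOTS_NEEDED

for n in (2, 4, 6):
    print(n, all_or_nothing_reward(n), stepwise_reward(n))
# With 2 robots: 0.0 vs ~0.33 -- the shaped reward signals "getting warmer",
# so failed attempts still teach the team something.
```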

In Connor’s free time he enjoys getting involved in the many PAC courses that are offered here at Oregon State University, getting outside, and trying to teach his household robot how to bring him a beer from the fridge.

Tune in to 88.7 FM at 7:00 PM Sunday evening to hear more about Connor and his research on artificial intelligence, or stream the program live.

Rise of the Robots


Image from: http://www.stuff.co.nz/technology/2403589/Artificial-intelligence-the-future-of-robots

Tonight at 7pm Aswin Raghavan will join us on Inspiration Dissemination. Tune in at 88.7 KBVR Corvallis or stream live here to learn about his project preparing for the robotic revolution (imagine household robots and self-driving cars)! Aswin isn’t worried about these machines coming to conquer humanity; in fact, he’s hoping that increased use of artificial intelligence can make many aspects of human society more efficient.

As a fifth-year PhD student in computer science working under Dr. Tadepalli, Aswin doesn’t build robots or focus much on what the specific application of his AI will be – he works on ‘automated planning’. This means that Aswin develops algorithms that computers use in decision-making processes. Computers running these decision-making programs can manage many human affairs more efficiently, everything from coordinating the traffic lights in your home town to running a loading dock full of automated cranes.
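To give a flavor of automated planning, here’s a toy example of value iteration, a classic textbook building block for planning under uncertainty: the computer repeatedly estimates how valuable each situation is, then acts to reach the most valuable one. This is a deliberately tiny illustration, not Aswin’s research code, which tackles far larger and more structured problems:

```python
# Value iteration on a tiny decision problem: positions 0..3 on a line,
# with a goal at position 3 worth a reward of 1. Each sweep, the planner
# updates how valuable each position is, assuming it will act greedily.

STATES = range(4)
GOAL = 3
GAMMA = 0.9  # discount factor: reward earned later is worth slightly less

def value_iteration(sweeps=50):
    V = [0.0] * len(STATES)
    for _ in range(sweeps):
        for s in STATES:
            if s == GOAL:
                V[s] = 0.0           # terminal state: nothing left to earn
                continue
            right = (1.0 if s + 1 == GOAL else 0.0) + GAMMA * V[s + 1]
            stay = GAMMA * V[s]
            V[s] = max(right, stay)  # plan the better of the two actions
    return V

print([round(v, 2) for v in value_iteration()])  # [0.81, 0.9, 1.0, 0.0]
```

The values fall off with distance from the goal, which is exactly what lets a planner read off the best action: always move toward higher value.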

From helping your local fire station dispatch vehicles more efficiently to “smart cities” that manage the provision of utilities to millions, artificial intelligence is on the rise! Join us tonight to find out how!

Giving the Cold Soul of a Machine a Burning Desire to Teach Your Children Well:

Tonight at 7 pm Pacific time on 88.7 KBVR Corvallis, Beatrice Moissinac comes into the studio at Inspiration Dissemination to talk about Artificial Intelligence and fire safety training. If you’re curious how those two subjects are related, tune in live or stream the episode here!


Illustration: Christine Daniloff/MIT, http://newsoffice.mit.edu/2013/center-for-brains-minds-and-machines-0909

A PhD student in Oregon State’s Computer Science program, Beatrice works under Prasad Tadepalli and collaborates with Enterprise Risk Services to design computer programs that guide students through a virtual fire safety training experience.

What kind of virtual training? As it turns out, Oregon State has an entire virtual campus dedicated to it in the online game Second Life (a virtual world that may or may not use more energy than some South American countries). Using one of the dorms in the Second Life version of OSU, Beatrice designs a training program that responds to individual students’ needs. Students are immersed in a fully interactive virtual world where they learn what to do in the event that their dorm catches fire.


http://www.zdnet.com/article/indoor-navigation-tracks-firefighters-in-blazing-buildings/

By analyzing what knowledge has not been learned, and by determining the best way to challenge the student, the artificial intelligence program is intended to provide a perfectly matched learning environment. This is crucial for training in something like fire safety, or other natural disasters, since training scenarios in real life could not be safely (or economically) constructed.

Beatrice is also the co-program manager for ChickTech Corvallis, a local chapter of the national non-profit that organizes science and technology outreach and communication projects for high school girls. As a woman in computer science – a program that (at OSU) is still less than 10% female – Beatrice understands that the gender gap in science and technology studies is still very real in the United States. With her interests in teaching and computer science combined, Beatrice continues to work for the academic benefit of the next generation. If she isn’t teaching a computer to teach people, then she’s teaching them herself!