Category Archives: Robotics

Global swarming: getting robot swarms to perform intelligently

This week we have a robotics PhD student, Everardo Gonzalez, joining us to discuss his research on coordinating robots with artificial intelligence (AI). That doesn’t mean he dresses them up in matching bow ties (sadly); instead, he works on how to get a large collective of robots, also called a swarm, to work together towards a shared goal. 

Why should we care about swarming robots? 

Aside from the potential for apocalyptic robot world domination, there are actually many applications for this technology. Some are just as terrifying. It could be applied to fully automated warfare, reducing accountability when no one is to blame for pulling the trigger (literally).

However, it could also be used to coordinate robots in healthcare and to organize fleets of autonomous vehicles, potentially making our lives, and our streets, safer. In the case of the fish-inspired Blue Bots, this kind of coordinated robot system can also help us gather information about our oceans as we try to address climate change.

Depiction of how the fish-inspired Blue Bots can observe their surroundings in a shared aquatic space, then send that information to, and receive feedback from, the computer system. Driving the Blue Bots’ behavior is a network model, as depicted in the Agent A square.

#Influencer

Having a group of robots behaving intelligently sounds like a simple problem of quantity; however, it’s not that simple. These bots can also suffer from there being “too many cooks in the kitchen”: if all bots in the swarm are intelligent, they can start to hinder each other’s progress. Instead, the swarm needs a few leader bots, which are intelligent and capable of learning and trying new things, along with follower bots, which learn from their leaders. Essentially, the bots play a game of “Follow the Leaders”.

All robots receive feedback with respect to a shared objective, which is typical of AI training and allows the bots to infer which behaviors are effective. In this case, the leaders also get additional feedback on how well they are influencing their followers. 

Unlike on social media, an influencer with too many followers is a bad thing here – the bots can become ineffective. There’s a famous social experiment in which actors on a busy New York City street stopped to stare at a window to see whether strangers would do the same. If there are not enough actors staring at the window, strangers are unlikely to respond; but as the number of actors increases, the likelihood of a stranger stopping to look also increases. The bot swarms likewise have an optimal number of leaders needed to exert the largest influence on their followers. Perhaps we’re much more like robots than the Turing test would have us believe. 
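To make the “Follow the Leaders” idea concrete, here is a minimal sketch in Python – purely illustrative, not Everardo’s actual code; the movement rules, the speed, and the shared-feedback measure are all made-up assumptions. Leaders head toward a goal, followers chase the nearest leader, and the whole swarm is scored against one shared objective:

```python
import math

def step_swarm(leaders, followers, goal, speed=0.1):
    """One update: leaders head for the goal; followers chase the nearest leader."""
    new_leaders = []
    for lx, ly in leaders:
        # Leaders are the "intelligent" bots: here they simply move toward the goal.
        dx, dy = goal[0] - lx, goal[1] - ly
        d = math.hypot(dx, dy) or 1.0
        new_leaders.append((lx + speed * dx / d, ly + speed * dy / d))
    new_followers = []
    for fx, fy in followers:
        # Followers are simpler: they just move toward the closest leader.
        nx, ny = min(new_leaders, key=lambda l: math.hypot(l[0] - fx, l[1] - fy))
        dx, dy = nx - fx, ny - fy
        d = math.hypot(dx, dy) or 1.0
        new_followers.append((fx + speed * dx / d, fy + speed * dy / d))
    return new_leaders, new_followers

def shared_feedback(leaders, followers, goal):
    """Shared objective: average distance of the swarm to the goal (lower is better)."""
    bots = leaders + followers
    return sum(math.hypot(x - goal[0], y - goal[1]) for x, y in bots) / len(bots)
```

Run the update in a loop and the shared feedback improves over time, even though the followers never look at the goal directly – they only ever look at a leader.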

Dot to dot

We’re a long way from intelligent robot swarms, though, as Everardo is using simplified 2D particle simulations to begin to tackle this problem. In this case the particles replace the robots: they are essentially just dots (rodots?) in a shared environment with only two dimensions. The objectives, or points of interest, for these dot bots are more dots! Despite these simplifications, translating system feedback into a performance review for the leaders is still a challenging problem to solve computationally. Everardo starts by asking “what if the leader had not been there?”, but then you have to ask “what if the followers that followed that leader did something else?” – and then you’ve opened a can of worms reminiscent of Smash Mouth, where the “what if”s start coming and they don’t stop coming.
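“What if the leader had not been there?” can be asked quite literally in code by re-scoring the system with one bot removed. The sketch below is in the spirit of that counterfactual idea, but everything in it – the toy scoring function, the observation radius – is an illustrative assumption, not the actual implementation:

```python
import math

def global_score(bot_positions, poi, radius=1.0):
    """Toy shared objective: how many points of interest are observed.
    A POI counts as observed if any bot is within `radius` of it."""
    observed = 0
    for px, py in poi:
        if any(math.hypot(px - x, py - y) <= radius for x, y in bot_positions):
            observed += 1
    return observed

def counterfactual_credit(bot_positions, poi, i):
    """Credit for bot i: the global score with everyone present, minus the
    score in a counterfactual world where bot i had not been there."""
    without_i = bot_positions[:i] + bot_positions[i + 1:]
    return global_score(bot_positions, poi) - global_score(without_i, poi)
```

A bot that merely duplicates a teammate’s coverage earns zero credit, while a bot that is the only one watching a point of interest earns full credit – which is exactly the kind of performance review the leaders need.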

Everardo Gonzalez

What if you wanted to know more about swarming robots? Be sure to listen live on Sunday February 26th at 7PM on 88.7FM, or download the podcast if you missed it. To learn a bit more about Everardo’s work with swarms and all things robotics, check out his portfolio at everardog.github.io

AI that benefits humans and humanity

When you think about artificial intelligence or robots in the everyday household, your first thought might be that it sounds like science fiction – like something out of the 1999 cult classic film “Smart House”. But it’s likely you have some of this technology in your home already – if you own a Google Home, Amazon Alexa, Roomba, smart watch, or even just a smartphone, you’re already plugged into this network of AI in the home. This technology can offer great benefits to its users, spanning from simply asking Google to set an alarm to wake you up the next day, to wearable smart devices that can collect health data such as heart rate. AI is also currently being used to improve assistive technology, or technology that is used to improve the lives of disabled or elderly individuals.

However, the rapid explosion in development and popularity of this tech also brings risks to consumers: there isn’t great legislation yet about the privacy of, say, healthcare data collected by such devices. Further, as we discussed with another guest a few weeks ago, there is the issue of coding ethics into AI – how can we as humans program robots in such a way that they learn to operate in an ethical manner? Who defines what that is? And on the human side – how do we ensure that human users of such technology can actually trust it, especially if it will be used in a way that could benefit the user’s health and wellness?

Anna Nickelson, a fourth-year PhD student in Kagan Tumer’s lab in the Collaborative Robotics and Intelligent Systems (CoRIS) Institute in the Department of Mechanical, Industrial and Manufacturing Engineering, joins us this week to discuss her research, which touches on several of these aspects regarding the use of technology as part of healthcare. Also a former Brookings Institution intern, Anna incorporates not just coding of robots but far-reaching policy and legislation goals into her work. Her research is driven by a very high-level goal: how do we create AI that benefits humans and humanity?

Anna Nickelson, fourth year PhD student in the Collaborative Robotics and Intelligent Systems Institute.

AI for social good

When we think about how to create technology that is beneficial, Anna says that there are four major considerations in play. First is the creation of the technology itself – the hardware, the software; how technology is coded, how it’s built. The second is technologists and the technology industry – how do we think about and create technologies beyond the capitalist mindset of what will make the most money? Third is considering the general public’s role: what is the best way to educate people about things like privacy, the limitations and benefits of AI, and how to protect themselves from harm? Finally, she says we must also consider policy and legislation surrounding beneficial tech at all levels, from local ordinances to international guidelines. 

Anna’s current research with Dr. Tumer is funded by the NSF AI Institute for Collaborative Assistance and Responsive Interaction for Networked Groups (AI-CARING), an institute through the National Science Foundation that focuses on “personalized, longitudinal, collaborative AI, enabling the development of AI systems that learn personalized models of user behavior…and integrate that knowledge to support people and AIs working together”, as per their website. The institute is a collaboration between five universities, including Oregon State University and OHSU. What this looks like for Anna is lots of code writing and simulations studying how AI systems make trade-offs between different objectives. For this she looks at machine learning for decision making in robots, and how multiple robots or AIs can work together towards a specific task without necessarily having to communicate with each other directly. Each robot or AI may have different considerations that factor into how they accomplish their objective, so part of her goal is to develop a framework for the different individuals to make decisions as part of a group.
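As a toy illustration of trading off between objectives – not Anna’s actual framework; the action names, scores, and weighting scheme are invented for this example – one simple approach is to combine each action’s per-objective scores with weights and pick the best-scoring action:

```python
def choose_action(actions, objectives, weights):
    """Pick the action with the best weighted combination of objective scores.
    `objectives` maps each action to a tuple of per-objective scores."""
    def scalarize(action):
        # Weighted sum: one common (and simple) way to collapse many
        # objectives into a single number to compare actions by.
        return sum(w * s for w, s in zip(weights, objectives[action]))
    return max(actions, key=scalarize)

# Hypothetical example: two routes scored on (speed, safety).
objectives = {"fast route": (0.9, 0.2), "safe route": (0.4, 0.9)}
```

Shifting the weights shifts the decision: weighting speed heavily picks the fast route, while weighting safety heavily picks the safe one – the trade-off is made explicit rather than hidden.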

With an undergraduate degree in math, a background in project management in the tech industry, engineering and coding skills, and experience working with a think tank in DC on tech-related policy, Anna is uniquely situated to address the major questions about developing technology for social good in a way that mitigates risk. She came to graduate school at Oregon State with this interdisciplinary goal in mind. Her personal life goal is to get experience in each sector so she can bring in a wide range of perspectives and ideas. “There are quite a few people working on tech policy right now, but very few people have the breadth of perspective on it from the low level to the high level,” she says. 

If you are interested in hearing more about Anna’s life goals and the intersection of artificial intelligence, healthcare, and policy, join us live at 7 PM on Sunday, May 7th on https://kbvrfm.orangemedianetwork.com/, or after the show wherever you find your podcasts. 

I, Roboethicist

This week we have Colin Shea-Blymyer, a PhD student from OSU’s new AI program in the departments of Electrical Engineering and Computer Science, joining us to talk about coding computer ethics. Advancements in artificial intelligence (AI) are exploding, and while many of us are excited for a world where our Roombas evolve into Rosies (à la The Jetsons), some of these technological advancements require grappling with ethical dilemmas. Determining how these AI technologies should make their decisions is a question that simply can’t be answered, and is best left to be debated by the spirits of John Stuart Mill and Immanuel Kant. However, as a society, we are in dire need of a way to communicate ethics in a language that machines can understand – and this is exactly what Colin is developing.

Making An Impact: why coding computer ethics matters

A lot of AI is developed through machine learning – a process where software becomes more accurate without being explicitly programmed to do so. One example is image recognition software: fed more and more photos of cats, an algorithm gets better at recognizing what is and isn’t a cat. However, these algorithms are not perfect. How will the program treat a stuffed animal of a cat? How will it categorize the image of a cat on a t-shirt? When the stakes are low, as in image recognition, these errors may not matter much. But for some technology, being correct most of the time isn’t sufficient. We would simply not accept a pacemaker that operates correctly most of the time, or a plane that avoids crashing into the mountains with just 95% certainty. Technologies that require higher precision for safety also require a different approach to developing their software, and many applications of AI will require high safety standards – such as self-driving cars or nursing robots. This means society needs a language for communicating ethics to AI precisely, and with 100% accuracy. 
The Trolley Problem is a famous ethical dilemma that asks: if you are driving a trolley and see that it is going to hit and kill five pedestrians, but you could pull a lever to reroute the trolley to instead hit and kill one pedestrian – would you do it? While it seems obvious that we want our self-driving cars to not hit pedestrians, what is less obvious is what the car should do when it has no choice but to hit and kill a pedestrian or drive off a cliff, killing the driver. Although Colin isn’t tackling the impossible feat of solving these ethical dilemmas, he is developing the language we need to communicate ethics to AI with the precision that we can’t achieve through machine learning. So who does decide how these robots will respond to ethical quandaries? While not part of Colin’s research, he believes this is best left answered by the communities the technologies will serve.

Colin doing a logical proof on a whiteboard with a 1/10 scale autonomous vehicle in the foreground.

The ArchIve: a (brief) history of AI

AI had its first wave in the ’70s, when it was thought that logic systems (a way of communicating directly with computers) would run AI. Researchers also created perceptrons, which try to mimic a neuron in the brain by sorting data into binary classes but, more importantly, have a very cool name. Perceptron! It sounds like a Spider-Man villain. However, logic and perceptrons turned out not to be particularly effective. There are a seemingly infinite number of possibilities and variables in the world, making it challenging to write a comprehensive set of rules. Further, when an AI’s rules are incomplete, it has the potential to encounter a world it doesn’t know could even exist – and then it EXPLODES! Kind of. It enters a state known as the Principle of Explosion, where everything becomes provable and chaos ensues. These challenges with using logic to develop AI led to the first “AI winter”. A highly relatable moment in history, given the number of times I stop working and take a nap because a problem is too challenging. 
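A perceptron really is that simple: a weighted sum of inputs pushed through a hard threshold, with the weights nudged after every mistake. Here’s a minimal sketch – the learning rate, epoch count, and AND-gate example are illustrative choices for this post, not anything from the show:

```python
def train_perceptron(samples, epochs=20, lr=0.1):
    """Classic perceptron learning rule for two inputs: predict with a
    thresholded weighted sum, then nudge the weights after each error."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - pred  # -1, 0, or +1
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

def predict(w, b, x):
    """Binary classification: fire (1) if the weighted sum clears the threshold."""
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0
```

Trained on the four input/output pairs of a logical AND, the perceptron converges to weights that classify all four correctly – AND is linearly separable, so a single neuron suffices.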

The second wave of AI blew up in the ’80s and ’90s with the development of machine learning methods, and in the mid-2000s it really took off thanks to software that can handle matrix operations rapidly. (If that doesn’t mean anything to you, that’s okay. Just know that it basically means speedy, complicated math could be done by computers.) Additionally, higher computational power meant researchers could revisit the methods of the ’70s and string perceptrons together to form a neural network – moving from binary categorization to complex recognition.
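Stringing perceptron-style units together is what lets a network get past binary, linearly separable categories. A classic illustration (with hand-set weights chosen for this example, rather than learned ones) is XOR, which no single perceptron can compute but a two-layer network can:

```python
def unit(weights, bias, inputs):
    """One perceptron-style unit: weighted sum, then a hard threshold."""
    return 1 if sum(w * x for w, x in zip(weights, inputs)) + bias > 0 else 0

def xor_net(x1, x2):
    """Two layers of units compute XOR: hidden units detect OR and NAND,
    and the output unit fires only when both hidden units fire."""
    h_or = unit([1, 1], -0.5, [x1, x2])        # fires if x1 OR x2
    h_nand = unit([-1, -1], 1.5, [x1, x2])     # fires unless both inputs fire
    return unit([1, 1], -1.5, [h_or, h_nand])  # ANDs the two hidden units
```

The hidden layer re-describes the inputs in a new space where the problem *is* linearly separable – the core trick that stacking units buys you.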

A bIography: Colin’s road to coding computer ethics

During his undergrad at Virginia Tech studying computer science, Colin ran into an ArachnId that left him bitten by a philosophy bug. This led to one of many philosophical dilemmas he’d enjoy grappling with: whether to focus his studies on computer science or philosophy. After reading I, Robot, he answered that question with a “yes”, finding a kindred spirit in the novel’s robopsychologist. This led to a future of combining computer science with philosophy and ethics: from his Master’s program, where he weaved computer science into his philosophy lab’s research, to his current project developing a language to communicate ethics to machines with his advisor Houssam Abbas. However, throughout his journey, Colin has become less of a robopsychologist and more of a roboethicist.

Want more information on coding computer ethics? Us too. Be sure to listen live on Sunday, April 17th at 7PM on 88.7FM, or download the podcast if you missed it. Want to stay up to date with the world of roboethics? Find more from Colin at https://web.engr.oregonstate.edu/~sheablyc/.

Colin Shea-Blymyer: PhD student of computer science and artificial intelligence at Oregon State University

This post was written by Bryan Lynn.

How many robots does it take to screw in a light bulb?

As technology continues to improve over the coming years, we are beginning to see increased integration of robotics into our daily lives. Imagine if these robots were capable of receiving general instructions regarding a task, and were able to learn, work, and communicate as a team to complete that task with no additional guidance. Our guest this week on Inspiration Dissemination, Connor Yates, a robotics PhD student in the College of Engineering, studies artificial intelligence and machine learning and wants to make the above hypothetical scenario a reality. Connor and other members of the Autonomous Agents and Distributed Intelligence Laboratory are keenly interested in distributed reinforcement learning, optimization, and control in large, complex robotics systems. Applications of this include multi-robot coordination, mobile robot navigation, transportation systems, and intelligent energy management.

Connor Yates.

A long-time Beaver and native Oregonian, Connor grew up on the eastern side of the state. His father was a botanist, which naturally translated to a lot of time spent in the woods during his childhood. This, however, did not deter his aspirations of becoming a mechanical engineer building rockets for NASA. Fast forward to his first term of undergraduate here at Oregon State University—while taking his first mechanical engineering course, he realized rocket science wasn’t the academic field he wanted to pursue. After taking numerous different courses, one piqued his interest: computer science. He went on to flourish in the computer science program, eventually meeting his current Ph.D. advisor, Dr. Kagan Tumer. Connor worked with Dr. Tumer for two of his undergraduate years, and completed his undergraduate honors thesis investigating improved ways to gauge the intent of multiple robots working together in one system.

Connor taking in a view at Glacier National Park 2017.

Currently, Connor is working on improving the ability of machines to learn by implementing a reward system; think of a “good robot” and “bad robot” system. Using computer simulations, a robot can be assigned a general task. Robots usually begin learning a task with many failed attempts, but through the reward system, good behaviors can be reinforced and behaviors that do not relate to the assigned task can be discouraged. Over thousands of trials, the robot eventually learns what to do and completes the task. Simple, right? However, this becomes considerably more complex when a team of robots is assigned to learn a task. Connor focuses on rewarding not just successful completion of an assigned task, but also progress toward completing the task. For example, say you have a table that requires six robots to move. When two robots attempt the task and fail, rather than just viewing it as a failed task, the robots are capable of learning that two robots are not enough and recruiting more robots until the task succeeds. This is seen as a stepwise progression toward success rather than an all-or-nothing situation. It is Connor’s hope that one day a robot team could not only complete a task but also report the reasons why a decision was made along the way.
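The stepwise-progression idea can be sketched as a shaped reward: instead of all-or-nothing credit, robots earn partial credit for progress toward a full team. This is an illustrative toy, not Connor’s actual reward function – the six-robot table and the linear credit scheme are assumptions made for this example:

```python
def all_or_nothing_reward(robots_at_table, required=6):
    """Naive reward: success only when enough robots show up.
    Two robots and five robots both score zero, so there is no
    signal telling the team that recruiting helps."""
    return 1.0 if robots_at_table >= required else 0.0

def stepwise_reward(robots_at_table, required=6):
    """Shaped reward: partial credit proportional to team progress,
    so recruiting a third robot looks strictly better than standing
    pat with two (capped at full credit once the team is complete)."""
    return min(robots_at_table, required) / required
```

With two robots at the table, the all-or-nothing scheme returns 0 and learning stalls, while the stepwise scheme returns 2/6 – a gradient the team can climb by recruiting.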

In Connor’s free time he enjoys getting involved in the many PAC courses that are offered here at Oregon State University, getting outside, and trying to teach his household robot how to bring him a beer from the fridge.

Tune in to 88.7 FM at 7:00 PM Sunday evening to hear more about Connor and his research on artificial intelligence, or stream the program live.

A Softer Side of Robots

Do me a favor: close your eyes for a few seconds and think of a robot, any robot, real or imaginary.

Done? Good. Now, that robot you thought about: what did it look like? What did it do? What was it made of? The answers to the first two questions will likely differ from person to person: perhaps a utilitarian, cylindrical robot that helps with menial tasks like cleaning and homework, or a humanoid robot, hell-bent on crushing, killing, and/or destroying humans. I’m willing to bet, however, that the majority of the answers to the last question are one word: “metal”.

Most of our images of robots, droids, and automatons (i.e. R2-D2, The Cybermen, or Wall-E), including robots that we encounter in day-to-day life, are made of metal, but that might change in the future. The future of robotics is not simply to make robots harder, better, faster, or stronger, but also softer. Robots that must interact with humans and other living or delicate things need the capacity to be gentle.

Samantha works on a model that mimics a jumping spider, using an air hockey table and a tethered puck with a consistent starting speed.

Researchers like Samantha Hemleben are beginning to explore the world of soft robotics, creating robots made of soft materials that act through changes in air pressure. These robots could be used for tasks where a light touch is needed to avoid damage, such as human contact or fruit picking. Currently, the technology to create soft robots involves making a 3D-printed mold and then casting the silicone robot parts in those molds. If you need a robot that has both soft and firm parts, each must be made in separate steps, reducing efficiency and effectiveness.

This is where Samantha comes in; she’s trying to optimize this process. When she started her undergrad at Wofford College, she tried out biology, pharmacy, and finance, but didn’t feel challenged by them. Switching to mathematics with a computer science emphasis allowed her creativity to flourish, and she was able to secure a Research Experience for Undergraduates here at OSU, modeling a robot that mimics the movements of jumping spiders. This experience heavily influenced her decision to get her Ph.D. at OSU.

Samantha is now a 2nd-year Ph.D. student working with Drs. Cindy Grimm and Yiğit Mengüç in robotics (School of Mechanical, Industrial, and Manufacturing Engineering). Her research is focused on understanding the gradient between hard and soft materials. That is, she’s creating mathematical models of this gradient so that the manufacturing process can be optimized, and soft robots will be able to stand on solid ground.

Tune in on Sunday, July 24th at 7PM PDT on 88.7FM or stream live at http://www.orangemedianetwork.com/kbvr_fm/