Tag Archives: Robotics

Global swarming: getting robot swarms to perform intelligently

This week we have a robotics PhD student, Everardo Gonzalez, joining us to discuss his research on coordinating robots with artificial intelligence (AI). That doesn’t mean he dresses them up in matching bow ties (sadly), but instead he works on how to get a large collective of robots, also called a swarm, to work together toward a shared goal.

Why should we care about swarming robots? 

Aside from the potential for an apocalyptic robot world domination, there are actually many applications for this technology. Some are just as terrifying. It could be applied to fully automated warfare – reducing accountability when no one is to blame for pulling the trigger (literally).

However, it could also be used to coordinate robots in healthcare and to organize fleets of autonomous vehicles, potentially making our lives, and our streets, safer. In the case of the fish-inspired Blue Bots, this kind of coordinated robot system can also help us gather information about our oceans as we work to address climate change.

Depiction of how the fish-inspired Blue Bots can observe their surroundings in a shared aquatic space, then send that information to, and receive feedback from, the computer system. Driving the Blue Bots’ behavior is a network model, as depicted in the Agent A square.

#Influencer

Having a group of robots behave intelligently sounds like a problem of quantity; however, it’s not that simple. These bots can also suffer from having “too many cooks in the kitchen”: if all bots in the swarm are intelligent, they can start to hinder each other’s progress. Instead, the swarm needs a few leader bots, which are intelligent and capable of learning and trying new things, along with follower bots, which learn from their leaders. Essentially, the bots play a game of “Follow the Leaders”.

All robots receive feedback with respect to a shared objective, which is typical of AI training and allows the bots to infer which behaviors are effective. In this case, the leaders also get additional feedback on how well they are influencing their followers.
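As a rough illustration of these two feedback signals, here is a minimal sketch. All function names, and the weighting on the influence bonus, are hypothetical; this is not Everardo's actual system, just the shape of the idea:

```python
# Hypothetical sketch: every bot shares the team-level reward, while
# leaders additionally earn credit for how much of the swarm they sway.

def team_reward(goals_reached, total_goals):
    """Shared feedback: fraction of the swarm's objectives achieved."""
    return goals_reached / total_goals

def leader_reward(goals_reached, total_goals,
                  followers_influenced, total_followers):
    """Leaders get the shared reward plus an influence bonus.
    The 0.5 weighting is an arbitrary assumption for illustration."""
    shared = team_reward(goals_reached, total_goals)
    influence = followers_influenced / total_followers
    return shared + 0.5 * influence
```

Tuning that weighting is part of the challenge: too small and leaders ignore their followers, too large and they chase influence instead of the shared goal.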

Unlike on social media, one influencer with too many followers is a bad thing here, and the bots can become ineffective. There’s a famous social experiment in which actors on a busy New York City street stopped to stare at a window to see whether strangers would do the same. If there are not enough actors staring at the window, strangers are unlikely to respond; but as the number of actors increases, the likelihood of a stranger stopping to look also increases. Bot swarms likewise have an optimal number of leaders needed to exert the largest influence on their followers. Perhaps we’re much more like robots than the Turing test would have us believe.

Dot to dot

We’re a long way from intelligent robot swarms, though, as Everardo is using simplified 2D particle simulations to begin tackling this problem. In this case the particles replace the robots and are essentially just dots (rodots?) in a shared two-dimensional environment. The objectives, or points of interest, for these dot bots are more dots! Despite these simplifications, translating system feedback into a performance review for the leaders is still a challenging computational problem. Everardo starts by asking, “What if the leader had not been there?” But then you have to ask, “What if the followers that followed that leader did something else?” and you’ve opened a can of worms reminiscent of Smash Mouth, where the “what ifs” start coming and they don’t stop coming.
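That first counterfactual question has a well-known formalization in multiagent learning called a difference reward: score an agent by the system's global reward minus what the global reward would have been without that agent. A minimal sketch in a 2D dot world (the function names and the observation radius are illustrative assumptions, not Everardo's code):

```python
def global_reward(agents, points_of_interest, radius=1.0):
    """Count how many points of interest at least one agent can observe."""
    observed = 0
    for px, py in points_of_interest:
        if any(abs(px - ax) <= radius and abs(py - ay) <= radius
               for ax, ay in agents):
            observed += 1
    return observed

def difference_reward(i, agents, points_of_interest):
    """Credit for agent i: the global reward with i present, minus the
    counterfactual global reward computed as if i had not been there."""
    without_i = agents[:i] + agents[i + 1:]
    return (global_reward(agents, points_of_interest)
            - global_reward(without_i, points_of_interest))
```

The post's follow-up question is exactly why this gets hard: removing a leader changes what its followers do, and a simple leave-one-out calculation like this one ignores that ripple effect.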

Everardo Gonzalez

What if you wanted to know more about swarming robots? Be sure to listen live on Sunday, February 26th at 7PM on 88.7FM, or download the podcast if you missed it. To learn a bit more about Everardo’s work with swarms and all things robotics, check out his portfolio at everardog.github.io.

Learning without a brain

Instructions for how to win a soccer game:

Score more goals than your opponent.

Sounds simple, but these instructions don’t begin to explain the complexity of soccer, and they are useless without knowledge of the rules or of how a “goal” is “scored.” Cataloging the numerous variables and situations needed to win at soccer is impossible, and even having all that information will not guarantee a win. Soccer takes teamwork and practice.

Researchers in robotics are trying to figure out how to make a robot learn behaviors in games such as soccer, which require collaborative and/or competitive behaviors.

How then would you teach a group of robots to play soccer? Robots don’t have “bodies,” and instructions based on human body movement are irrelevant. Robots can’t watch a game and later try some fancy footwork. Robots can’t understand English unless they are designed to. How would the robots communicate with each other on the field? If a robot team did win a soccer game, how would they know?

Multiple robot systems are already a reality in automated warehouses.

Although this is merely an illustrative example, these are the types of challenges encountered by folks working to design robots to accomplish specific tasks. The main tool for teaching a robot to do anything is machine learning. With machine learning, a roboticist can give a robot limited instructions for a task, the robot can attempt the task many times, and the roboticist can reward the robot when the task is performed successfully. This allows the robot to learn how to accomplish the task and use that experience to improve further. In our soccer example, the robot team is rewarded when they score a goal, and over time they get better at scoring goals and winning games.
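That try-it-many-times-and-reward-success loop can be sketched with a toy example. Everything here is invented for illustration (the actions, the scoring probabilities, the environment); real robot learning is far more involved:

```python
import random

def learn(n_trials=2000, epsilon=0.1, seed=0):
    """Toy reward-driven learning: try actions, keep a running average of
    the reward each one earns, and mostly repeat whatever works best."""
    rng = random.Random(seed)
    actions = ["shoot", "pass", "dribble"]
    value = {a: 0.0 for a in actions}   # estimated reward per action
    counts = {a: 0 for a in actions}
    for _ in range(n_trials):
        # Occasionally explore a random action; otherwise exploit the best.
        if rng.random() < epsilon:
            action = rng.choice(actions)
        else:
            action = max(actions, key=value.get)
        # Invented environment: "shoot" scores 30% of the time, others 10%.
        scored = rng.random() < (0.3 if action == "shoot" else 0.1)
        reward = 1.0 if scored else 0.0
        counts[action] += 1
        value[action] += (reward - value[action]) / counts[action]
    return value
```

After enough trials the agent's reward estimates reflect which action actually pays off, without anyone ever explaining the "rules" to it.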

Programming machines to automatically learn collaborative skills is very hard because the outcome depends not only on what one robot did, but on what every other robot did; thus it is hard to learn who contributed the most and in what way.

Our guest this week, Yathartha Tuladhar, a PhD student studying Robotics in the College of Engineering, is focused on improving multi-robot coordination. He is investigating both how to effectively reward robots and how robot-to-robot communication can increase success. Fun fact: robots don’t communicate using human language. Roboticists define a limited vocabulary of numbers or letters that can become words, allowing the robots to learn their own language. Not even the roboticist will be able to decode the communication!

 

Human-Robot collaborative teams will play a crucial role in the future of search and rescue.

Yathartha is from Nepal and became interested in electrical engineering as a career that would aid infrastructure development in his country. After receiving a scholarship to study electrical engineering in the US at the University of Texas at Arlington, he learned that electrical engineering is more than developing networks and helping buildings run on electricity; it is about discovery, creation, trial, and error. Ultimately, it was an experience volunteering in a robotics lab as an undergraduate that led him to where he is today.

Tune in on Sunday at 7pm and be ready for some mind-blowing information about robots and machine learning. Listen locally to 88.7FM, stream the show live, or check out our podcast.

How many robots does it take to screw in a light bulb?

As technology continues to improve over the coming years, we are beginning to see increased integration of robotics into our daily lives. Imagine if these robots were capable of receiving general instructions regarding a task, and they were able to learn, work, and communicate as a team to complete that task with no additional guidance. Our guest this week on Inspiration Dissemination, Connor Yates, a Robotics PhD student in the College of Engineering, studies artificial intelligence and machine learning and wants to make the above hypothetical scenario a reality. Connor and other members of the Autonomous Agents and Distributed Intelligence Laboratory are keenly interested in distributed reinforcement learning, optimization, and control in large, complex robotics systems. Applications of this include multi-robot coordination, mobile robot navigation, transportation systems, and intelligent energy management.

Connor Yates.

A long-time Beaver and native Oregonian, Connor grew up on the eastern side of the state. His father was a botanist, which naturally translated to a lot of time spent in the woods during his childhood. This, however, did not deter his aspirations of becoming a mechanical engineer building rockets for NASA. Fast forward to his first term of undergraduate study here at Oregon State University: while taking his first mechanical engineering course, he realized rocket science wasn’t the academic field he wanted to pursue. After taking numerous different courses, one piqued his interest: computer science. He went on to flourish in the computer science program, eventually meeting his current Ph.D. advisor, Dr. Kagan Tumer. Connor worked with Dr. Tumer for two of his undergraduate years and completed his undergraduate honors thesis on improving how to gauge the intent of multiple robots working together in one system.

Connor taking in a view at Glacier National Park 2017.

Currently, Connor is working on improving the ability of machines to learn by implementing a reward system; think of it as a “good robot”/“bad robot” system. Using computer simulations, a robot can be assigned a general task. Robots usually begin learning a task with many failed attempts, but through the reward system, good behaviors can be reinforced and behaviors that do not relate to the assigned task can be discouraged. Over thousands of trials, the robot eventually learns what to do and completes the task. Simple, right? However, this becomes far more complex when a team of robots is assigned to learn a task. Connor focuses on rewarding not just successful completion of an assigned task, but also progress toward completing it. For example, say you have a table that requires six robots to move. When two robots attempt the task and fail, rather than treating it as a failure, the robots can learn that two robots are not enough and recruit more until the task is completed. This is a stepwise progression toward success rather than an all-or-nothing situation. It is Connor’s hope that one day a robot team could not only complete a task but also report the reasons behind the decisions it made along the way.
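The table example boils down to the difference between an all-or-nothing reward and a progress-shaped one. A minimal sketch (the function names and the six-robot threshold are purely illustrative):

```python
def all_or_nothing(robots_at_table, needed=6):
    """Reward only when enough robots have gathered to move the table."""
    return 1.0 if robots_at_table >= needed else 0.0

def with_progress(robots_at_table, needed=6):
    """Partial credit for every robot recruited toward the joint task,
    so two robots failing still registers as progress, not a dead end."""
    return min(robots_at_table, needed) / needed
```

Under the all-or-nothing signal, two robots at the table learn nothing from their failed attempt; under the shaped signal, recruiting a third robot visibly increases the reward, pointing the team toward the solution.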

In Connor’s free time he enjoys getting involved in the many PAC courses that are offered here at Oregon State University, getting outside, and trying to teach his household robot how to bring him a beer from the fridge.

Tune in to 88.7 FM at 7:00 PM Sunday evening to hear more about Connor and his research on artificial intelligence, or stream the program live.