Learning without a brain

Instructions for how to win a soccer game:

Score more goals than your opponent.

Sounds simple, but these instructions don’t begin to capture the complexity of soccer, and they are useless without knowledge of the rules or of how a “goal” is “scored.” Cataloging every variable and situation needed to win at soccer is impossible, and even having all that information would not guarantee a win. Soccer takes teamwork and practice.

Researchers in robotics are trying to figure out how to make a robot learn behaviors in games such as soccer, which require collaborative and/or competitive behaviors.

How then would you teach a group of robots to play soccer? Robots don’t have “bodies,” and instructions based on human body movement are irrelevant. Robots can’t watch a game and later try some fancy footwork. Robots can’t understand English unless they are designed to. How would the robots communicate with each other on the field? If a robot team did win a soccer game, how would they know?

Multiple robot systems are already a reality in automated warehouses.

Although this is merely an illustrative example, these are the kinds of challenges faced by people working to design robots to accomplish specific tasks. The main tool for teaching a robot to do anything is machine learning. With machine learning, a roboticist gives a robot limited instructions for a task, lets it attempt the task many times, and rewards it when the task is performed successfully. The robot learns how to accomplish the task and uses that experience to improve further. In our soccer example, the robot team is rewarded when it scores a goal, so it gets better at scoring goals and winning games.
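To make the try-and-reward loop concrete, here is a minimal sketch of reward-driven (reinforcement) learning. Everything in it is invented for illustration: the actions, the scoring probabilities, and the function names are hypothetical, not the lab's actual setup.

```python
import random

# Toy sketch of reward-driven learning: a one-state tabular value update.
# The "robot" repeatedly tries actions, receives a reward only when a goal
# is scored, and gradually comes to prefer the action that scores most often.

ACTIONS = ["dribble", "pass", "shoot"]
SCORE_PROB = {"dribble": 0.0, "pass": 0.1, "shoot": 0.6}  # hypothetical odds

def train(episodes=5000, alpha=0.1, epsilon=0.1, seed=0):
    rng = random.Random(seed)
    q = {a: 0.0 for a in ACTIONS}  # estimated value of each action
    for _ in range(episodes):
        # explore occasionally, otherwise exploit the best-known action
        if rng.random() < epsilon:
            a = rng.choice(ACTIONS)
        else:
            a = max(q, key=q.get)
        reward = 1.0 if rng.random() < SCORE_PROB[a] else 0.0
        q[a] += alpha * (reward - q[a])  # nudge the estimate toward the outcome
    return q

q = train()
print(max(q, key=q.get))  # the learned policy comes to favor "shoot"
```

The robot is never told which action is best; it discovers that only through the reward signal, which is the core idea the paragraph above describes.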

Programming machines to automatically learn collaborative skills is very hard because the outcome depends not only on what one robot did, but on what all the other robots did; this makes it difficult to determine which robot contributed the most, and in what way.
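This credit-assignment problem can be seen in a tiny simulation (again with invented actions and numbers): two robots each act independently, but the team receives one shared reward, so one robot's learning signal is polluted by what its teammate happened to do.

```python
import random

# Toy illustration of the credit-assignment problem: robot A and robot B
# each pick an action, but the team gets ONE shared reward. From A's point
# of view, the very same action can look good or bad depending on B.

def team_reward(a, b):
    # hypothetical rule: the team scores only when both robots cooperate
    return 1.0 if (a == "pass" and b == "run_to_goal") else 0.0

rng = random.Random(1)
value = {"pass": 0.0, "hold": 0.0}   # robot A's estimate of ITS OWN actions
counts = {"pass": 0, "hold": 0}
for _ in range(1000):
    a = rng.choice(list(value))
    b = rng.choice(["run_to_goal", "wander"])  # B acts on its own
    r = team_reward(a, b)
    counts[a] += 1
    value[a] += (r - value[a]) / counts[a]     # running average of reward

# "pass" is rewarded only about half the time, through no fault of A's:
# the shared reward mixes B's behavior into A's learning signal.
print(value)
```

Here `value["pass"]` settles near 0.5 rather than 1.0, even though passing is always the right move for A, which is exactly why figuring out who contributed, and how, is hard.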

Our guest this week, Yathartha Tuladhar, a PhD student studying Robotics in the College of Engineering, is focused on improving multi-robot coordination. He is investigating both how to effectively reward robots and how robot-to-robot communication can increase success. Fun fact: robots don’t use human language to communicate. Roboticists define a limited vocabulary of numbers or letters that can be combined into words, allowing the robots to learn their own language. Not even the roboticist can decode the resulting communication!
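A classic toy model of this kind of emergent language is the Lewis signaling game; the sketch below is my own illustration of the general idea, not the lab's method, and all the names and numbers in it are made up. A "speaker" robot sees a situation and sends a symbol from a fixed vocabulary; a "listener" robot sees only the symbol and must act. Both are rewarded on success, so the meaning of each symbol is learned rather than designed.

```python
import random

# Minimal Lewis signaling game with simple reinforcement: the team invents
# its own convention for what symbols 0 and 1 mean.

rng = random.Random(0)
STATES = ["ball_left", "ball_right"]
VOCAB = [0, 1]

# speaker[state][symbol] and listener[symbol][state] hold learned preferences
speaker = {s: {w: 1.0 for w in VOCAB} for s in STATES}
listener = {w: {s: 1.0 for s in STATES} for w in VOCAB}

def sample(weights):
    """Pick a key with probability proportional to its weight."""
    x = rng.random() * sum(weights.values())
    for k, v in weights.items():
        x -= v
        if x <= 0:
            return k
    return k

for _ in range(3000):
    state = rng.choice(STATES)
    word = sample(speaker[state])      # speaker sends a symbol
    guess = sample(listener[word])     # listener interprets it
    if guess == state:                 # shared reward: reinforce both sides
        speaker[state][word] += 1.0
        listener[word][guess] += 1.0

# The team converges on SOME convention -- which symbol means which state
# is arbitrary, which is why outsiders can't decode it from the rules alone.
convention = {s: max(speaker[s], key=speaker[s].get) for s in STATES}
print(convention)
```

The arbitrariness is the point: two training runs can settle on opposite conventions, so the "language" only makes sense to the robots that learned it together.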

 

Human-Robot collaborative teams will play a crucial role in the future of search and rescue.

Yathartha is from Nepal and became interested in electrical engineering as a career that would aid infrastructure development in his country. After getting a scholarship to study electrical engineering in the US at the University of Texas at Arlington, he learned that electrical engineering is about more than developing networks and helping buildings run on electricity. He found that electrical engineering is about discovery, creation, trial, and error. Ultimately, it was an experience volunteering in a robotics lab as an undergraduate that led him to where he is today.

Tune in on Sunday at 7pm and be ready for some mind-blowing information about robots and machine learning. Listen locally to 88.7FM, stream the show live, or check out our podcast.
