Tag Archives: Artificial Intelligence

AI that benefits humans and humanity

When you think about artificial intelligence or robots in the everyday household, your first thought might be that it sounds like science fiction – like something out of the 1999 cult classic film “Smart House”. But it’s likely you have some of this technology in your home already – if you own a Google Home, Amazon Alexa, Roomba, smart watch, or even just a smartphone, you’re already plugged into this network of AI in the home. This technology can offer great benefits to its users, from simply asking Google to set an alarm to wake you up the next day, to wearable smart devices that collect health data such as heart rate. AI is also being used to improve assistive technology – technology that improves the lives of disabled or elderly individuals. However, the rapid explosion in the development and popularity of this tech also brings risks to consumers: there is little legislation yet governing the privacy of, say, healthcare data collected by such devices. Further, as we discussed with another guest a few weeks ago, there is the issue of coding ethics into AI – how can we as humans program robots in such a way that they learn to operate in an ethical manner? Who defines what that is? And on the human side – how do we ensure that human users of such technology can actually trust it, especially if it will be used in a way that could benefit the user’s health and wellness?

Anna Nickelson, a fourth-year PhD student in Kagan Tumer’s lab in the Collaborative Robotics and Intelligent Systems (CoRIS) Institute in the Department of Mechanical, Industrial and Manufacturing Engineering, joins us this week to discuss her research, which touches on several of these aspects of using technology as part of healthcare. Also a former Brookings Institution intern, Anna incorporates not just the coding of robots but far-reaching policy and legislation goals into her work. Her research is driven by one very high-level goal: how do we create AI that benefits humans and humanity?

Anna Nickelson, fourth year PhD student in the Collaborative Robotics and Intelligent Systems Institute.

AI for social good

When we think about how to create technology that is beneficial, Anna says that there are four major considerations in play. First is the creation of the technology itself – the hardware, the software; how technology is coded, how it’s built. The second is technologists and the technology industry – how do we think about and create technologies beyond the capitalist mindset of what will make the most money? Third is considering the general public’s role: what is the best way to educate people about things like privacy, the limitations and benefits of AI, and how to protect themselves from harm? Finally, she says we must also consider policy and legislation surrounding beneficial tech at all levels, from local ordinances to international guidelines. 

Anna’s current research with Dr. Tumer is funded by the NSF AI Institute for Collaborative Assistance and Responsive Interaction for Networked Groups (AI-CARING), an institute through the National Science Foundation that focuses on “personalized, longitudinal, collaborative AI, enabling the development of AI systems that learn personalized models of user behavior…and integrate that knowledge to support people and AIs working together”, as per their website. The institute is a collaboration between five universities, including Oregon State University and OHSU. For Anna, this looks like lots of code writing and simulations studying how AI systems make trade-offs between different objectives. She looks at machine learning for decision making in robots, and at how multiple robots or AIs can work together toward a specific task without necessarily communicating with each other directly. Each robot or AI may have different considerations that factor into how it accomplishes its objective, so part of her goal is to develop a framework for the different individuals to make decisions as part of a group.
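Anna’s actual algorithms aren’t spelled out in this post, but to give a flavor of what “making trade-offs between different objectives” can look like in code, here is a minimal, hypothetical sketch of weighted-sum scalarization – a common textbook approach. The actions, scores, and weights are all invented for illustration:

```python
# Toy multi-objective decision making: each candidate action scores
# differently on two objectives (say, task progress vs. energy use),
# and a weight vector sets the trade-off between them.

def best_action(actions, weights):
    """Pick the action with the highest weighted sum of objective scores."""
    def scalarize(scores):
        return sum(w * s for w, s in zip(weights, scores))
    return max(actions, key=lambda a: scalarize(actions[a]))

# Hypothetical scores: (task progress, energy efficiency) per action.
actions = {
    "rush":     (0.9, 0.2),
    "steady":   (0.6, 0.7),
    "conserve": (0.2, 0.9),
}

print(best_action(actions, weights=(1.0, 0.0)))  # progress only -> "rush"
print(best_action(actions, weights=(0.5, 0.5)))  # balanced -> "steady"
```

Shift the weights and the chosen behavior shifts with them – which is exactly why deciding those weights for a group of robots (and for society) is the hard part.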

With an undergraduate degree in math, a background in project management in the tech industry, engineering and coding skills, and experience working with a think tank in DC on tech-related policy, Anna is uniquely situated to address the major questions about developing technology for social good in a way that mitigates risk. She came to graduate school at Oregon State with this interdisciplinary goal in mind. Her personal life goal is to get experience in each sector so she can bring a wide range of perspectives and ideas to bear. “There are quite a few people working on tech policy right now, but very few people have the breadth of perspective on it from the low level to the high level,” she says.

If you are interested in hearing more about Anna’s life goals and the intersection of artificial intelligence, healthcare, and policy, join us live at 7 PM on Sunday, May 7th on https://kbvrfm.orangemedianetwork.com/, or after the show wherever you find your podcasts. 

I, Roboethicist

This week we have Colin Shea-Blymyer, a PhD student from OSU’s new AI program in the School of Electrical Engineering and Computer Science, joining us to talk about coding computer ethics. Advancements in artificial intelligence (AI) are exploding, and while many of us are excited for a world where our Roombas evolve into Rosies (à la The Jetsons), some of these technological advancements require grappling with ethical dilemmas. How these AI technologies should make their decisions is a question with no settled answer, best left to be debated by the spirits of John Stuart Mill and Immanuel Kant. However, as a society, we are in dire need of a way to communicate ethics in a language that machines can understand – and this is exactly what Colin is developing.

Making An Impact: why coding computer ethics matters

A lot of AI is developed through machine learning – a process by which software becomes more accurate without being explicitly programmed to. One example is image recognition software: by feeding an algorithm more and more photos of cats, it gets better at recognizing what is and isn’t a cat. However, these algorithms are not perfect. How will the program treat a stuffed-animal cat? How will it categorize the image of a cat on a t-shirt? When the stakes are low, as in image recognition, these errors may not matter much. But for some technology, being correct most of the time isn’t sufficient. We would simply not accept a pacemaker that operates correctly most of the time, or a plane that avoids the mountains with only 95% certainty. Technologies that require higher precision for safety also require a different approach to developing their software, and many applications of AI will demand high safety standards – such as self-driving cars or nursing robots. This means society needs a language for communicating with AI in a way that lets it understand ethics precisely, and with 100% accuracy.
The Trolley Problem is a famous ethical dilemma that asks: if you are driving a trolley and see that it is going to hit and kill five pedestrians, but you could pull a lever to reroute the trolley so that it instead hits and kills one pedestrian – would you do it? While it seems obvious that we want our self-driving cars to avoid hitting pedestrians, it is less obvious what the car should do when its only choices are to hit and kill a pedestrian or to drive off a cliff, killing the driver. Although Colin isn’t tackling the impossible feat of solving these ethical dilemmas, he is developing the language we need to communicate ethics to AI with an exactness that machine learning can’t achieve. So who does decide how these robots will respond to ethical quandaries? While not part of Colin’s research, he believes this is best left answered by the communities the technologies will serve.

Colin doing a logical proof on a whiteboard with a 1/10 scale autonomous vehicle in the foreground.

The ArchIve: a (brief) history of AI

AI had its first wave in the 70’s, when it was thought that logic systems (a way of communicating directly with computers) would run AI. Researchers also created perceptrons, which try to mimic a neuron in the brain by sorting data into binary classes – but which, more importantly, have a very cool name. Perceptron! It sounds like a Spider-Man villain. However, logic and perceptrons turned out to not be particularly effective. There are a seemingly infinite number of possibilities and variables in the world, making it challenging to write truly comprehensive rules. Further, when an AI’s rules are incomplete, it can wander into a situation it doesn’t know could even exist – and then it EXPLODES! Kind of. It hits what logicians call the Principle of Explosion, in which a contradiction makes everything provable and chaos ensues. These challenges with using logic to develop AI led to the first “AI winter”. A highly relatable moment in history, given the number of times I stop working and take a nap because a problem is too challenging.
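To make the perceptron concrete, here is a minimal sketch in Python: a weighted sum, a threshold, and the classic perceptron update rule, trained on the (linearly separable) AND function. This is the textbook algorithm, not anything specific to Colin’s research:

```python
# A perceptron in a few lines: weighted inputs plus a bias, passed through
# a hard threshold to give a binary class -- the neuron-mimicking unit
# described above.

def predict(weights, bias, x):
    return 1 if sum(w * xi for w, xi in zip(weights, x)) + bias > 0 else 0

def train(data, epochs=20, lr=0.1):
    """Classic perceptron learning rule: nudge weights toward each mistake."""
    weights, bias = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, target in data:
            error = target - predict(weights, bias, x)
            weights = [w + lr * error * xi for w, xi in zip(weights, x)]
            bias += lr * error
    return weights, bias

# AND is linearly separable, so a single perceptron can learn it.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train(data)
print([predict(w, b, x) for x, _ in data])  # -> [0, 0, 0, 1]
```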

The second wave of AI blew up in the 80’s and 90’s with the development of machine learning methods, and in the mid-2000’s it really took off thanks to software and hardware that can handle matrix computations rapidly. (If that doesn’t mean anything to you, that’s okay. Just know it basically means computers got speedy at complicated math.) This higher computational power also meant revisiting the methods of the 70’s: perceptrons could now be strung together to form neural networks, moving from binary categorization to complex recognition.
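To illustrate “stringing perceptrons together”: a single perceptron cannot compute XOR (exclusive-or), but two layers of them can. The weights below are chosen by hand purely for illustration, not learned:

```python
# Two hand-wired layers of threshold units computing XOR -- the simplest
# function a lone perceptron provably cannot represent.

def step(z):
    return 1 if z > 0 else 0

def xor_net(x1, x2):
    h_or  = step(x1 + x2 - 0.5)      # hidden unit: fires if either input is on
    h_and = step(x1 + x2 - 1.5)      # hidden unit: fires only if both are on
    return step(h_or - h_and - 0.5)  # output: OR but not AND = XOR

print([xor_net(a, b) for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]])
# -> [0, 1, 1, 0]
```

Swap the hard threshold for a smooth function and learn the weights from data, and you have the neural networks behind the second wave.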

A bIography: Colin’s road to coding computer ethics

During his undergrad at Virginia Tech studying computer science, Colin ran into an ArachnId that left him bitten by a philosophy bug. This led to one of many philosophical dilemmas he’d enjoy grappling with: should he focus his studies on computer science or philosophy? After reading I, Robot, he answered that question with a “yes”, finding a kindred spirit in the novel’s robopsychologist. This led to a future of combining computer science with philosophy and ethics: from his Master’s program, where he wove computer science into his philosophy lab’s research, to his current project developing a language to communicate ethics to machines with his advisor, Houssam Abbas. Throughout his journey, however, Colin has become less of a robopsychologist and more of a roboethicist.

Want more information on coding computer ethics? Us too. Be sure to listen live on Sunday, April 17th at 7PM on 88.7FM, or download the podcast if you missed it. Want to stay up to date with the world of roboethics? Find more from Colin at https://web.engr.oregonstate.edu/~sheablyc/.

Colin Shea-Blymyer: PhD student of computer science and artificial intelligence at Oregon State University

This post was written by Bryan Lynn.

Learning without a brain

Instructions for how to win a soccer game:

Score more goals than your opponent.

Sounds simple, but these instructions don’t begin to explain the complexity of soccer, and they are useless without knowledge of the rules of soccer or how a “goal” is “scored.” Cataloging the numerous variables and situations needed to win at soccer is impossible, and even having all that information will not guarantee a win. Soccer takes teamwork and practice.

Researchers in robotics are trying to figure out how to make a robot learn behaviors in games such as soccer, which require collaborative and/or competitive behaviors.

How then would you teach a group of robots to play soccer? Robots don’t have “bodies,” and instructions based on human body movement are irrelevant. Robots can’t watch a game and later try some fancy footwork. Robots can’t understand English unless they are designed to. How would the robots communicate with each other on the field? If a robot team did win a soccer game, how would they know?

Multiple robot systems are already a reality in automated warehouses.

Although this is merely an illustrative example, these are the types of challenges encountered by folks working to design robots to accomplish specific tasks. The main tool for teaching a robot to do anything is machine learning. With machine learning, a roboticist can give a robot limited instructions for a task, the robot can attempt a task many times, and the roboticist can reward the robot when the task is performed successfully. This allows the robot to learn how to successfully accomplish the task and use that experience to further improve. In our soccer example, the robot team is rewarded when they score a goal, and they can get better at scoring goals and winning games.
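As a toy illustration of that try–reward–improve loop (not an actual soccer controller – the actions and scoring odds here are invented), a minimal epsilon-greedy action-value learner might look like this:

```python
import random

# A bare-bones "try, get rewarded, improve" loop: the robot picks a kicking
# direction, earns a reward when it scores, and keeps a running estimate of
# each action's value. Over many trials the estimates steer it toward the
# direction that scores most often.

random.seed(0)

GOAL_PROB = {"left": 0.2, "center": 0.7, "right": 0.4}  # hypothetical odds

values = {a: 0.0 for a in GOAL_PROB}   # current estimate of each action
counts = {a: 0 for a in GOAL_PROB}     # how often each action was tried

for trial in range(2000):
    if random.random() < 0.1:                      # explore occasionally
        action = random.choice(list(GOAL_PROB))
    else:                                          # otherwise exploit the best
        action = max(values, key=values.get)
    reward = 1 if random.random() < GOAL_PROB[action] else 0
    counts[action] += 1
    values[action] += (reward - values[action]) / counts[action]  # running mean

print(max(values, key=values.get))  # after enough trials, almost always "center"
```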

Programming machines to automatically learn collaborative skills is very hard because the outcome depends not only on what one robot did, but on what all the other robots did – which makes it hard to learn who contributed the most, and in what way.
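One well-known way to tackle this credit-assignment problem in the multi-robot learning literature is the “difference reward”: score each robot by how much the team outcome would change if that robot were removed. The team objective below is made up purely for illustration:

```python
# Difference rewards: D_i = G(team) - G(team without robot i).
# A robot is credited only for the part of the team score it actually caused.

def team_score(contributions):
    """Toy global objective: total useful work, capped at a team bottleneck."""
    return min(sum(contributions), 10.0)

def difference_rewards(contributions):
    g = team_score(contributions)
    rewards = []
    for i in range(len(contributions)):
        without_i = contributions[:i] + contributions[i + 1:]
        rewards.append(g - team_score(without_i))
    return rewards

# Three robots contribute 2, 3, and 8 units of work; the cap is 10.
print(difference_rewards([2.0, 3.0, 8.0]))
# -> [0.0, 0.0, 5.0]: only the third robot's work actually changed the outcome
```

Notice how the capped objective hides individual contributions, while the difference reward recovers them – the first two robots learn their effort was redundant here, without anyone having to hand-label who did what.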

Our guest this week, Yathartha Tuladhar, a PhD student studying robotics in the College of Engineering, is focused on improving multi-robot coordination. He is investigating both how to effectively reward robots and how robot-to-robot communication can increase success. Fun fact: robots don’t communicate in human language. Roboticists define a limited vocabulary of numbers or letters that can become words, and the robots learn their own language from it – one that not even the roboticist may be able to decode!

 

Human-Robot collaborative teams will play a crucial role in the future of search and rescue.

Yathartha is from Nepal and became interested in electrical engineering as a career that would aid infrastructure development in his country. After earning a scholarship to study electrical engineering in the US at the University of Texas at Arlington, he learned that electrical engineering is about more than developing networks and helping buildings run on electricity – it is about discovery, creation, trial, and error. Ultimately, it was an experience volunteering in a robotics lab as an undergraduate that led him to where he is today.

Tune in on Sunday at 7pm and be ready for some mind-blowing information about robots and machine learning. Listen locally to 88.7FM, stream the show live, or check out our podcast.

How many robots does it take to screw in a light bulb?

As technology continues to improve over the coming years, we are beginning to see increased integration of robotics into our daily lives. Imagine if robots could receive general instructions for a task and then learn, work, and communicate as a team to complete it with no additional guidance. Our guest this week on Inspiration Dissemination, Connor Yates, a robotics PhD student in the College of Engineering, studies artificial intelligence and machine learning and wants to make that hypothetical scenario a reality. Connor and other members of the Autonomous Agents and Distributed Intelligence Laboratory are keenly interested in distributed reinforcement learning, optimization, and control in large, complex robotic systems. Applications include multi-robot coordination, mobile robot navigation, transportation systems, and intelligent energy management.

Connor Yates.

A long-time Beaver and native Oregonian, Connor grew up on the eastern side of the state. His father was a botanist, which naturally translated to a lot of time spent in the woods during his childhood. This, however, did not deter his aspirations of becoming a mechanical engineer building rockets for NASA. Fast forward to his first term as an undergraduate here at Oregon State University: while taking his first mechanical engineering course, he realized rocket science wasn’t the field he wanted to pursue. After sampling numerous courses, one piqued his interest – computer science. He went on to flourish in the computer science program, eventually meeting his current PhD advisor, Dr. Kagan Tumer. Connor worked with Dr. Tumer for two of his undergraduate years and completed his undergraduate honors thesis on improving how to gauge the intent of multiple robots working together in one system.

Connor taking in a view at Glacier National Park 2017.

Currently, Connor is working on improving machines’ ability to learn by implementing a reward system – think of it as a “good robot”/“bad robot” system. Using computer simulations, a robot can be assigned a general task. Robots usually begin learning a task with many failed attempts, but through the reward system, good behaviors can be reinforced and behaviors unrelated to the assigned task can be discouraged. Over thousands of trials, the robot eventually learns what to do and completes the task. Simple, right? However, this becomes far more complex when a team of robots is assigned to learn a task together. Connor focuses on rewarding not just the successful completion of an assigned task, but also progress toward completing it. For example, say you have a table that requires six robots to move. When two robots attempt the task and fail, rather than treating it as a simple failure, the robots can learn that two are not enough and recruit more robots until the task succeeds. This is a stepwise progression toward success rather than an all-or-nothing situation. It is Connor’s hope that one day a robot team could not only complete a task but also report the reasons why it made the decisions it did along the way.
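The table example can be read as a reward-shaping problem: an all-or-nothing reward gives the robots no learning signal until all six helpers show up, while a stepwise reward credits every recruit along the way. The functions and numbers below are invented for illustration and are not Connor’s actual reward design:

```python
# Sparse vs. shaped rewards for the six-robot table task.

ROBOTS_NEEDED = 6

def sparse_reward(robots_lifting):
    """All-or-nothing: reward only when the table actually moves."""
    return 1.0 if robots_lifting >= ROBOTS_NEEDED else 0.0

def shaped_reward(robots_lifting):
    """Stepwise: partial credit per recruit, full bonus once the table moves."""
    progress = min(robots_lifting, ROBOTS_NEEDED) / ROBOTS_NEEDED
    return progress + (1.0 if robots_lifting >= ROBOTS_NEEDED else 0.0)

for n in [2, 4, 6]:
    print(n, sparse_reward(n), shaped_reward(n))
# The sparse reward stays at 0.0 until n reaches 6, so two failed robots
# learn nothing; the shaped reward rises with every recruited helper.
```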

In Connor’s free time he enjoys getting involved in the many PAC courses that are offered here at Oregon State University, getting outside, and trying to teach his household robot how to bring him a beer from the fridge.

Tune in to 88.7 FM at 7:00 PM Sunday evening to hear more about Connor and his research on artificial intelligence, or stream the program live.