Category Archives: Mechanical, Industrial, and Manufacturing Engineering

Global swarming: getting robot swarms to perform intelligently

This week we have a robotics PhD student, Everardo Gonzalez, joining us to discuss his research on coordinating robots with artificial intelligence (AI). That doesn’t mean he dresses them up in matching bow ties (sadly); instead, he works on how to get a large collective of robots, also called a swarm, to work together towards a shared goal.

Why should we care about swarming robots? 

Aside from the potential for apocalyptic robot world domination, there are actually many applications for this technology. Some are just as terrifying. It could be applied to fully automated warfare, reducing accountability when no one is to blame for (literally) pulling the trigger.

However, it could also be used to coordinate robots in healthcare and to organize fleets of autonomous vehicles, potentially making our lives, and our streets, safer. In the case of the fish-inspired Blue Bots, this kind of coordinated robot system can also help us gather information about our oceans as we try to address climate change.

Depiction of how the fish-inspired Blue Bots can observe their surroundings in a shared aquatic space, then send that information and receive feedback from the computer system. Driving the Blue Bots’ behavior is a network model, as depicted in the Agent A square.

#Influencer

Getting a group of robots to behave intelligently sounds like a problem of quantity; however, it’s not that simple. These bots can suffer from “too many cooks in the kitchen”: if every bot in the swarm is intelligent, they can start to hinder each other’s progress. Instead, the swarm needs a few leader bots, which are intelligent and capable of learning and trying new things, along with follower bots, which learn from their leaders. Essentially, the bots play a game of “Follow the Leaders”.

All robots receive feedback with respect to a shared objective, which is typical of AI training and allows the bots to infer which behaviors are effective. In this case, the leaders also get additional feedback on how well they are influencing their followers.

Unlike on social media, an influencer with too many followers is a bad thing – the bots can become ineffective. There’s a famous social experiment in which actors on a busy New York City street stopped to stare at a window to determine if strangers would do the same. If there are not enough actors staring at the window, strangers are unlikely to respond. But as the number of actors increases, the likelihood of a stranger stopping to look also increases. The bot swarms likewise have an optimal number of leaders for exerting the largest influence on their followers. Perhaps we’re much more like robots than the Turing test would have us believe.

Dot to dot

We’re a long way from intelligent robot swarms, though, and Everardo is using simplified 2D particle simulations to begin to tackle this problem. In this case particles replace the robots: they are essentially just dots (rodots?) in a shared two-dimensional environment, and the objectives or points of interest for these dot bots are more dots! Despite these simplifications, translating system feedback into a performance review for the leaders is still a challenging problem to solve computationally. Everardo starts by asking the question “what if the leader had not been there?”, but then you have to ask “what if the followers that followed that leader had done something else?” – and then you’ve opened a can of worms reminiscent of Smash Mouth, where the “what if”s start coming and they don’t stop coming.
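For the curious, the first “what if the leader had not been there?” question can be sketched in a few lines of Python. This is a toy illustration of what multiagent learning researchers call a difference reward – the scoring function and positions below are invented for illustration, not Everardo’s actual code – and it deliberately sidesteps the harder follower “what if”s:

```python
# Difference reward sketch: D_i = G(all agents) - G(all agents except agent i).
# G is a made-up toy score: how many points of interest (target dots)
# have at least one agent close enough to observe them.

def system_score(agent_positions, targets, radius=1.0):
    """G: the number of targets observed by at least one agent."""
    observed = 0
    for tx, ty in targets:
        if any(abs(ax - tx) <= radius and abs(ay - ty) <= radius
               for ax, ay in agent_positions):
            observed += 1
    return observed

def difference_reward(i, agent_positions, targets):
    """Credit for agent i: the system score minus the counterfactual
    score computed with agent i removed from the swarm."""
    without_i = agent_positions[:i] + agent_positions[i + 1:]
    return (system_score(agent_positions, targets)
            - system_score(without_i, targets))

agents = [(0.0, 0.0), (5.0, 5.0), (5.2, 5.1)]  # agent 2 duplicates agent 1
targets = [(0.5, 0.5), (5.0, 5.0)]

print(system_score(agents, targets))          # → 2
print(difference_reward(0, agents, targets))  # → 1 (only agent 0 covers target 0)
print(difference_reward(2, agents, targets))  # → 0 (its target is already covered)
```

The redundant agent gets zero credit even though the team succeeds, which is exactly the signal a leader needs to try something else.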

Everardo Gonzalez

What if you wanted to know more about swarming robots? Be sure to listen live on Sunday February 26th at 7PM on 88.7FM, or download the podcast if you missed it. To learn a bit more about Everardo’s work with swarms and all things robotics, check out his portfolio at everardog.github.io

AI that benefits humans and humanity

When you think about artificial intelligence or robots in the everyday household, your first thought might be that it sounds like science fiction – like something out of the 1999 cult classic film “Smart House”. But it’s likely you have some of this technology in your home already – if you own a Google Home, Amazon Alexa, Roomba, smart watch, or even just a smartphone, you’re already plugged into this network of AI in the home. This technology can offer great benefits to its users, from simply asking Google to set an alarm to wake you up the next day, to wearable smart devices that can collect health data such as heart rate. AI is also currently being used to improve assistive technology, or technology that improves the lives of disabled or elderly individuals. However, the rapid explosion in the development and popularity of this tech also brings risks to consumers: there isn’t yet good legislation about the privacy of, say, healthcare data collected by such devices. Further, as we discussed with another guest a few weeks ago, there is the issue of coding ethics into AI – how can we as humans program robots in such a way that they learn to operate in an ethical manner? Who defines what that is? And on the human side – how do we ensure that human users of such technology can actually trust it, especially if it will be used in ways that could benefit the user’s health and wellness?

Anna Nickelson, a fourth-year PhD student in Kagan Tumer’s lab in the Collaborative Robotics and Intelligent Systems (CoRIS) Institute in the Department of Mechanical, Industrial and Manufacturing Engineering, joins us this week to discuss her research, which touches on several of these aspects of using technology as part of healthcare. Also a former Brookings Institution intern, Anna incorporates not just the coding of robots but far-reaching policy and legislation goals into her work. Her research is driven by a very high-level goal: how do we create AI that benefits humans and humanity?

Anna Nickelson, fourth year PhD student in the Collaborative Robotics and Intelligent Systems Institute.

AI for social good

When we think about how to create technology that is beneficial, Anna says that there are four major considerations in play. First is the creation of the technology itself – the hardware, the software; how technology is coded, how it’s built. The second is technologists and the technology industry – how do we think about and create technologies beyond the capitalist mindset of what will make the most money? Third is considering the general public’s role: what is the best way to educate people about things like privacy, the limitations and benefits of AI, and how to protect themselves from harm? Finally, she says we must also consider policy and legislation surrounding beneficial tech at all levels, from local ordinances to international guidelines. 

Anna’s current research with Dr. Tumer is funded by the NSF AI Institute for Collaborative Assistance and Responsive Interaction for Networked Groups (AI-CARING), an institute through the National Science Foundation that focuses on “personalized, longitudinal, collaborative AI, enabling the development of AI systems that learn personalized models of user behavior…and integrate that knowledge to support people and AIs working together”, as per their website. The institute is a collaboration between five universities, including Oregon State University and OHSU. What this looks like for Anna is lots of code writing and simulations studying how AI systems make trade-offs between different objectives. For this she looks at machine learning for decision making in robots, and how multiple robots or AIs can work together towards a specific task without necessarily having to communicate with each other directly. Each robot or AI may have different considerations that factor into how they accomplish their objective, so part of her goal is to develop a framework for the different individuals to make decisions as part of a group.
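To make the idea of trading off between objectives concrete, here is a hypothetical sketch in Python. The action names, objectives, and weights are all invented for illustration – this is the textbook weighted-sum approach, not Anna’s actual framework:

```python
# Multi-objective decision sketch: score each candidate action against
# several objectives and pick the one with the best weighted sum.

def choose_action(actions, objectives, weights):
    """Pick the action maximizing the weighted sum of objective scores."""
    def score(action):
        return sum(w * obj(action) for obj, w in zip(objectives, weights))
    return max(actions, key=score)

# Hypothetical assistive-robot decision: how fast to move while helping a person.
actions = ["move_fast", "move_slow", "wait"]
task_progress = {"move_fast": 1.0, "move_slow": 0.6, "wait": 0.0}
user_safety   = {"move_fast": 0.2, "move_slow": 0.9, "wait": 1.0}

objectives = [task_progress.get, user_safety.get]

# Weighting safety more heavily changes the decision.
print(choose_action(actions, objectives, [1.0, 0.5]))  # → move_fast
print(choose_action(actions, objectives, [1.0, 2.0]))  # → move_slow
```

The interesting research questions start where this sketch ends: how multiple agents, each with different weights and considerations, reach decisions as a group.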

With an undergraduate degree in math, a background in project management in the tech industry, engineering and coding skills, and experience working with a think tank in DC on tech-related policy, Anna is uniquely situated to address the major questions about developing technology for social good in a way that mitigates risk. She came to graduate school at Oregon State with this interdisciplinary goal in mind. Her personal life goal is to gain experience in each sector so she can bring in a wide range of perspectives and ideas. “There are quite a few people working on tech policy right now, but very few people have the breadth of perspective on it from the low level to the high level,” she says.

If you are interested in hearing more about Anna’s life goals and the intersection of artificial intelligence, healthcare, and policy, join us live at 7 PM on Sunday, May 7th on https://kbvrfm.orangemedianetwork.com/, or after the show wherever you find your podcasts. 

I, Roboethicist

This week we have Colin Shea-Blymyer, a PhD student from OSU’s new AI program in the Department of Electrical Engineering and Computer Science, joining us to talk about coding computer ethics. Advancements in artificial intelligence (AI) are exploding, and while many of us are excited for a world where our Roombas evolve into Rosies (à la The Jetsons), some of these technological advancements require grappling with ethical dilemmas. Determining how these AI technologies should make their decisions is a question that simply can’t be answered definitively, and is best left to be debated by the spirits of John Stuart Mill and Immanuel Kant. However, as a society, we are in dire need of a way to communicate ethics in a language that machines can understand – and this is exactly what Colin is developing.

Making An Impact: why coding computer ethics matters

A lot of AI is developed through machine learning – a process in which software becomes more accurate without being explicitly programmed to do so. One example is image recognition software: by feeding an algorithm more and more photos of cats, it gets better at recognizing what is and isn’t a cat. However, these algorithms are not perfect. How will the program treat a stuffed-animal cat? How will it categorize the image of a cat on a t-shirt? When the stakes are low, as in image recognition, these errors may not matter much. But for some technology, being correct most of the time isn’t sufficient. We would simply not accept a pacemaker that operates correctly most of the time, or a plane that avoids crashing into the mountains with just 95% certainty. Technologies that require higher precision for safety also require a different approach to developing their software, and many applications of AI – such as self-driving cars or nursing robots – will require high safety standards. This means society needs a language for communicating ethics to AI precisely, and with 100% accuracy.
The Trolley Problem is a famous ethical dilemma that asks: if you are driving a trolley and see that it is going to hit and kill five pedestrians, but you could pull a lever to reroute the trolley so that it instead hits and kills one pedestrian – would you do it? While it seems obvious that we want our self-driving cars not to hit pedestrians, it is less obvious what a car should do when it has no choice but to either hit and kill a pedestrian or drive off a cliff, killing the driver. Although Colin isn’t tackling the impossible feat of solving these ethical dilemmas, he is developing the language we need to communicate ethics to AI with an accuracy that machine learning can’t achieve. So who does decide how these robots will respond to ethical quandaries? While it isn’t part of Colin’s research, he believes this question is best answered by the communities the technologies will serve.
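To give a flavor of what communicating a rule to a machine “with 100% accuracy” can look like, here is a toy Python sketch – not Colin’s actual formalism – where a safety rule is a predicate that must hold at every step of a recorded behavior trace. Unlike a learned model that is right most of the time, the check either confirms the trace obeys the rule or pinpoints exactly where it was violated:

```python
# Safety-rule check sketch: verify that a rule holds at every state of a trace.

def check_always(trace, rule):
    """Return (True, None) if the rule holds at every state,
    else (False, index_of_first_violation)."""
    for i, state in enumerate(trace):
        if not rule(state):
            return (False, i)
    return (True, None)

# Toy autonomous-car states: speed (m/s) and distance to nearest pedestrian (m).
trace = [
    {"speed": 10.0, "pedestrian_distance": 50.0},
    {"speed": 12.0, "pedestrian_distance": 20.0},
    {"speed": 12.0, "pedestrian_distance": 4.0},   # far too fast this close
]

# Made-up rule: within 5 m of a pedestrian, speed must be under 2 m/s.
rule = lambda s: s["pedestrian_distance"] > 5.0 or s["speed"] < 2.0

print(check_always(trace, rule))  # → (False, 2)
```

The hard part, of course, is not checking such a rule but deciding what the rule should say – which is exactly the question Colin leaves to the communities these technologies will serve.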

Colin doing a logical proof on a whiteboard with a 1/10 scale autonomous vehicle in the foreground.

The ArchIve: a (brief) history of AI

AI had its first wave in the ’70s, when it was thought that logic systems (a way of communicating directly with computers) would run AI. Researchers also created perceptrons, which try to mimic a neuron in the brain by sorting data into binary classes, but which, more importantly, have a very cool name. Perceptron! It sounds like a Spider-Man villain. However, logic and perceptrons turned out not to be particularly effective. There are a seemingly infinite number of possibilities and variables in the world, making it challenging to write a comprehensive set of rules. Further, when an AI’s rules are incomplete, it can encounter a situation it doesn’t know could even exist – and then it EXPLODES! Kind of. It enters a state known as the Principle of Explosion, where everything becomes provable and chaos ensues. These challenges with using logic to develop AI led to the first “AI winter”. A highly relatable moment in history, given the number of times I stop working and take a nap because a problem is too challenging.
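For the curious, a perceptron is simple enough to sketch in a few lines of Python. This toy version (an illustration, not the historical hardware) learns the logical AND function, which a single perceptron can handle because the two classes are linearly separable:

```python
# Minimal perceptron sketch: learn weights w and bias b so that
# "weighted sum > 0" matches the binary labels.

def train_perceptron(samples, labels, lr=0.1, epochs=20):
    w = [0.0] * len(samples[0])
    b = 0.0
    for _ in range(epochs):
        for x, target in zip(samples, labels):
            # Predict 1 if the weighted sum is positive, else 0.
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            # Nudge the weights toward the correct answer when wrong.
            err = target - pred
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

def predict(w, b, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

# Logical AND: output 1 only when both inputs are 1.
X = [(0, 0), (0, 1), (1, 0), (1, 1)]
y = [0, 0, 0, 1]
w, b = train_perceptron(X, y)
print([predict(w, b, x) for x in X])  # → [0, 0, 0, 1]
```

The catch that helped trigger the AI winter: a single perceptron cannot learn functions like XOR, where no straight line separates the classes – that takes the networks of the second wave.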

The second wave of AI blew up in the ’80s and ’90s with the development of machine learning methods, and in the mid-2000s it really took off thanks to software that can handle matrix operations rapidly. (And if that doesn’t mean anything to you, that’s okay. Just know that it basically means speedy, complicated math could be done by computers.) This high computational power also meant researchers could revisit the methods of the ’70s and string perceptrons together to form neural networks – moving from binary categorization to complex recognition.

A bIography: Colin’s road to coding computer ethics

During his undergrad studying computer science at Virginia Tech, Colin ran into an ArachnId that left him bitten by a philosophy bug. This led to one of many philosophical dilemmas he’d enjoy grappling with: whether to focus his studies on computer science or philosophy. After reading I, Robot, he answered that question with a “yes”, finding a kindred spirit in the novel’s robopsychologist. This led to a future of combining computer science with philosophy and ethics: from his Master’s program, where he weaved computer science into his philosophy lab’s research, to his current project developing a language for communicating ethics to machines with his advisor, Houssam Abbas. Throughout his journey, however, Colin has become less of a robopsychologist and more of a roboethicist.

Want more information on coding computer ethics? Us too. Be sure to listen live on Sunday, April 17th at 7PM on 88.7FM, or download the podcast if you missed it. Want to stay up to date with the world of roboethics? Find more from Colin at https://web.engr.oregonstate.edu/~sheablyc/.

Colin Shea-Blymyer: PhD student of computer science and artificial intelligence at Oregon State University

This post was written by Bryan Lynn.

Not all robots are hard and made of metal…

Picture a robot. Seriously, close your eyes for 30 seconds and picture a robot in your head. Ok, most of you probably didn’t do it but if you had, my guess is that you would have pictured something very boxy, perhaps with pincher hands, quite awkward in its movements and perhaps with a weird robotic voice pre-Siri era. Or maybe something R2-D2 like. That’s definitely what comes to mind for me. Well, robots don’t all look like that. In fact, some robots aren’t hard and made of metal at all. Some are soft and pliable, and they’re the kind that Nick Bira studies.

Was a career in robotics always on the horizon for Nick? Perhaps…judging by this photo of him with his home-made robot, “Mr. Klanky”.

Nick is a 3rd-year PhD student in the Department of Robotics working with Dr. Joseph Davidson. When asked to summarize his research in just a few words, Nick answered that he works on magnetism and soft robotics. What is soft robotics, and why would we want a soft robot, you may ask? (I know I certainly did.) Well, soft robotics is exactly what the phrase implies – robots that are soft, with absolutely no hard parts (or very few). Why would we want a soft robot? Imagine you have a small space you need a robot to fit through, like a small hole: a soft robot can mold into whatever shape you need. Soft robots are also increasingly needed and used in medical robotics. After all, you don’t want some hard, klanky thing poking around inside of you and possibly causing damage. You’d much rather have something soft, gentle, compliant, and non-damaging. Another example is increasing the safety of human-robot interactions. A big, metallic, hard robot on an assembly line could easily spin and injure a human, but a robot with floppy, soft, tentacle-like arms might push you over and bruise you without causing serious damage.

The utility of soft robotics is manifold. So why aren’t soft robots used more, and why haven’t you heard much about them before? Well, the challenge is keeping the utility of a hard robot while making it soft and, by proxy, safe. In part, this comes down to how the robot and its movements are controlled. Most soft robots to date are controlled by either pneumatics or hydraulics (using air or liquid pressure). The downside of these is that the soft robot has to be accompanied by bulky hard components, such as pumps, electrical sources, batteries, or air tanks. So even though you may have this super soft, compliant robot, it comes with large apparatuses that are not soft. Kind of counter-intuitive.

This is where the other half of Nick’s research phrase comes in: magnetism. Magnetism has seen very limited use as a tool in soft robotics, and Nick thinks it should be applied more. If you’re having a hard time picturing how a magnet could be used in soft robotics, visualize this example Nick gave us. It could be used in a pincher – instead of using air pressure to inflate the pincers to open and close them, you could make the fingers of the pincer out of a stretchy magnetic material that closes when exposed to a magnetic field. Seems pretty simple, right? And yet it doesn’t exist in soft robotics. This is why Nick is exploring the possibility: he believes ideas like this could be useful building blocks, and once we have them, we can build more complicated things.

Now, you may be thinking: hang on, magnets are hard – I thought this was all about soft robotics? Good thought; here’s how Nick plans to work around that. He is embedding iron particles, which are magnetically soft, into silicone rubber, a soft elastic material, to make a material that is soft and hyperelastic and that will stick to an ordinary magnet brought close to it. However, this is only step one. Nick is interested in creating magnetic fields within the robot, rather than having it work only when a big, hard magnet is nearby – one core goal of soft robotics is to have robots function on their own, without some hard object nearby to ‘support’ them. He is still in the development and testing stages of this material, but Nick does have an application in mind. He wants to make a magneto-rheological fluid (MRF) valve that can be used in soft robots. Rather than opening and shutting this valve with air pressure (which would require air tanks to accompany the robot), Nick wants it to open and close through a magnetic field generated by the elastic, soft magnetic material. This way everything would be compact and stretchy, and wouldn’t require any additional bulky parts.

To hear more about Nick’s research and also about his journey to OSU and more on his personal background, tune in on Sunday, February 16 at 7 PM on KBVR Corvallis 88.7 FM or stream live. Also, be sure to check out his Instagram (@nick_makes_stuff and @nick_bakes_stuff) and Twitter (@BiraNick) accounts. 

How many robots does it take to screw in a light bulb?

As technology continues to improve over the coming years, we are beginning to see increased integration of robotics into our daily lives. Imagine if these robots were capable of receiving general instructions for a task and could learn, work, and communicate as a team to complete it with no additional guidance. Our guest this week on Inspiration Dissemination, Connor Yates, a robotics PhD student in the College of Engineering, studies artificial intelligence and machine learning and wants to make that hypothetical scenario a reality. Connor and other members of the Autonomous Agents and Distributed Intelligence Laboratory are keenly interested in distributed reinforcement learning, optimization, and control in large, complex robotic systems. Applications of this include multi-robot coordination, mobile robot navigation, transportation systems, and intelligent energy management.

Connor Yates.

A longtime Beaver and native Oregonian, Connor grew up on the eastern side of the state. His father was a botanist, which naturally translated to a lot of time spent in the woods during his childhood. This, however, did not deter his aspirations of becoming a mechanical engineer building rockets for NASA. Fast forward to his first term as an undergraduate here at Oregon State University: while taking his first mechanical engineering course, he realized rocket science wasn’t the academic field he wanted to pursue. After taking numerous different courses, one piqued his interest – computer science. He went on to flourish in the computer science program, eventually meeting his current PhD advisor, Dr. Kagan Tumer. Connor worked with Dr. Tumer for two of his undergraduate years and completed his undergraduate honors thesis on better gauging the intent of multiple robots working together in one system.

Connor taking in a view at Glacier National Park 2017.

Currently, Connor is working on improving machines’ ability to learn by implementing a reward system; think of a “good robot”/“bad robot” system. Using computer simulations, a robot can be assigned a general task. Robots usually begin learning a task with many failed attempts, but through the reward system, good behaviors can be reinforced and behaviors unrelated to the assigned task can be discouraged. Over thousands of trials, the robot eventually learns what to do and completes the task. Simple, right? However, this becomes incredibly more complex when a team of robots is assigned to learn a task. Connor focuses on rewarding not just successful completion of an assigned task, but also progress toward completing it. For example, say you have a table that requires six robots to move. When two robots attempt the task and fail, rather than just viewing it as a failed task, the robots can learn that two robots are not enough and recruit more robots until the task is successfully completed. This is a stepwise progression toward success rather than an all-or-nothing situation. It is Connor’s hope that one day a robot team could not only complete a task but also report the reasons why it made a particular decision along the way.
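The difference between an all-or-nothing reward and one that credits progress can be sketched with a toy example. The numbers and functions here are invented for illustration (not Connor’s code), using the six-robot table from above:

```python
# Reward-shaping sketch: an all-or-nothing reward gives the team no signal
# until the task succeeds, while a shaped reward also credits progress,
# such as recruiting more robots toward the six needed to move the table.

ROBOTS_NEEDED = 6

def all_or_nothing_reward(robots_at_table):
    """Reward only on full success."""
    return 1.0 if robots_at_table >= ROBOTS_NEEDED else 0.0

def shaped_reward(robots_at_table):
    """Partial credit for progress toward the needed team size."""
    return min(robots_at_table, ROBOTS_NEEDED) / ROBOTS_NEEDED

for n in [2, 4, 6]:
    print(n, all_or_nothing_reward(n), round(shaped_reward(n), 2))
# prints: 2 0.0 0.33 / 4 0.0 0.67 / 6 1.0 1.0
```

With the all-or-nothing reward, two robots failing and five robots failing look identical to the learners; the shaped reward tells them that recruiting a sixth teammate was a step in the right direction.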

In Connor’s free time he enjoys getting involved in the many PAC courses that are offered here at Oregon State University, getting outside, and trying to teach his household robot how to bring him a beer from the fridge.

Tune in to 88.7 FM at 7:00 PM Sunday evening to hear more about Connor and his research on artificial intelligence, or stream the program live.

A Softer Side of Robots

Do me a favor: close your eyes for a few seconds and think of a robot, any robot, real or imaginary.

Done? Good. Now, that robot you thought about – what did it look like? What did it do? What was it made of? The answers to the first two questions will likely differ from person to person: perhaps a utilitarian, cylindrical robot that helps with menial tasks like cleaning and homework, or a humanoid robot hell-bent on crushing, killing, and/or destroying humans. I’m willing to bet, however, that the majority of answers to the last question are one word: “metal”.

Most of our images of robots, droids, and automatons (i.e. R2-D2, the Cybermen, or Wall-E), including robots that we encounter in day-to-day life, are made of metal, but that might change in the future. The future of robotics is not simply to make robots harder, better, faster, or stronger, but also softer. Robots that must interact with humans and other living or delicate things must have the capacity to be gentle.

Samantha works on the jumping spider model that mimics a jumping spider by using an air hockey table with a tethered puck with a consistent starting speed


Researchers like Samantha Hemleben are beginning to explore the world of soft robotics, creating robots that are made out of soft materials and actuated by changes in air pressure. These robots could be used for tasks where a light touch is needed to avoid bruising, such as human contact or fruit picking. Currently, the technology to create soft robots involves making a 3D-printed mold and then casting the silicone robot parts in those molds. If you need a robot that has both soft and firm parts, it must be designed in separate steps, reducing efficiency and effectiveness.

This is where Samantha comes in; she’s trying to optimize this process. When she started her undergrad at Wofford College, she tried out biology, pharmacy, and finance, but didn’t feel challenged by them. Switching to mathematics with a computer science emphasis allowed her creativity to flourish, and she was able to secure a Research Experience for Undergraduates position here at OSU, modeling a robot that mimics the movements of jumping spiders. This experience heavily influenced her decision to get her Ph.D. at OSU.

Samantha is now a 2nd-year Ph.D. student of Drs. Cindy Grimm and Yiğit Mengüç in robotics (School of Mechanical, Industrial, and Manufacturing Engineering). Her research focuses on understanding the gradient between hard and soft materials: she’s creating mathematical models of this gradient so that the manufacturing process can be optimized and soft robots will be able to stand on solid ground.

Tune in on Sunday, July 24th at 7PM PDT on 88.7FM or stream live at http://www.orangemedianetwork.com/kbvr_fm/

Teaching Old Factories New Tricks

There’s more than one way to skin a cat, but you can’t teach an old dog new tricks. This just about sums up the status of modern manufacturing. Although it may make an entertaining reality show, I don’t mean to imply that factories are trying to teach old dogs new ways to skin cats.

It used to be that the manufacturing process was simple: design a part and pick a material to machine it out of. In the last decade or two, major breakthroughs in engineering have led to the development of drastically different manufacturing techniques. For example, additive manufacturing (e.g. 3D printing and friction welding) can reduce material waste while still yielding a part with the same strength and functionality as other methods. Although these new methods have caught the public’s attention, they don’t always transition into factories as quickly as one might expect.

Companies tend to be slow to adopt new techniques due to the cost of retooling and a lack of good comparisons between old and new methods. Working in Karl Haapala’s lab, Harsha Malshe hopes to bring some clarity to this process with a computer program that can help companies sort through all the new manufacturing options and compare them with the tried-and-true methods. The program Harsha is helping to build, along with his colleagues in the Haapala lab, will allow engineers to submit their part designs and get back a detailed comparison of all the manufacturing options for that part. Hopefully this information will encourage companies to embrace new manufacturing technologies that save money and resources – or maybe we’ll find out that the old dog already knows the best tricks. I’m guessing the answer lies somewhere in the middle.

We’ll be talking with Harsha on this week’s episode to learn more about the rapidly changing field of manufacturing engineering.

Printing Parts for Planes and Hearts

From medical implants to aerospace engineering, Ali Davar Panah is working with new technology in incremental forming (similar to 3D printing) that might allow thermoplastics and biodegradable polymers to be customized and produced for a variety of applications. Similar to dissolving stitches, items made from biopolymers could be of great medical value: once in the body they would serve their purpose and then dissolve entirely, with no surgical removal required. Biopolymer printing would also be valuable for producing any number of disposable plastic items (coffee lids or plastic silverware, for example) which would decompose completely if buried. Because this type of incremental forming is a room-temperature operation, it is also useful for producing complex geometric surfaces made from heat-sensitive plastics, such as those used on the insides of airplanes or space shuttles.

Ali is a doctoral student working underneath Dr. Malhotra in the Advanced Manufacturing program here in OSU’s Mechanical Engineering department. Tonight, tune in to 88.7FM KBVR Corvallis at 7PM PST, or stream the show live online at http://kbvr.com/listen to learn more about Ali’s work and his story!