{"id":2287,"date":"2022-04-15T14:37:59","date_gmt":"2022-04-15T21:37:59","guid":{"rendered":"https:\/\/blogs.oregonstate.edu\/inspiration\/?p=2287"},"modified":"2022-04-27T19:43:40","modified_gmt":"2022-04-28T02:43:40","slug":"i-roboethicist","status":"publish","type":"post","link":"https:\/\/blogs.oregonstate.edu\/inspiration\/2022\/04\/15\/i-roboethicist\/","title":{"rendered":"I, Roboethicist"},"content":{"rendered":"\n<p>This week we have <a href=\"https:\/\/web.engr.oregonstate.edu\/~sheablyc\/\">Colin Shea-Blymyer<\/a>, a PhD student from OSU\u2019s new AI program in the departments of Electrical Engineering and Computer Science, joining us to talk about coding computer ethics. Advancements in artificial intelligence (AI) are exploding, and while many of us are excited for a world where our Roomba\u2019s evolve into Rosie\u2019s (\u00e1 la The Jetsons) &#8211; some of these technological advancements require grappling with ethical dilemmas. Determining how these AI technologies <em>should <\/em>make their decisions is a question that simply can&#8217;t be answered, and is best left to be debated by the spirits of John Stewart Mill and Immanual Kant. However, as a society, we are in dire need of a way to communicate ethics in a language that machines can understand &#8211; and this is exactly what Colin is developing.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">Making <span style=\"text-decoration: underline\">A<\/span>n <span style=\"text-decoration: underline\">I<\/span>mpact: why coding computer ethics matters <\/h2>\n\n\n\n<p>A lot of AI is developed through machine learning &#8211; a process where software becomes more accurate without being explicitly told to do so. One example of this is through image recognition softwares. By feeding these algorithms with more and more photos of a cat &#8211; it will get better at recognizing what is and <em>isn\u2019t <\/em>a cat. 
<a href=\"https:\/\/www.nature.com\/articles\/d41586-019-03013-5\">However, these algorithms are not perfect.<\/a> How will the program treat a stuffed animal of a cat? How will it categorize the image of a cat on a t-shirt? When the stakes are low, like in image recognition, these errors may not matter as much. But for some technology being correct most of the time isn\u2019t sufficient. We would simply not accept a pace-maker that operates correctly <em>most of the time<\/em>, or a plane that doesn\u2019t crash into the mountains with just 95% certainty. Technologies that require a higher precision for safety also require a different approach to developing that software, and many applications of AI will require high safety standards &#8211; such as with self-driving cars or nursing robots. This means society is in need of a language to communicate with the AI in a way that it can understand ethics precisely, and with 100% accuracy.&nbsp;<br><em><a href=\"https:\/\/www.theatlantic.com\/technology\/archive\/2015\/10\/trolley-problem-history-psychology-morality-driverless-cars\/409732\/\">The Trolley Problem<\/a> <\/em>is a famous ethical dilemma that asks: if you are driving a trolley and see that it is going to hit and kill five pedestrians, but you could pull a lever to reroute the trolley to instead hit and kill one pedestrian &#8211; would you do it? While it seems obvious that we want our self-driving cars to not hit pedestrians, what is less obvious is what the car should do when it doesn\u2019t have a choice but to hit and kill a pedestrian or to drive off a cliff killing the driver. Although Colin isn\u2019t tackling the impossible feat of solving these ethical dilemmas, he is developing the language we need to communicate ethics to AI with the accuracy that we can\u2019t achieve from machine learning. So who <em>does <\/em>decide how these robots will respond to ethical quandaries? 
While this question is not part of Colin\u2019s research, he believes it is best answered by the communities the technologies will serve.<\/p>\n\n\n\n<div class=\"wp-block-image is-style-default\"><figure class=\"aligncenter size-large is-resized\"><a href=\"https:\/\/osu-wams-blogs-uploads.s3.amazonaws.com\/blogs.dir\/2150\/files\/2022\/04\/colin_doing_logic-1.jpg\"><img loading=\"lazy\" decoding=\"async\" src=\"https:\/\/osu-wams-blogs-uploads.s3.amazonaws.com\/blogs.dir\/2150\/files\/2022\/04\/colin_doing_logic-1-1024x683.jpg\" alt=\"\" class=\"wp-image-2290\" width=\"465\" height=\"309\" \/><\/a><figcaption> Colin doing a logical proof on a whiteboard with a 1\/10 scale autonomous vehicle in the foreground. <\/figcaption><\/figure><\/div>\n\n\n\n<h2 class=\"wp-block-heading\">The <span style=\"text-decoration: underline\">A<\/span>rch<span style=\"text-decoration: underline\">I<\/span>ve: a (brief) history of AI<\/h2>\n\n\n\n<p>AI had its first wave in the 1970s, when it was thought that logic systems (a way of communicating directly with computers) would run AI. Researchers also created perceptrons, which try to mimic a neuron in the brain to sort data into binary classes but, more importantly, have a <em>very cool name. <\/em>Perceptron! It sounds like a Spider-Man villain. However, logic and perceptrons turned out not to be particularly effective. There is a seemingly infinite number of possibilities and variables in the world, making it challenging to write comprehensive rules. Further, when an AI\u2019s rules are incomplete, it has the potential to enter a world it doesn\u2019t know could even exist &#8211; and then it EXPLODES! Kind of. It enters a state described by the <a href=\"https:\/\/en.wikipedia.org\/wiki\/Principle_of_explosion#:~:text=In%20classical%20logic%2C%20intuitionistic%20logic,is%20the%20law%20according%20to\">Principle of Explosion<\/a>, in which a contradiction lets anything be proven true, and chaos ensues. 
These challenges with using logic to develop AI led to the first \u201cAI winter\u201d &#8211; a highly relatable moment in history, given the number of times I stop working and take a nap because a problem is too challenging.&nbsp;<\/p>\n\n\n\n<p>The second wave of AI blew up in the \u201980s and \u201990s with the development of machine learning methods, and in the mid-2000s it really took off due to software that can handle matrix operations rapidly. (And if that doesn\u2019t mean anything to you, that\u2019s okay. Just know that it basically means computers could now do speedy, complicated math.) Additionally, this high computational power meant researchers could revisit the methods of the 1970s, stringing perceptrons together to form neural networks &#8211; moving from binary categorization to complex recognition.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\"><span style=\"text-decoration: underline\">A<\/span> b<span style=\"text-decoration: underline\">I<\/span>ography: Colin&#8217;s road to coding computer ethics<\/h2>\n\n\n\n<p>While studying computer science as an undergrad at Virginia Tech, Colin ran into an ArachnId that left him bitten by a philosophy bug. This led to one of many philosophical dilemmas he\u2019d enjoy grappling with: should he focus his studies on computer science or philosophy? After reading <em>I, Robot<\/em>, he answered that question with a \u201cyes\u201d to both, finding a kindred spirit in the novel\u2019s robopsychologist. This led to a future of combining computer science with philosophy and ethics: from his Master\u2019s program, where he wove computer science into his philosophy lab\u2019s research, to his current project developing a language to communicate ethics to machines with his advisor <a href=\"http:\/\/www.houssamabbas.com\/\">Houssam Abbas<\/a>. However, throughout his journey, Colin has become less of a robopsychologist and more of a roboethicist.<\/p>\n\n\n\n<p>Want more information on coding computer ethics? Us too. 
Be sure to <a href=\"https:\/\/kbvrfm.orangemedianetwork.com\/\">listen live<\/a> on Sunday, April 17th at 7PM on 88.7FM, or download the <a href=\"https:\/\/share.transistor.fm\/s\/0f1a6900\" data-type=\"URL\" data-id=\"https:\/\/share.transistor.fm\/s\/0f1a6900\">podcast <\/a>if you missed it. Want to stay up to date with the world of roboethics? Find more from Colin at <a href=\"https:\/\/web.engr.oregonstate.edu\/~sheablyc\/\">https:\/\/web.engr.oregonstate.edu\/~sheablyc\/<\/a>.<\/p>\n\n\n\n<div class=\"wp-block-image is-style-rounded\"><figure class=\"aligncenter size-medium is-resized\"><a href=\"https:\/\/osu-wams-blogs-uploads.s3.amazonaws.com\/blogs.dir\/2150\/files\/2022\/04\/colin.jpg\"><img loading=\"lazy\" decoding=\"async\" src=\"https:\/\/osu-wams-blogs-uploads.s3.amazonaws.com\/blogs.dir\/2150\/files\/2022\/04\/colin-300x200.jpg\" alt=\"\" class=\"wp-image-2288\" width=\"309\" height=\"199\" \/><\/a><figcaption>Colin Shea-Blymyer: PhD student of computer science and artificial intelligence at Oregon State University<\/figcaption><\/figure><\/div>\n\n\n\n<p class=\"has-text-align-center\"><em>This post was written by Bryan Lynn.<\/em><\/p>\n","protected":false},"excerpt":{"rendered":"<p>This week we have Colin Shea-Blymyer, a PhD student from OSU\u2019s new AI program in the departments of Electrical Engineering and Computer Science, joining us to talk about coding computer ethics. 
Advancements in artificial intelligence (AI) are exploding, and while many of us are excited for a world where our Roombas evolve into Rosies (\u00e0 [&hellip;]<\/p>\n","protected":false},"author":12105,"featured_media":0,"comment_status":"closed","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"_jetpack_memberships_contains_paid_content":false,"footnotes":""},"categories":[1305552,1305617,1231,1305546],"tags":[741204,1305618,1305620,1305619,155,523],"class_list":["post-2287","post","type-post","status-publish","format-standard","hentry","category-artificial-intelligence","category-coding-ethics","category-computer-science","category-robotics","tag-artificial-intelligence","tag-coding-ethics","tag-computer-science","tag-logic","tag-oregon-state-university","tag-research"],"jetpack_featured_media_url":"","jetpack_sharing_enabled":true,"_links":{"self":[{"href":"https:\/\/blogs.oregonstate.edu\/inspiration\/wp-json\/wp\/v2\/posts\/2287","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/blogs.oregonstate.edu\/inspiration\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/blogs.oregonstate.edu\/inspiration\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/blogs.oregonstate.edu\/inspiration\/wp-json\/wp\/v2\/users\/12105"}],"replies":[{"embeddable":true,"href":"https:\/\/blogs.oregonstate.edu\/inspiration\/wp-json\/wp\/v2\/comments?post=2287"}],"version-history":[{"count":4,"href":"https:\/\/blogs.oregonstate.edu\/inspiration\/wp-json\/wp\/v2\/posts\/2287\/revisions"}],"predecessor-version":[{"id":2323,"href":"https:\/\/blogs.oregonstate.edu\/inspiration\/wp-json\/wp\/v2\/posts\/2287\/revisions\/2323"}],"wp:attachment":[{"href":"https:\/\/blogs.oregonstate.edu\/inspiration\/wp-json\/wp\/v2\/media?parent=2287"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/blogs.oregonstate.edu\/inspiration\/wp-json\/wp\/v2\/categories?post=2287"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/blogs.oregonstate.edu\/inspiration\/wp-json\/wp\/v2\/tags?post=2287"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}