Last year I watched a very thought-provoking documentary series called Pandora’s Box: A Fable from the Age of Science. This six-part series, originally broadcast by the BBC in 1992, examines the consequences of political and technocratic rationalism. The first episode focused on how the Soviet Union used rational scientific methods to plan its economy and society. Scientists in the early Soviet era believed that they could teach citizens to work and behave in a rational manner. Embracing the view that society could function like a machine, engineers created a rational plan that dictated what every single factory in the country would produce. Strict adherence to this plan led to absurd outcomes as society tried to meet the benchmarks and quotas it was assigned. The episode concludes with the original grand plan having devolved into nothing more than a pointless and elaborate ritual.
Many of the game-changing technologies that came up in the class discussion rely on AI. Large quantities of data and complex algorithms are used to create ever-improving models. The limitless potential and grand promises of these AI-powered solutions reminded me of the documentary. The lesson I took from it was not that we should abandon rational solutions altogether, but that we should be cautious about relying too heavily on AI to solve our problems. AI can help uncover our blind spots, but it can also entrench them further. Amazon came under fire a few years ago when its experimental employee recruitment tool was found to be discriminatory. The model had been trained on data from the company’s existing workforce, so the tool effectively learned to maintain the status quo. Creating a model that accounts for every possible factor is very difficult; Amazon abandoned the tool because, even after remedying the original flaw, its engineers could never be certain that further biases would not surface.
The trolley dilemma is often raised in discussions of the ethics of autonomous vehicles. Taking life-and-death decisions out of human hands carries serious risks. Another episode of the documentary focused on the United States’ use of game theory and systems analysis in its nuclear strategy during the Cold War. It presents the Cuban Missile Crisis as a case in which following the rational suggestions of analysts would have been the wrong move: the president’s threat of mutually assured destruction was considered irrational by the analysts, but it ultimately ended the crisis.
I think some of the high expectations people have of AI solutions should be tempered. AI, by virtue of being created by humans, can never truly eradicate human error. Still, its ability to reduce human error can bring significant improvements to society, and I look forward to seeing how this technology is further explored and deployed.