[Photo: Andrew Dassonville with an airplane]

Air travel can be made safer with artificial intelligence guarding against human error. That’s the vision of Andrew Dassonville, an engineering senior at Oregon State University, who recently took second place in a national airport design competition.  

Human error is the leading cause of commercial airline crashes and general aviation accidents, according to the Federal Aviation Administration. Dassonville, who studies computer science and robotics, zeroed in on radio communications as one source of human error where AI can provide a critical safety check.

Dassonville was awarded second place in the runway safety category at the 2022 ACRP University Design Competition, which challenges students to create innovative solutions for issues facing airports and the National Airspace System. The competition is sponsored by the Airport Cooperative Research Program, part of the National Academies of Sciences, Engineering, and Medicine’s Transportation Research Board.

In Dassonville’s design, an artificial intelligence-based system constantly “listens in” on radio exchanges between pilots and air traffic controllers, looking for discrepancies in communication, such as readback errors. Suppose, for example, a controller instructs aircraft ABC to climb and maintain 8,000 feet, but the pilot reads back 9,000 feet. The eavesdropping AI would catch the error and avert potential disaster.

“This system is capable of identifying that discrepancy and would alert the controller that the aircraft might not be doing what they’re expecting,” Dassonville said.
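Dassonville’s actual implementation is not described in code, but the core check is simple to sketch. The hypothetical Python fragment below illustrates one way such a comparison could work, assuming a speech-to-text stage has already transcribed both transmissions; the function names and the altitude-parsing pattern are invented for illustration and are not taken from his design.

```python
import re
from typing import Optional

# Hypothetical sketch of the readback check described above. It assumes a
# speech-to-text stage has already transcribed both transmissions; this
# simplified version compares only assigned altitudes.

ALTITUDE = re.compile(r"maintain\s+(\d{1,2}),?(\d{3})\s*feet", re.IGNORECASE)

def extract_altitude(transcript: str) -> Optional[int]:
    """Return the altitude (in feet) mentioned in a transmission, if any."""
    match = ALTITUDE.search(transcript)
    if match is None:
        return None
    return int(match.group(1) + match.group(2))

def readback_mismatch(instruction: str, readback: str) -> bool:
    """True when the pilot's readback altitude differs from the clearance."""
    cleared = extract_altitude(instruction)
    read_back = extract_altitude(readback)
    return None not in (cleared, read_back) and cleared != read_back

# The scenario from the article: cleared to 8,000 feet, read back as 9,000.
if readback_mismatch("ABC, climb and maintain 8,000 feet",
                     "Climb and maintain 9,000 feet, ABC"):
    print("ALERT: readback does not match clearance")
```

A real system would of course need far more than this: robust recognition of noisy radio audio, call-sign matching so instructions are paired with the right aircraft, and checks on headings, frequencies, and runway assignments, not just altitudes.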

Dassonville, an avid pilot who discovered his passion for flying through the Oregon State Flying Club, saw the competition as a perfect overlap of his interests in aviation and computer science.

“As a pilot, safety is always on your mind, and you’re taking on some risk whenever you take off,” Dassonville said. “Being able to use my skills that I’ve learned at Oregon State through computer science in order to help mitigate risks in aviation is pretty cool.”

Kiri Wagstaff, associate research professor of computer science at Oregon State, advised Dassonville on the project.

“Andrew is an outstanding student and pilot,” Wagstaff said. “As a pilot myself, I’m very excited about Andrew’s concept, and I have thoroughly enjoyed discussing AI, flying adventures, and flight training with him.”

After graduating, Dassonville plans to pursue a career in aviation.

“I’d love a career that combines computer science, robotics, and aviation,” he said. “It could be something that involves self-flying planes, autopilot technologies, or aviation instruments.”


Could artificial intelligence take over the world? The question captured the attention of the media this year when Bill Gates, Stephen Hawking, and Elon Musk spoke publicly about the dangers of artificial intelligence (AI).

Gates said he is “concerned about super intelligence,” Hawking warned that “the development of full artificial intelligence could spell the end of the human race,” and Musk described AI as “our biggest existential threat.”

Tom Dietterich, president of the Association for the Advancement of Artificial Intelligence and a distinguished professor of computer science at Oregon State University, has been busy this year providing an academic perspective on the issue in articles, video, and radio. He was the plenary speaker at “Wait, What? A Future Technology Forum,” hosted by DARPA on September 9-11, 2015.

In an article by Digital Trends in February, Dietterich was tapped for his expertise on the topic:

Dietterich lists bugs, cyber-attacks and user interface issues as the three biggest risks of artificial intelligence — or any other software, for that matter. “Before we put computers in control of high-stakes decisions,” he says, “our software systems must be carefully validated to ensure that these problems do not arise.” It’s a matter of steady, stable progress with great attention to detail, rather than the “apocalyptic doomsday scenarios” that can so easily capture the imagination when discussing AI.

In July, Dietterich was interviewed for NPR’s On Point episode “Managing the Artificial Intelligence Risk,” during which he tackled a question from a caller who argued that robots should be programmed to love.

He responded, “My sense is that we should make a very clear distinction between robotic artificial intelligence and humans. I don’t think it’s appropriate to talk about a robot loving anything… Love is a relationship between people.”

When interviewer Tom Ashbrook pressed further, saying, “But if one day AI runs the world and does not recognize love…”

Dietterich jumped in to say, “We will not let AI run the world… It’s a technology that should be used to enhance our humanity.”

You can listen to or download the entire show from the On Point website. Dietterich’s portion begins at minute 36.

Dietterich was also featured by Business Insider, Business Insider Australia, FedScoop, Microsoft Research, PC Magazine, Tech Insider, and the U.S. Department of Defense; was filmed by Communications of the ACM and KEZI; and has been mentioned in articles by the Wall Street Journal, Tech Times, and The Corvallis Advocate.