This fall, students at the Illinois Institute of Technology will be among the first in the country to have the option of pursuing an undergraduate degree in AI.
“AI is the future. We want to train a workforce that can tackle the challenges and opportunities of the future, which includes AI and machine learning,” said Aron Culotta, associate professor of computer science and director of Illinois Tech’s Bachelor of Science in Artificial Intelligence program.
Historically, AI has been taught at the graduate level because it was more of a research area than a core component of computer science. But as the field has matured, Illinois Tech decided it was time to offer it as an undergraduate degree.
“We thought it was time to move some of these courses and concepts down to the undergraduate level so that when they graduate they will have both the traditional computational and design aspects as well as a good command of a number of these AI approaches,” said Culotta.
Graduates will be prepared to work across many sectors, including tech, medicine, finance, robotics, business intelligence, law and insurance.
One key component of the program will be to give students a thorough grounding in ethics. While many science fiction movies have explored the dangers of AI that turns against its makers, from Fritz Lang’s “Metropolis” to Stanley Kubrick’s “2001: A Space Odyssey,” Culotta said those threats are “pretty overblown.”
“There are many things to worry about before we worry about robots turning against us,” he said.
One issue that’s been discussed a lot in the field recently, he said, is bias and fairness.
“So if I have AI that is trying to predict recidivism -- the thing with machine learning is that you train (the AI) on historical data,” said Culotta. “And so that tends to lead to scenarios where algorithms either reproduce existing biases or perhaps make them worse just because of the nature of how they work.”
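The mechanism Culotta describes can be shown with a toy sketch. Everything below is invented for illustration (it is not course material or a real recidivism model): a "model" that simply memorizes the most common historical outcome for each group will echo, and can even amplify, whatever disparity the historical records contain.

```python
from collections import Counter

# Synthetic "historical" records: (group, outcome) pairs. In this invented
# history, group A was recorded as reoffending 20% of the time, group B 60%.
history = [("A", 0)] * 80 + [("A", 1)] * 20 + [("B", 0)] * 40 + [("B", 1)] * 60

def train_majority_per_group(records):
    """'Train' by memorizing the most common outcome seen for each group."""
    counts = {}
    for group, label in records:
        counts.setdefault(group, Counter())[label] += 1
    return {g: c.most_common(1)[0][0] for g, c in counts.items()}

model = train_majority_per_group(history)

# Historical rates were 20% (A) vs 60% (B); the learned rule hardens that gap
# into 0% vs 100% -- it doesn't just reproduce the bias, it amplifies it.
print(model)  # {'A': 0, 'B': 1}
```

Real machine-learning models are far more sophisticated, but the same dynamic applies: whatever patterns, fair or not, are in the training data become the model's predictions.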
Because of that, Culotta said it’s important to instill in students the importance of transparency in their work and of creating AI that can explain its reasoning to humans.
“To earn the trust of humans we need to create algorithms that report to the humans and describe, ‘This is the decision I am making and this is why,’” said Culotta. “And then the human can evaluate not only the decision but the reasoning to be able to make sure that the reasoning is valid in its context.”
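In code, the pattern Culotta describes amounts to returning a decision together with the reason it was reached. This is a minimal hypothetical sketch (the loan rules and thresholds are invented, not from any real system), showing a function a human can audit for both verdict and reasoning:

```python
def approve_loan(income, debt):
    """Return (decision, explanation) instead of a bare yes/no,
    so a reviewer can check the reasoning, not just the outcome."""
    if debt > income:
        return False, f"declined: debt {debt} exceeds income {income}"
    if income >= 3 * debt:
        return True, f"approved: income {income} is at least 3x debt {debt}"
    return False, "declined: income-to-debt ratio below 3x threshold"

decision, reason = approve_loan(income=60_000, debt=15_000)
print(decision, "-", reason)  # True - approved: income 60000 is at least 3x debt 15000
```

With the explanation in hand, a human reviewer can reject a decision whose stated reasoning is invalid in context, which is exactly the evaluation Culotta describes.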