By: Rachael Andrews

Dr. Jason Anastasopoulos, assistant professor of political science and public administration, was recently awarded a Visiting Faculty Fellowship by the Edmond J. Safra Center for Ethics at Harvard University to study ethical issues involving governance with artificial intelligence (AI). Anastasopoulos plans to focus his courses on ethical issues surrounding the use of AI in public health and criminal justice.

At first glance, it may seem odd for a political scientist to study AI, but Anastasopoulos says that artificial intelligence affects many aspects of government. “AI is a very practical way for government workers to make decisions about individuals through the use of algorithms,” he explains.

The leading example is the criminal justice system, where AI algorithms assign risk scores that help bureaucrats (parole officers, judges, etc.) make decisions about individuals in the system. These scores estimate the relative risk that each individual will commit another crime.
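To make that concrete, here is a minimal sketch of what such a scoring model might look like. Every feature, weight, and threshold below is invented for illustration; the actual tools used in parole and sentencing decisions are proprietary and far more complex.

```python
import math

# A toy, hypothetical risk-scoring model of the kind the article
# describes. The features and weights are invented for illustration;
# real tools used in parole and sentencing are proprietary and far
# more complex.

def recidivism_risk_score(prior_offenses: int, age: int,
                          months_since_release: int) -> float:
    """Return a risk estimate between 0 and 1 via a logistic model."""
    # Hypothetical weights: more prior offenses raise the score;
    # age and time since release lower it.
    z = 0.45 * prior_offenses - 0.05 * age - 0.02 * months_since_release + 1.0
    return 1.0 / (1.0 + math.exp(-z))  # logistic (sigmoid) function

def risk_band(score: float) -> str:
    """Collapse the score into the label a parole officer or judge sees."""
    if score < 0.33:
        return "low"
    if score < 0.66:
        return "medium"
    return "high"

if __name__ == "__main__":
    score = recidivism_risk_score(prior_offenses=3, age=27,
                                  months_since_release=6)
    print(f"score={score:.2f}, band={risk_band(score)}")
```

The point of the sketch is the workflow, not the numbers: a handful of inputs is reduced to a single score, and the score is collapsed into a "low/medium/high" band, which is what the decision-maker actually sees.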

AI systems are not only tools for government workers; they also play a quieter role in politics itself. “When you are watching YouTube or political news, you might get recommendations or posts based on your political preferences,” Anastasopoulos says. “Through those recommendations, you might become more left-leaning or more right-leaning, depending on how the AI systems make predictions based on what your preferences are.”
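The feedback loop he describes can be seen in miniature. In the toy sketch below (the video titles, "lean" scores, and update rule are all invented for illustration; real recommender systems are vastly more complex), a system recommends whatever it predicts the user will engage with, then updates its estimate of the user, nudging a slightly right-leaning viewer steadily rightward and a slightly left-leaning one leftward.

```python
# Hypothetical videos with a political "lean" from -1 (left) to +1 (right).
videos = {
    "City council debate": -0.2,
    "Progressive policy explainer": -0.8,
    "Conservative talk show clip": 0.8,
    "Nonpartisan voting guide": 0.0,
}

def recommend(user_lean: float) -> str:
    """Pick the video with the highest predicted engagement, scored
    as the product of user lean and video lean (a simple stand-in
    for a collaborative-filtering score)."""
    return max(videos, key=lambda v: videos[v] * user_lean)

def update_lean(user_lean: float, watched_lean: float,
                rate: float = 0.3) -> float:
    """Nudge the system's estimate of the user toward what they watched."""
    return user_lean + rate * (watched_lean - user_lean)

if __name__ == "__main__":
    lean = 0.1  # a user who starts just right of center
    for _ in range(5):
        pick = recommend(lean)
        lean = update_lean(lean, videos[pick])
        print(f"recommended: {pick!r}, estimated lean -> {lean:+.2f}")
```

Run as written, the user drifts from +0.10 toward the most right-leaning video after only a few recommendations; start the same user at -0.10 and the drift runs the other way.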

In addition, AI can be used for censorship, which Anastasopoulos says is difficult to study. Algorithms may be used to suppress certain topics or issues, either to distort the information environment or to block information entirely.

As useful as AI may seem, ethical questions and biases emerge when the responsibility for decision-making is delegated to AI systems rather than to human beings.

“If we delegate responsibility to these AI systems to allow them to govern us, there are many ethical issues, like racial biases in the systems to consider,” Anastasopoulos mentions. “There are also many ethical issues that we may not even know yet, that we could only discover through the continued use of AI.”

For example, in the criminal justice system, many studies show that AI systems flag African-Americans as high-risk individuals at a higher rate than people of other racial groups, a pattern that may perpetuate already-existing discrimination.
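Such studies typically quantify the problem by comparing error rates across groups. The sketch below shows the core of that kind of audit, computing false positive rates (people flagged high-risk who did not in fact reoffend) by group; the records are fabricated for illustration.

```python
from collections import defaultdict

# Fabricated outcome records for illustration only:
# (group, flagged_high_risk, reoffended)
records = [
    ("A", True, False), ("A", True, False), ("A", True, True), ("A", False, False),
    ("B", True, False), ("B", False, False), ("B", False, False), ("B", True, True),
]

flagged_no_reoffense = defaultdict(int)  # flagged high-risk, did not reoffend
no_reoffense = defaultdict(int)          # all who did not reoffend

for group, flagged, reoffended in records:
    if not reoffended:
        no_reoffense[group] += 1
        if flagged:
            flagged_no_reoffense[group] += 1

for group in sorted(no_reoffense):
    fpr = flagged_no_reoffense[group] / no_reoffense[group]
    print(f"group {group}: false positive rate = {fpr:.0%}")
```

On this fabricated data, group A is wrongly flagged twice as often as group B, which is the shape of the disparity the studies report.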

AI systems have also been used to detect physiological changes, such as fever, that can indicate certain diseases, including COVID-19. Public health officials have discussed using AI systems for contact tracing, detecting fevers, and predicting the probability that someone will contract the disease. However, the same concerns about bias arise in public health as in criminal justice.

“If there was an AI that predicted if someone would contract COVID-19, that person would have to be quarantined,” Anastasopoulos describes. “But if those systems flag African-Americans or other groups as more likely to get sick, they could be further stigmatized.”

In the future, every person who works in government will likely have to deal with AI systems directly or make decisions about using them, Anastasopoulos says.

“People who have training in political science and public policy, as a basic part of how to analyze data, are going to have to understand what AI systems do,” Anastasopoulos concludes. “They will also need to understand the dangers they can have when you use them to make decisions and how to address those dangers.”