By: Rachael Andrews

SPIA’s Dr. Andrew Whitford spoke at the Advanced Technology Academic Research Center (ATARC) Conference earlier this month. Around 300 federal employees attended, and Dr. Whitford’s panel focused on responsible and ethical artificial intelligence (AI). Alongside public administration practitioners, he brought an academic perspective on ethical AI grounded in his research with fellow SPIA professors Jason Anastasopoulos and Micah Gell-Redman.

When it comes to AI, government plays two roles: regulator and user. In fact, the federal government is the world’s single largest annual purchaser of information technology. The panel Dr. Whitford spoke on covered both sides: what uses AI will be put to in the private sector and how government will be involved in regulating them, and under what circumstances, and for what purposes, government itself will decide to use AI.

“You can think of AI as the next iteration of the use of prediction engines,” Whitford says, “so a simple example that Jason, Micah, and I have been working on is [that] there’s a long debate emerging on algorithmic bias, which is ‘are AI engines biased in how they make decisions?’”

An example of the kind of decision AI can make is predicting who will commit crimes. Right now, humans make these decisions, typically through parole boards. The question is whether an algorithm can make those decisions without bias. There is already evidence, however, that algorithms tend to over-predict criminal activity for some racial groups.
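To make that question concrete, here is a minimal sketch of how an auditor might probe a risk-prediction model for one common form of bias: comparing false positive rates across demographic groups. The data and groups below are entirely made up for illustration; this is not drawn from Whitford’s research or any real recidivism tool.

```python
# Illustrative only: synthetic records, not a real recidivism model or dataset.
# One common audit question: does the model flag people who will NOT reoffend
# ("false positives") more often for one group than another?

records = [
    # (group, model_predicted_high_risk, actually_reoffended)
    ("A", True,  False), ("A", False, False), ("A", True,  True),  ("A", False, False),
    ("B", True,  False), ("B", True,  False), ("B", True,  True),  ("B", False, False),
]

def false_positive_rate(group):
    """Share of people in `group` who did not reoffend but were flagged high risk."""
    non_reoffenders = [r for r in records if r[0] == group and not r[2]]
    flagged = [r for r in non_reoffenders if r[1]]
    return len(flagged) / len(non_reoffenders)

for g in ("A", "B"):
    print(f"Group {g}: false positive rate = {false_positive_rate(g):.2f}")
# A large gap between the two rates is the kind of disparity the debate is about.
```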

“The question then is: are algorithms simply embedding the bias that is already present in society, or are they free of that bias?” Whitford explains. “There are different questions about that over time. The basic one is [about] how we build the algorithms. For some algorithms, we understand how their internal workings operate; for others, we have no idea.”

Whitford explains that some algorithms learn as they go, and humans have no way of knowing how they come to believe the things that they believe. These algorithms are called ‘black box,’ or closed, systems. Because their internal workings are hidden, it is impossible to go inside and figure out how a bias might emerge. Black boxes tend to do a better job of predicting, but there is no way for programmers or engineers to know why.
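As a rough illustration of that distinction, the sketch below fits two models on synthetic data using the widely available scikit-learn library; it is not tied to Whitford’s work or any specific government system. A logistic regression exposes one coefficient per input that a reviewer can inspect and question, while a random forest offers no comparably direct account of why any individual case was scored the way it was.

```python
# Sketch of "interpretable" vs. "black box" models on synthetic data (illustrative only).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))                  # three made-up input features
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # outcome driven by features 0 and 1

# Interpretable: each feature gets one coefficient a reviewer can read and challenge.
transparent = LogisticRegression().fit(X, y)
print("logistic regression coefficients:", transparent.coef_[0])

# Black box: hundreds of decision trees vote; no single set of weights explains
# why any one case was scored the way it was.
opaque = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
print("random forest prediction for one case:", opaque.predict(X[:1])[0])
```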

But even a non-biased algorithm could be used in irresponsible or unethical ways by government or private-sector actors, hence the need for oversight.

Whitford stresses that AI will become much more commonplace in the future. “Over time, we will use algorithms to predict all kinds of things. In the United States, you can already see this in facial recognition technology.” 

Machines learn faster than humans and over far larger data sets: an AI can read over a million cases, for example, whereas a human could not read that many in a lifetime. AI can make both government and private-sector business much more efficient.

Thus, “there’s no way to put the genie back in the bottle,” Whitford says. Practitioners now have to figure out the conditions under which governments and the private sector will use AI going forward. Will AI replace human employees? What kinds of decisions should AI systems make, and how do those decisions translate into good or bad outcomes? How do governments and businesses choose which algorithm to use, and should practitioners put limits on what those algorithms can and cannot do?

Perhaps no example is better suited to these questions than autonomous weapons: drones that fly themselves and decide when to shoot. “Ethicists have been working on this problem for years. [It’s known as] the trolley problem,” Whitford asserts. With autonomous vehicles, for instance, at what point should the vehicle drive off a cliff, killing its own driver, to avoid hitting, and perhaps killing, innocent bystanders?

Whitford says a common misconception is that AI is science fiction. He points to a line from noted science fiction author William Gibson: “the future is already here, it’s just not widely distributed.” Industries are already making use of AI, so practitioners must now decide how to use and regulate it.

Another misconception is that AI will replace humans, Whitford says. “[In the] long term, we just don’t know. Robotics has taken over a lot of manufacturing jobs, but most likely, humans will work alongside AI, [and] AI will probably just accentuate the problems we already have.” Whitford argues that AI will make the same kinds of judgments humans do, but at far greater capacity. “The biases that humans already have will just be supercharged because they have a greater capacity to make judgements.” For example, Twitter bots can harass users at far greater scale because machines, not people, are behind the accounts (hence the term “bots”).

For public administrators, Whitford says, “We should understand that having a single ethical system is impossible. Doctors have different ethical systems than do engineers. The application settings of AI will matter a great deal.”

Furthermore, Whitford insists, this is just the beginning of AI and its uses. “We tend to pay attention to these questions when they’re problems and when bad things happen.” Conversations about the future of AI need to happen before those bad things do.