On Wednesday, Sens. Ron Wyden and Cory Booker and Rep. Yvette Clarke introduced the Algorithmic Accountability Act, indicating policymakers’ increasing concern that artificial intelligence is magnifying human bias in tools such as facial recognition, self-driving cars, customer service, marketing, and content moderation.
While A.I. has incredible potential to improve our lives, the truth is that it can only reflect our societal problems right back at us. And because of that, we can’t trust it to make important decisions that are susceptible to human prejudice.
Even the most enlightened of humans have deep-seated biases, difficult to identify and even harder to correct. Today’s A.I. learns by encoding patterns from the data it is fed. If you build an A.I. system designed to identify who is going to be a future convict, for example, the only data you can rely on is past data. Since black Americans make up a larger share of the prison population, and white Americans a smaller share, than their respective shares of the U.S. population, a naive A.I. system will infer that a black person is more likely than a white person to commit a crime.
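To see how directly a naive system inherits that skew, consider a toy sketch in Python. Everything in it is invented for illustration: the two groups, the arrest rates, and the data are synthetic, and the model is an off-the-shelf scikit-learn logistic regression, not any deployed risk tool.

```python
# Toy sketch only: synthetic, invented data and a stock scikit-learn model.
# Nothing here reflects any real criminal-justice dataset or system.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Two hypothetical demographic groups, equal shares of the population.
group = rng.integers(0, 2, size=n)

# "Historical" labels: group 1 was arrested at three times the rate of
# group 0, a stand-in for decades of skewed enforcement.
arrested = (rng.random(n) < np.where(group == 1, 0.30, 0.10)).astype(int)

# Train the naive predictor on the past, then ask it about the future.
model = LogisticRegression().fit(group.reshape(-1, 1), arrested)
for g in (0, 1):
    risk = model.predict_proba([[g]])[0, 1]
    print(f"predicted risk for group {g}: {risk:.2f}")

# Prints roughly 0.10 and 0.30: the model has memorized the historical
# disparity and now reports it back as a "prediction."
```

The model is behaving exactly as designed: it found the strongest pattern in its training data. The problem is that the pattern is a record of past enforcement, not of underlying behavior.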
Such a system is unable to take into account all of the systemic biases that have ensured blacks’ relatively higher incarceration rates. And the only data we currently have to train A.I. systems is data that, however objective it looks on the surface, inherently expresses societal norms and biases.
Finding better data will be exceptionally hard. Even if we programmed an A.I. system to ignore race and use different measures when predicting future criminality, the results would likely come out the same. Consider the other attributes convicts might share: living in particular neighborhoods, coming from single-parent families, or not graduating from high school. All of these categories would essentially act as proxies for race, because machine learning exploits whatever correlations the data contains and cannot untangle these attributes from the history of discrimination that links them to race.
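A sketch of that proxy effect, continuing the synthetic setup above. The “neighborhood” variable and its 80 percent overlap with group membership are invented numbers standing in for residential segregation; the point is only that withholding the protected attribute does not remove the disparity.

```python
# Toy sketch again, with invented data: withhold the protected attribute
# and the disparity survives through a correlated proxy.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 10_000

group = rng.integers(0, 2, size=n)  # protected attribute, withheld below

# A made-up "neighborhood" code that matches group 80% of the time,
# a stand-in for residential segregation.
neighborhood = np.where(rng.random(n) < 0.8, group, 1 - group)

# The same skewed historical labels as before.
arrested = (rng.random(n) < np.where(group == 1, 0.30, 0.10)).astype(int)

# Train "race-blind": the model never sees the group column.
model = LogisticRegression().fit(neighborhood.reshape(-1, 1), arrested)
scores = model.predict_proba(neighborhood.reshape(-1, 1))[:, 1]

for g in (0, 1):
    print(f"mean risk score for group {g}: {scores[group == g].mean():.2f}")

# The gap narrows but persists (roughly 0.16 vs. 0.24): the proxy
# carries most of the information the withheld attribute would have.
```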
In this way, the current generation of artificial intelligence is smart like a savant, but has nothing close to the discriminating intelligence of a human.
A.I. shines at pattern-matching tasks with objectively verifiable outcomes. Playing Go, driving a car down a street, and identifying a cancerous lesion in a mammogram are excellent examples of narrow A.I. These systems can be incredibly helpful extensions of how humans work and are already surpassing us in discrete parts of jobs. A tumor is a tumor, regardless of whether it is in the body of an Asian or a Caucasian patient. Because these systems base their judgments on objectively measurable data, they are readily correctable if and when our interpretations of those data are overhauled.