They will learn what we teach them.
Imagine ordering a car from a ride-sharing app on your smartphone and selecting a driverless option. As you approach a blind turn on a narrow street in your neighborhood, you come upon a stopped school bus. Children are crossing the road! On the sidewalk in the only place the car can swerve stands your golden retriever!
There’s no avoiding a crash, and you brace for impact. But into what?
Self-driving cars must be trained to make ethical decisions, just as humans do, though we accomplish that feat through a murky blend of experience, morality and instinct.
We’ve reached a point where it is of enormous consequence what data is used to train cars to make life-and-death decisions, what biases that data might contain and how the machine-learning algorithms that control those decisions are tuned.
How those types of questions are answered may turn out to be one of the most complex and meaningful debates in human history.
Far from being theoretical considerations for future generations, they’re here, right now.
Both Lyft and Uber are testing self-driving cars today, and it’s up to data scientists to provide the first, and perhaps most fundamental, answers to these questions. Many more decisions are shaped by the same forces: the ads you see, the news you read and hear, the friends you make, whom you’re matched with when you date online, what you read and the movies you watch.
Even questions of gender equality and social justice are increasingly entangled with an emerging field known as machine ethics.
For example, a 2015 Carnegie Mellon University study generated sets of fake online profiles for men and women and had those profiles automatically visit identical job sites with identical searches. The result?
Google suggested CEO jobs to the fake profiles for men five times as often as it did for women.
In many U.S. counties, computer algorithms are used to perform risk assessment to predict future violent behavior for criminal defendants. Some researchers have found these algorithms to be racially biased, yet they continue to be used by judges when considering sentences.
Other questions in the field are massive and potentially global.
As Elon Musk’s promised autonomous trucks roll out across the country, how do we deal with the sudden loss of millions of truck driving jobs?
If artificial intelligence and automation deliver on their promise to automate the future workforce, how will we confront the threat of widening wealth inequality when the algorithms that perform the work, owned and controlled by a wealthy few, displace the workers?
As machine algorithms are increasingly optimized to elicit a dopamine response in humans, how should we react?
Finally, there’s the question of what futurists call the singularity: a hypothetical point at which artificial intelligence begins improving itself so rapidly that its advance exceeds humankind’s ability to control it.
Machines will learn what we teach them. Will we show them how to be just, fair, compassionate and empathetic, or will we reveal our greed and avarice? Will our algorithms represent our best selves?
Ultimately, the soft underbelly of machine learning and artificial intelligence is the bias inherent in the data we feed them. We will be judged by the tools we build to serve ourselves, so perhaps it is time to ask ourselves as we birth these new machines: what will we see in ourselves? If we are Narcissus, staring into our technological reflection, what will we see staring back?
All important questions to consider as you take your next ride-sharing car home from the airport—driver or not.
Michael Place is a writer and open-source software developer in Salt Lake City, Utah.
For more thoughts on this topic, visit the Salt Lake City Public Library on August 31 at 7 p.m. in the Nancy Tessman Auditorium for When Algorithms Decide Your Fate, a talk by University of Utah computer science professor Suresh Venkatasubramanian. The lecture is the first in a series on the impact of computer science on our society and culture.