Over the course of a generation, algorithms have gone from mathematical abstractions to powerful mediators of daily life. They have made our lives more efficient, more entertaining, and, sometimes, better informed. At the same time, complex algorithms are increasingly violating the basic rights of individual citizens.
The AT&T Policy Forum hosted a virtual fireside chat, led by Ina Fried, Chief Technology Correspondent at Axios, with Michael Kearns and Aaron Roth, Professors of Computer and Information Science at the University of Pennsylvania and co-authors of The Ethical Algorithm: The Science of Socially Aware Algorithmic Design, to learn about their work on the front lines. The authors explained how we can better embed human principles into algorithms without halting the advance of data-driven scientific exploration.
Read Professor Kearns’ paper “Data Intimacy, Machine Learning, and Consumer Privacy.”
“What is the Ethical Algorithm and Can It Make the Internet Fairer?”
In the conversation, moderated by Fried, Kearns and Roth addressed these important questions.
In today’s world, machine learning, a branch of artificial intelligence (AI), is used to make decisions about increasingly important and personal aspects of our lives. Systems that once predicted the weather and served advertisements now help inform decisions about getting a loan or a job, admission to college, even eligibility for bail or parole. While humans retain ultimate decision-making power for many of the most consequential choices, the trend toward full automation is clear, and society is further down this road than many people realize.
Professor Kearns argues that addressing algorithmic bias requires building algorithms more thoughtfully from the beginning. Algorithms are designed according to principles of computer science, with a focus on accuracy of results. Even when there is no intent to promote discrimination or unfairness, algorithms that are not carefully designed to reflect values such as privacy and fairness will produce unintended consequences. The challenge is to translate these abstract values into concrete parameters of algorithmic design, so that outcomes better reflect them and unanticipated side effects are avoided.
For instance, a facial recognition algorithm that prioritizes accuracy alone may have higher error rates for women and people of color. Broadly, the solution is to accept small losses in overall accuracy in exchange for larger gains in fairness by adding constraints to the algorithm – in other words, adjusting the “knob” of the algorithm to target something other than raw accuracy alone.
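The “knob” can be made concrete with a small sketch (the synthetic scores, group names, and grid search here are invented for illustration; this is not the authors’ code). A single parameter `lam` trades overall error against the gap in false positive rates between two groups by choosing a score threshold for each group:

```python
import random

random.seed(0)

# Synthetic scores, assumed for illustration: group B's negative
# examples score higher, so an accuracy-only threshold gives B a
# higher false positive rate than group A.
def sample(neg_mean, n=4000):
    data = []
    for _ in range(n):
        y = random.random() < 0.5          # true label
        mu = 0.7 if y else neg_mean        # positives score around 0.7
        data.append((random.gauss(mu, 0.12), y))
    return data

group_a, group_b = sample(0.30), sample(0.45)

def stats(data, t):
    """Return (error rate, false positive rate) at threshold t."""
    errors = sum((s >= t) != y for s, y in data)
    negs = [s for s, y in data if not y]
    fpr = sum(s >= t for s in negs) / len(negs)
    return errors / len(data), fpr

thresholds = [i / 100 for i in range(30, 71)]
stat_a = {t: stats(group_a, t) for t in thresholds}
stat_b = {t: stats(group_b, t) for t in thresholds}

results = {}
for lam in (0.0, 1.0, 5.0):  # lam is the fairness "knob"
    ta, tb = min(
        ((x, y) for x in thresholds for y in thresholds),
        key=lambda ts: (stat_a[ts[0]][0] + stat_b[ts[1]][0]) / 2
        + lam * abs(stat_a[ts[0]][1] - stat_b[ts[1]][1]),
    )
    err = (stat_a[ta][0] + stat_b[tb][0]) / 2
    gap = abs(stat_a[ta][1] - stat_b[tb][1])
    results[lam] = (err, gap)
    print(f"lam={lam}: error={err:.3f}  FPR gap={gap:.3f}")
```

In this toy run, raising `lam` shrinks the gap in false positive rates at a small cost in overall error – the tradeoff the authors describe.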
Different types of AI applications can produce different types of harm. For instance, for an algorithm predicting criminal recidivism, a false positive is the most harmful type of mistake; for one evaluating a loan application, a false negative is more harmful. In these and other examples there is an unavoidable tension between accuracy and fairness, a tension inherent in the increased use of machine learning and algorithms.
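The two error types can be made concrete with a short sketch (the labels and predictions below are invented for illustration):

```python
# Hypothetical true outcomes and model predictions (1 = positive class).
y_true = [1, 0, 0, 1, 0, 1, 0, 0, 1, 0]
y_pred = [1, 1, 0, 0, 0, 1, 0, 1, 1, 0]

# A false positive predicts 1 when the truth is 0; a false negative
# predicts 0 when the truth is 1. Which mistake is costlier depends
# on the application: the chat's examples were recidivism prediction
# (false positives most harmful) and loan decisions (false negatives
# most harmful).
false_pos = sum(p == 1 and t == 0 for p, t in zip(y_pred, y_true))
false_neg = sum(p == 0 and t == 1 for p, t in zip(y_pred, y_true))

fpr = false_pos / y_true.count(0)  # share of true 0s wrongly flagged
fnr = false_neg / y_true.count(1)  # share of true 1s wrongly missed
print(f"FPR={fpr:.2f}  FNR={fnr:.2f}")  # → FPR=0.33  FNR=0.25
```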
The scope of problems computer science alone can solve is narrow. Without stakeholder engagement in discussions about ethical design, algorithms may exacerbate existing inequalities. Of course, for critical social problems such as racial injustice, more holistic social and public policy reforms are needed.
For their part, policymakers should not become involved in specifying code – but they do have a role in understanding algorithms, recognizing that what appear to be the easiest fixes may have unintended consequences, and doing the hard work of thinking deeply about future regulation.
In the era of machine learning, inferences from data can be as powerful as the data itself and can raise serious questions of personal privacy. For example, a collection of “likes” across social media platforms can accurately predict intimate details about a person. In the algorithmic era, we may need to reexamine conceptions of “privacy” and ways to educate consumers.
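One tool the book points to for this class of problem is differential privacy. A minimal sketch of its classic building block, randomized response (a standard textbook technique, not the authors’ code, with invented survey data), shows how individual answers can be protected while population-level statistics remain recoverable:

```python
import random

random.seed(1)

# Randomized response: each respondent answers truthfully with
# probability 3/4 and flips their answer otherwise. Any single "yes"
# is deniable, yet the population rate can still be estimated.
def randomized_response(truth, p_truth=0.75):
    return truth if random.random() < p_truth else not truth

# Hypothetical sensitive attribute held by ~30% of respondents.
true_answers = [random.random() < 0.3 for _ in range(100_000)]
noisy = [randomized_response(t) for t in true_answers]

observed = sum(noisy) / len(noisy)
# Invert the noise: observed = p*rate + (1-p)*(1-rate), so
# rate = (observed - (1-p)) / (2p - 1).
p = 0.75
estimate = (observed - (1 - p)) / (2 * p - 1)
print(f"observed={observed:.3f}  estimated true rate={estimate:.3f}")
```

The raw noisy count is far from the true rate, but the debiased estimate recovers it closely, so the aggregate statistic survives while each individual record stays protected.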
An initial challenge is determining the types of algorithmic behavior that society wishes to avoid. This is a precursor to effective regulation.
Can an inanimate program – an algorithm – somehow be “ethical”? Yes – but it will take collaboration among data scientists, policymakers and other stakeholders in a conversation that balances predictive accuracy with social impact and, where necessary, preserves a role for humans as ultimate decision makers.