Algorithmic power corrupts algorithmically.
Who should get a job interview? Who should get approved for a loan? Who should get matched up on a dating app? We cede these decisions to imperfect, sometimes biased chunks of code — often with no way to check an algorithm’s work, let alone its power.
In fact, the best algorithms are sufficiently sophisticated that programmers can’t always tell for certain how and why they reach the conclusions they do, Wharton professor Kartik Hosanagar writes in his important new book “A Human’s Guide to Machine Intelligence.” “A.I. scientists often have no way to know what’s going on under the hood,” he says.
Hosanagar calls himself a “net optimist” when it comes to the role of machine learning in our lives but fears that widespread complacency and ignorance on the subject are dangerous. “Having only a vague notion of how algorithms function is no longer sufficient for responsible citizens, consumers, and professionals,” he says.
His key insight in researching the book didn’t come from a computer scientist or even a political scientist, but from Bob, a museum guide at the National Constitution Center in Philadelphia.
Standing in Signers’ Hall, among the life-sized statues of the men responsible for the soaring ideals and sordid compromises that forged our nation, Hosanagar looked past Hamilton, Madison and Franklin and spotted three figures standing off to the side. Bob explained that these were the dissenters, who feared the new federal government would become too powerful. Those fears led, eventually, to the Bill of Rights.
“Well, today the power is with the big corporations,” Hosanagar told Bob.
“Maybe we need a new Bill of Rights to deal with that,” Bob replied — and Hosanagar decided to take up the challenge.
Chief among the rights Hosanagar says should be codified is the right to know why algorithms decide what they decide. Individuals should be able to request and receive such an explanation, and firms should be required to fully audit their data, Hosanagar told MarketWatch. There should also be some sort of regulatory “algorithm safety board” to provide oversight, he says.
“Transparency has a huge impact on whether people are going to be able to accept how these decisions are made,” he said. “Research suggests that we expect more transparency from AI than from humans — and that we are more willing to forgive human errors than algorithmic errors.”
The stakes are also higher: A doctor who makes bad decisions can impact perhaps thousands of patients, while a medical algorithm could harm millions, he says. The ability to audit the decisions, to create a sort of black box for algorithms, is therefore crucial for situations ranging from the role of algorithmic trading in the Flash Crash, to more literal crashes by self-driving cars or planes.
Users, according to Hosanagar, also should have a right to influence algorithmic performance through feedback.
“It can be as limited and straightforward as giving a Facebook user the power to flag a news post as potentially false,” he said. “It can be as dramatic and significant as letting a passenger intervene when he is not satisfied with the choices a driverless car appears to be making.”
Can Facebook and other tech companies be counted on to self-regulate here? No, said Hosanagar. “The danger is too much power will be concentrated in a few companies controlling the AI. On the other hand, overregulate and you risk stifling innovation.”
The only hope, he said, is that policy makers and the public get better educated on algorithms. His book is a worthy starting point.