Imagine a couple of caffeine-addled biochemistry majors late at night in their dorm kitchen cooking up a new medicine that proves remarkably effective at soothing colds but inadvertently causes permanent behavioral changes. Those who ingest it become radically politicized and shout uncontrollably in casual conversation. Still, the concoction sells to billions of people. This sounds preposterous, because the FDA would never let such a drug reach the market.
Olaf J. Groth is founding CEO of Cambrian Labs and a professor at Hult Business School. Mark J. Nitzberg is executive director of the Center for Human-Compatible AI (CHAI) at UC Berkeley. Groth and Nitzberg are coauthors of Solomon’s Code: Humanity in a World of Thinking Machines (2018). Stuart J. Russell is a computer science professor at UC Berkeley, director of CHAI, and author of Human Compatible: AI and the Problem of Control (2019).
Yet this madness is happening everywhere online. Every day, we view streams of content custom-selected by simple software algorithms, some of them written in dorm rooms, built on a technique called adaptive reinforcement learning. With each click, the algorithms learn to personalize the feed to their users’ tastes, thereby reaping profits for their owners. But the designers made a simple mistake: They assumed that human tastes are fixed. In reality, algorithms applied to malleable humans can have drastically different and pernicious side effects on a global scale. They modify our tastes to make us ever more predictable, edge us toward extremes, and ultimately erode civility and trust in society. It is time we stop blithely allowing this and create the digital equivalent of drug trials.
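The feedback loop described above can be seen in a toy simulation. The sketch below is not any platform’s actual system; the bucket structure, learning rate, and drift rate are invented for illustration. An epsilon-greedy recommender learns which content “bucket” a simulated user clicks on, while the user’s tastes drift toward whatever is shown, so the loop converges on a single bucket and the user becomes perfectly predictable:

```python
import random

def simulate_feed(steps=2000, lr=0.05, drift=0.02, seed=0):
    """Toy adaptive recommender interacting with a malleable user.

    The algorithm updates its value estimates from clicks (reinforcement
    learning); the user's click probabilities drift toward whatever is
    shown, violating the designers' fixed-tastes assumption.
    """
    rng = random.Random(seed)
    n_items = 5                    # content buckets (hypothetical)
    q = [0.0] * n_items            # algorithm's estimated click rates
    prefs = [0.5] * n_items        # user's true click probability per bucket
    counts = [0] * n_items         # how often each bucket was shown

    for _ in range(steps):
        # Epsilon-greedy: mostly show the bucket believed best.
        if rng.random() < 0.1:
            a = rng.randrange(n_items)
        else:
            a = max(range(n_items), key=lambda i: q[i])

        reward = 1.0 if rng.random() < prefs[a] else 0.0
        counts[a] += 1
        q[a] += lr * (reward - q[a])          # incremental value update

        # Malleable user: exposure strengthens taste for the shown bucket
        # and slightly weakens taste for everything else.
        prefs[a] = min(1.0, prefs[a] + drift)
        for i in range(n_items):
            if i != a:
                prefs[i] = max(0.0, prefs[i] - drift / 4)

    return prefs, counts
```

Running this, the user starts with uniform tastes (0.5 everywhere) and ends with one taste pinned near 1.0 and the rest collapsed toward 0, with the recommender showing that one bucket almost exclusively: the algorithm has not discovered a preference so much as manufactured one.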
Intelligent systems at scale need regulation because they are an unprecedented force multiplier for the promotion of the interests of an individual or a group. For the first time in history, a single person can customize a message for billions and share it with them within a matter of days. A software engineer can create an army of AI-powered bots, each pretending to be a different person, promoting content on behalf of political or commercial interests. Unlike broadcast propaganda or direct marketing, this approach also uses the self-reinforcing qualities of the algorithm to learn what works best to persuade and nudge each individual.
Manipulating user preferences and deploying bot armies to spread deceit at scale has disrupted societal cohesion and democratic processes. To protect the cognitive autonomy of individuals and the political health of society at large, we need to make the function and application of algorithms transparent, and the FDA provides a useful model.
The US Food and Drug Administration requires controlled testing on animals to establish safety, and then more testing on small populations to establish efficacy. Only then can a company offer a new drug to the masses. Software, by contrast, is typically subjected only to “unit tests,” to ensure new lines of code perform as expected, and “integration tests,” to ensure the updates don’t degrade the system’s performance. This is like checking a drug for contaminants without testing the effects of its active ingredients.
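The gap between those two kinds of testing is easy to make concrete. The sketch below is hypothetical, not drawn from any real codebase: a typical unit test for the kind of incremental update a feed algorithm applies after each click verifies only that the arithmetic is correct, which is precisely the contaminant check, not the active-ingredient trial.

```python
def update_estimate(q, reward, lr=0.05):
    """One incremental value update of the kind a feed algorithm
    might apply per click (illustrative, not a real system's code)."""
    return q + lr * (reward - q)

# A typical "unit test": the code does exactly what the engineer expected.
assert abs(update_estimate(0.0, 1.0) - 0.05) < 1e-9
assert abs(update_estimate(0.5, 0.0) - 0.475) < 1e-9
# Both pass. Nothing here asks what millions of such updates,
# compounded through a feedback loop, do to a person's tastes.
```

The tests pass, the release ships, and the question an FDA-style trial would ask, what the system does to the population exposed to it, is never posed.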
Of course, we cannot assemble a traditional FDA-style review panel to weed out deliberately false content in every article online. But we do already have tools for both platforms and users to detect falsehood and screen out dubious sources, including reputation systems, third-party raters, notary-like institutions, traceability, and, as is now the case in California, a regulation that requires bots to self-identify as such and makes it illegal to deploy bots that knowingly deceive others in an attempt to encourage a purchase or influence a vote.