Renee DiResta (@noUpside) is an Ideas contributor for WIRED, the director of research at New Knowledge, and a Mozilla fellow on media, misinformation, and trust. She is affiliated with the Berkman-Klein Center at Harvard and the Data Science Institute at Columbia University.
Over in Amazon’s Oncology category, a book with a Best Seller label suggests juice as an alternative to chemotherapy. For the term “cancer” overall, coordinated review brigading appears to have ensured that “The Truth About Cancer,” a hodgepodge of claims about, among other things, government conspiracies, enjoys 1,684 reviews and front-page placement. A whopping 96 percent of the reviews are 5 stars—a measure that many Amazon customers use as a proxy for quality. However, a glance at ReviewMeta, a site that aims to help customers assess whether reviews are legitimate, suggests that over 1,000 of them may be suspicious based on their timing, language, and reviewer behavior.
Once relegated to tabloids and web forums, health misinformation and conspiracies have found a new megaphone in the curation engines that power massive platforms like Amazon, Facebook, and Google. Search, trending, and recommendation algorithms can be gamed to make fringe ideas appear mainstream. This is compounded by an asymmetry of passion: truther communities produce content prolifically, giving the algorithms far more of it to serve up … with, it seems, real-world consequences.
A recent resurgence of measles outbreaks has the World Health Organization, the Centers for Disease Control and Prevention, and the US Congress questioning the impact of anti-vaccine misinformation on public health. Investigative journalists and academics have been examining one facet of this problem: what curation algorithms are doing to tamp down—or spread—health misinformation online. What they’re finding isn’t encouraging.
Curation algorithms are largely amoral. They’re engineered to show us things we are statistically likely to want to see, content that people similar to us have found engaging—even if it’s stuff that’s factually unreliable or potentially harmful. On social networks, these algorithms are optimized primarily to drive engagement. On Amazon, they’re intended to drive purchases. Amazon has several varieties of recommendation engine on each product page: “Customers also shopped for” suggestions are distinct from “customers who bought this item also bought.” There are “sponsored” products, which are essentially ads. And there’s “frequently bought together,” a feature that links products across categories (often very useful, occasionally somewhat disturbing). If you manage to leave the platform without purchasing anything, an email may follow a day later suggesting even more products.
Amazon shapes many of our consumption habits. It influences what millions of people buy, watch, read, and listen to each day. It’s the internet’s de facto product search engine—and because of the hundreds of millions of dollars that flow through the site daily, the incentive to game that search engine is high. Making it to the first page of results for a given product can be incredibly lucrative.
Unfortunately, many curation algorithms can be gamed in predictable ways, particularly when popularity is a key input. On Amazon, this often takes the form of dubious accounts coordinating to leave convincing positive (or negative) reviews. Sometimes sellers outright buy or otherwise incentivize review fraud; that’s a violation of Amazon’s terms of service, but enforcement is lax. Sometimes, as with the anti-vax movement and some alternative-health communities, large groups of true believers coordinate to catapult their preferred content into the first page of search results.
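To see why popularity-weighted curation is so gameable, consider a toy ranking score in which star rating is weighted by review volume. This is purely illustrative—Amazon does not disclose its actual formula, and the function below is an assumption for the sake of the sketch:

```python
import math

def popularity_score(avg_stars: float, review_count: int) -> float:
    """Toy ranking score: star rating weighted by the log of review volume.
    (Hypothetical formula -- Amazon's real algorithm is not public.)"""
    return avg_stars * math.log1p(review_count)

# A legitimate title with a modest number of organic reviews...
organic = popularity_score(4.2, 150)

# ...versus a fringe title boosted by well over 1,000 coordinated
# 5-star reviews, as review-brigading campaigns appear to do.
brigaded = popularity_score(4.8, 1684)

# Because review volume feeds directly into the score, a coordinated
# campaign can push fringe content above better-vetted titles.
print(brigaded > organic)  # True
```

Any ranking function that treats raw review counts as a signal of quality inherits this weakness: the input is cheap for a motivated group to manufacture.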
Amazon reviews appear to figure prominently in the company’s ranking algorithms. (The company will not confirm this.) Customers consider the number of stars and volume of reviews when deciding which products to buy; they’re seen as a proxy for quality. High ratings can lead to inadvertent free promotion: Amazon’s Prime Streaming video platform launched with a splash page that prominently featured Vaxxed, Andrew Wakefield’s movie devoted to the conspiracy theory that vaccines cause autism.
Perhaps compounding the problem, Amazon allows content creators to select their own categories and keywords. While writing this article, I experimented with the listing tool for Kindle books; the keywords and categories are entirely self-selected.
With a product base as large as Amazon’s, it’s probably a challenge for the company to undertake any kind of review process. This is likely why quackery shows up classified as “Oncology” or “Chemistry.” It’s a small reminder that Amazon isn’t exactly a bookstore or library.
It’s a complex and thorny problem, and Amazon isn’t the only platform grappling with it. There are concerns that tackling this problem could lead to censorship. But there’s a big gulf between outright removing content, or refusing to sell books, and rethinking amplification and categorization. Amazon can start by doing better when it comes to recommending and categorizing pseudoscience that may have a significant impact on a person’s life (or on public health). In recent months, YouTube and Facebook have begun to shift their policies to address health misinformation and conspiratorial communities: YouTube has both demonetized and downranked anti-vax content, and Facebook has made a statement implying that it’s likely to follow suit. Google, to its credit, has long had a policy for Search called “Your Money or Your Life,” which recognizes that when people are searching for information about highly impactful topics, it has a responsibility to hold those results to a higher standard of care. Users looking to buy health books on Amazon should be afforded the same standard.
No major platform is immune to problems with gameable algorithms. But Amazon in particular—with its massive audience and extraordinary revenue—is remarkable for how little it has changed despite numerous investigations of quackery and review manipulation over the years. It simply ignores the problem and waits for the next press cycle.
Amazon has recently taken incremental steps toward curbing health misinformation, but primarily only when under significant pressure. The company responded to a letter from US Representative Adam Schiff (D–California) by pulling a few anti-vaccine documentaries from Prime Streaming. That’s a good start, but it’s not enough. The real-world impact of health misinformation makes the stakes too high and too important to ignore. Amazon needs to recognize that its ranking and recommendation engines have far-reaching influence—and that a misinformation pandemic can induce a different kind of virality.