“There can be no liberty for a community which lacks the information by which to detect lies.”
– Walter Lippmann, Liberty and the News, 1920
The Algorithms
Everyone knows that the infamous social media algorithms promote the spread of false and misleading information on the Internet. But could they be engineered to do the opposite?
We can’t prevent people from lying. But we can’t blame the algorithms for the spread of misinformation on the one hand, and blame everything on human nature on the other. We know that the algorithms somehow change the nature of online discourse for the worse. So if they can be accidentally engineered to promote lies, couldn’t they be intentionally engineered to promote truth?
Moderation is Not the Answer
“A moral monopoly is the antithesis of a marketplace of ideas.”
– Thomas Sowell
I don’t suggest simply training the algorithms to identify and delete misinformation. That is just computer-assisted moderation (or censorship). An artificially intelligent moderator will reflect the beliefs and values of its creators, making the people who own a social platform the ultimate arbiters of truth and morality. So the algorithms should be neutral, holding no opinion about what is true or what is right.
Instead, we need to fix the problems at the root by creating algorithms that give truth the advantage: that create an environment where accurate information tends to spread more easily than lies.
But how can technology give the advantage to truth without having an opinion about what is true?
In the same way that today’s social networks give the advantage to falsehood without having an opinion about what is false. By creating a feedback loop.
The Law of Attention
“What information consumes is rather obvious: it consumes the attention of its recipients.”
– Herbert Simon
As I argue in The Law of Attention, all online communities come to be dominated by the behaviors that are rewarded with attention. Even if you are not particularly motivated by attention, if nobody on social media pays any attention to you, you will eventually stop posting.
When social media algorithms optimize for engagement, they reward engaging content with attention. Unfortunately, outrageous lies and other controversial content are often the most engaging¹. Some people learn to play the game by sparking conflict and evoking fear and outrage. Others stop posting.
So the algorithms don’t just direct attention; they influence which behavior comes to dominate an online platform. And if social platforms allocated attention based on different metrics, they would create different feedback loops, causing different behaviors to dominate.
But what other types of feedback loops can be created? What kind of behaviors can be promoted?
The Honest and Informed Opinion
“You are not entitled to your opinion. You are entitled to your informed opinion. No one is entitled to be ignorant.”
– Harlan Ellison
In Truthtelling Games, I show how game theory can be used to establish an equilibrium at honest behavior: a situation where rational individuals are better off revealing what they honestly think is true, because other people are doing the same. In The Deliberative Poll, I propose a method for encouraging productive discussion and identifying the most informed opinions.
A combination of a Truthtelling Game and a Deliberative Poll can result in a virtuous feedback loop where people win attention by promoting honest and informed opinions.
The Marketplace of Ideas
“If we do not have the capacity to distinguish what’s true from what’s false, then by definition the marketplace of ideas doesn’t work. And by definition our democracy doesn’t work.”
– Barack Obama, 2020
The honest and informed opinion of a group of people on the Internet is not guaranteed to be the truth. But I think it can bring us closer to the ideal of a free marketplace of ideas, where for an idea to succeed, its supporters must defend it with reasons that people honestly believe are good ones.
Today, the marketplace of ideas is held in public forums on the Internet, but the rules that govern the marketplace are broken. We don’t have to throw up our hands and accept this as an inevitable consequence of technology, or of human nature.
If the platforms were designed differently, they could fulfill the original promise of the Internet: to democratize knowledge. To provide society with a tool for collectively discovering information, making sense of that information, and realizing humanity’s potential for collective intelligence.
On social-protocols.org, we post about our work on deliberative consensus protocols and other social protocols for improving conversations on the internet. Currently, we are developing the Global Brain algorithm, which we are integrating into a prototype of a new social network. The Global Brain algorithm analyzes the upvotes and downvotes in a threaded conversation, taking into account which other comments each user had seen before voting, in order to identify the most influential comments and to determine the informed opinion of users who have seen all of them.
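The vote-and-visibility bookkeeping described above can be sketched in a few lines of Python. This is only an illustration, not the actual Global Brain implementation: the `Vote` structure, the function names, and the simple upvote-rate statistics are all assumptions made for the sake of the example.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Vote:
    user: str             # who voted
    target: str           # the comment being voted on
    value: int            # +1 for an upvote, -1 for a downvote
    seen: frozenset       # comments the user had seen before voting

def upvote_rate(values):
    """Fraction of upvotes among a list of vote values."""
    return sum(1 for v in values if v > 0) / len(values) if values else None

def opinion(votes, target):
    """Overall opinion: upvote rate among all voters on `target`."""
    return upvote_rate([v.value for v in votes if v.target == target])

def informed_opinion(votes, target, influential):
    """Opinion restricted to voters who had seen every influential comment."""
    return upvote_rate([v.value for v in votes
                        if v.target == target and influential <= v.seen])

def influence(votes, target, comment):
    """How much seeing `comment` shifts the vote on `target`:
    the difference in upvote rate between voters who saw it
    and voters who did not."""
    saw = [v.value for v in votes
           if v.target == target and comment in v.seen]
    not_saw = [v.value for v in votes
               if v.target == target and comment not in v.seen]
    if not saw or not not_saw:
        return 0.0
    return upvote_rate(saw) - upvote_rate(not_saw)
```

For example, if a post is upvoted mostly by users who never saw a particular reply, while users who did see that reply downvote it, `influence` is strongly negative and `informed_opinion` diverges from `opinion` — exactly the gap the algorithm is meant to surface.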
Originally posted on social-protocols.org.