Social media are doomsday machines. They distract, divide, and madden; we can no longer hear each other, speak coherently, or even think. As a result, our social, civic, and political ligaments are dissolving.

Everywhere, people consult their screens to affirm what they already think and repeat what like-minded people have already said. They submit to surveillance and welcome algorithmic manipulation. Believing absurdities, they commit injustices. A few lose their minds altogether. We’ve done a number on ourselves. Everyone knows it, even technology’s salesmen. Tim Cook, Apple’s chief executive, ruefully told a privacy conference last October that platforms and algorithms, which many hoped would enlarge the best in humanity, had liberated the worst.

Jason Pontin (@jason_pontin) is an Ideas contributor for WIRED. He was formerly the editor in chief and publisher of MIT Technology Review; before that he was the editor of Red Herring. Now he is a senior partner at Flagship Pioneering, a firm in Boston that funds companies that solve problems in health, food, and sustainability. Pontin does not write about Flagship’s portfolio companies or about their competitors.

I know it now too. I see the lies and self-love, cruelty and credulousness, and can trace their causes. But for a long time, I was a free-speech maximalist—someone who believed humanity needed as much free speech as it could bear. I believed in a marketplace of ideas, where bad ideas would be defeated and good ideas refined. In this, I was conventionally liberal. My inspiration was John Stuart Mill and his “philosophic text-book of a single truth,” On Liberty (1859). That truth, now called simply “Mill’s harm principle,” states: “The only purpose for which power can be rightfully exercised over any member of a civilized community is to prevent harm to others.”

I once invoked the ghost of Mill in an essay, “Free Speech in the Era of its Technological Amplification,” to argue that the harm principle could not only define the legal limits to speech in nations that protect free expression (briefly: American law protects advocacy of illegal action up to the point where a serious crime is “imminent” and “likely”), but also guide the terms of service of internet companies, which can censor any speech they choose. Harm, I confidently declared, should be the sole standard that internet companies use to decide what to ban, where “harm” meant physical and some commercial injuries, but excluded personal, religious, or ideological offense. I was wrong.

I thought nothing very bad could happen when men and women said what they wished. It’s hard to believe that today. In a catalog wearying to relate, in just 72 hours last month we learned that Cesar Sayoc was radicalized on Facebook and threatened others on Twitter, before he sent pipe bombs to more than a dozen of President Trump’s critics; that Robert Bowers shared conspiracy theories and anti-Semitic messages on Gab, before he allegedly killed 11 people and injured six others; and that Jair Bolsonaro, a right-wing Brazilian politician best known for his pullulating hatreds (for homosexuals, Afro-Brazilians, women—pluralism itself), waged a campaign of disinformation on WhatsApp, before he won election as president. Perhaps all three, even Bolsonaro, were nuts before social media; but social media licensed their malevolence in different ways.

Last month’s outrages deepened the general mood of dismay about the mechanics and influence of social media. No one is proud of their online addictions, but those addictions now seem consequential. Increasingly, research supports the intuition that hateful speech leads to hateful acts. In reaction, some of those who know social media best have rejected the liberal tradition, and suggest that since we cannot regulate our behaviors, the companies that incite and reward bad behavior should be better regulated.

Writing in WIRED magazine last January, Zeynep Tufekci, a social scientist who studies the effects of emerging technologies, observed that “John Stuart Mill’s notion that a ‘marketplace of ideas’ will elevate the truth is flatly belied by the virality of fake news,” and demanded “political decisions” with “huge trade-offs.” My colleague at WIRED Ideas, Renee DiResta, the director of research at New Knowledge, which protects companies from social media disinformation attacks, reminded “pundits and politicians howling about censorship” that “free speech does not mean free reach,” and insisted we “hold tech companies accountable … and demand transparency” about how their algorithms and moderation work.

Unenthusiastic about regulation, and in response to their customers’ demands for more genial online experiences, internet companies have promised to change. Last January, Facebook agreed to a voluntary “code of conduct” with the European Union, under which it would remove “hate speech” within 24 hours. Globally, the social network removed 280,000 posts that violated its hate speech standards in each month of 2017. Twitter, for its part, renounced its claim to be “the free speech wing of the free speech party.” Speaking to British parliamentarians earlier this year, Sinead McSweeney, the company’s vice president of public policy for Europe, the Middle East, and Africa, testified that a “John Stuart Mill-style philosophy” was inadequate to the times: “It’s no longer possible to stand up for all speech in the hopes society will become a better place because racism will be challenged, or homophobia … or extremism will be challenged. We … have to take steps to limit the visibility of hateful symbols, to ban people from the platform who affiliate with violent groups.” In September, Twitter adopted new guidelines prohibiting “dehumanizing speech.” The maxim is now: “Be sweet when you tweet.”

In October, I became part of the debate about speech when Breitbart News published a leaked Google memo in which I was cited first among “leading thinkers in this space.” The far-right site offered the 85-page document as proof that Google’s algorithms suppressed conservative voices, a common suspicion on the right. (Trump has tweeted “Google & others are… hiding information and news that is good. They are controlling what we can & cannot see.”) The leak was taken up by other right-wing sites and Russian state media: “Google now plays ‘Good Censor’ for civility’s sake, leaked internal briefing confirms,” RT.com shrilled.

In reality, my involvement was minor: a British research firm, hired by Google, interviewed me a year ago. At a time when the search giant was wrestling with the launch of a heavily censored search engine and news site in China, codenamed Dragonfly, amid internal dissent and the resignation and recriminations of a senior researcher, and when it was struggling to moderate the videos and comments on YouTube, I helped the company think through how its algorithms amplified or muted what was said and heard. If Mill’s vision of a marketplace of ideas was utopian, Google didn’t want its code to contribute to a digital dystopia, in China or elsewhere.

The memo’s eight recommendations were rational and modest (although Google was quick to clarify they were not corporate policy, adding: “Google is committed to free expression—supporting the free flow of ideas is core to our mission”). The company should be more consistent: “Don’t take sides,” “police tone instead of content.” More transparent: “Enforce standards and policies clearly,” “explain the technology.” More responsive: “Improve communications,” “take problems seriously.” And it should be more “empowering”: provide “positive guidelines” and “better signposts.”

I could cheerfully support the suggestions, because they promoted impartiality. Mostly, they reflect the emerging consensus about speech among the internet companies. But like a Victorian without his God, I felt empty and anxious without a defensible principle: All I could hear was liberalism’s melancholy, long, retreating roar. I found a new principle in the writing of the philosopher Justin Khoo, a young professor at MIT, who recommends what he calls “discursive intolerance” as a way of rescuing Mill’s marketplace of ideas when the market throngs with scoundrels selling false goods and making debased change, and anarchists labor in cellars on bombs that will explode the marketplace completely.

Khoo writes that “we must figure out how to get along despite our disagreements.” (It’s a theme I wrote about last May: “Four Rules for Learning How to Talk to One Another Again.”) In particular, he asks whether and how much we should tolerate views we dislike. In answering this question, Khoo also invokes Mill: “Given our fallibility, Mill thought that a well-functioning discursive marketplace of ideas—that is, a public forum where a diversity of views are evaluated on the basis of reason and evidence—was our best chance for knowledge and flourishing.” But Khoo, like me, thinks Mill subtly wrong: Tolerance, he argues, is not always the best way to secure a discursive marketplace. In the real world, we must sometimes impede the spread of opinions “without rationally engaging with them.”

To explain, Khoo provides an example, worth quoting at length: “Suppose there is a widely endorsed view, call it X, which states that members of certain groups (the X-targeted) are inferior, unknowledgeable, untrustworthy, or biased on certain issues. Suppose further that belief in X is so entrenched that rational arguments against X will fail for reasons having nothing to do with the epistemic merits of X itself. In such a case, the prevalence of X-beliefs in the community threatens its discursive marketplace: The voices of the X-targeted will be effectively silenced … and their views will not be evaluated on their merits. In this case, given the aim of securing a well-functioning marketplace of ideas, we have reason to be discursively intolerant toward X and X-believers. One might aim to create practical incentives not to advocate X.”

There are problems with discursive intolerance. In conversation, Khoo happily lists the difficulties. His idea is ambiguous: If “first-order” expressions of a belief are unbearable, can we allow “second-order” debates about whether a belief is so awful that it cannot be debated at all? It is risky: If believers in a very widely held view experience discursive intolerance, won’t they rebel, perhaps by exiting a community entirely? Most crucially, his idea is not a very reliable guide. How can fallible humans determine whether discursive intolerance is the appropriate response to a disagreement, except by expanding “harm” to include damage to the very abstract conception of the marketplace of ideas itself? “Harm,” unless put in a small, airless box, will tend to grow to mean anything contrary to the interests of the powerful. “I don’t have a worked-out view,” Khoo admits. “We can never be 100% certain. There is no neutral position.”

Applied to the contemporary scene, discursive intolerance has a more general problem. Internet platforms were not designed to be truth machines, and there’s no evidence that users want them to perform that function. But for all its problems, applied parsimoniously, discursive intolerance could define the responsibilities of online companies to their users, shareholders, and society at large. Just as the only political organizations democracies cannot tolerate are antidemocratic parties, so, perhaps, the only speech that is unacceptable on social media is speech meant to undermine the basic functions of the platforms themselves, including hate speech intended to drive out other users. Interestingly, Mill himself anticipated one aspect of Khoo’s argument: He thought insincerity—trolling—an unethical form of speech that no one was obliged to respect.

Discursive intolerance and a broadly rediscovered sobriety of conduct might have the additional benefit of reducing the supply of speech. Everyone is too loud; the volume is deafening. The old me, echoing Louis Brandeis, thought the antidote for falsities was “more speech,” but I now know that more speech achieves nothing when there is a surfeit of talk. Recently, I deleted 60,000 tweets dating back to 2007, when I wrote the first story in The New York Times about Twitter. I had earlier forsworn Facebook. I abandoned social media as my newsfeed. I gave up correcting people’s errors. I stopped showing off. Afterwards, I felt steadier, more orderly.

I don’t want speech to be less free, exactly. I want less speech absolutely and I want what is said to be less destructive. Less speech is more. Less speech, more coolly expressed, is what we all need right now—a little less goddamn talk altogether.

