A few weeks ago, we were talking about whether companies like Facebook and Twitter should ban all paid political advertising from their platforms. Now the debate has narrowed to a secondary question: Where political ads are allowed, should their micro-targeting be prohibited? Google is the first to make this restriction formal policy: The company announced on Wednesday that it will “stop allowing highly targeted political ads on its platform” and will let such messages be steered only toward broad interest categories. On Thursday, news broke that Facebook, too, may soon prevent “campaigns from targeting only very small groups of people.” These policy changes are designed to curb the harms of both disinformation and legitimate political advertising, but they may end up doing little good.

For Mark Zuckerberg in particular, one bombshell has chased the last. Last month, he proclaimed at Georgetown University that his company would take a general stance against censorship in the political context, including of any content, false or true, that politicians disseminate as advertisements on his platforms. That laissez-faire approach to content moderation was roundly vilified by technology experts and critics alike. A few days later, Twitter chief Jack Dorsey made an equally strong announcement, stating that his company was shuttering all political advertising on the platform, effective later this month. That is essentially the diametrical opposite of Facebook’s stated position. On November 4, Zuckerberg agreed to meet over dinner with civil rights advocates who had serious concerns about the potential for uncensored political advertising to undermine the interests of marginalized American communities.

Dipayan Ghosh (@ghoshd7) is the codirector of the Digital Platforms & Democracy Project and Shorenstein Fellow at the Harvard Kennedy School. He was a privacy & public policy adviser at Facebook and an Obama administration technology and economic policy adviser in the White House. He is the author of the forthcoming Terms of Disservice.

The following morning, it came out that Zuckerberg was at least mulling over a revised approach: allow political advertising disseminated by politicians, but disable their capacity to engage in “micro-targeting,” the practice by which advertisers slice user audiences on Facebook into narrow segments and subject each one to different advertising treatments. This is precisely the technique that President Donald Trump’s campaign director, Brad Parscale, took to the extreme when he managed Trump’s 2016 digital strategy. Now that Google has announced just such a policy this week, Zuckerberg is under even more pressure. Should he do something now, or wait and see whether the public outcry and pressure on Facebook blow over?

Even if Facebook does follow in Google’s footsteps, the shift in policy would not come close to protecting next year’s US presidential elections from the sort of coordinated disinformation operations that struck our nation three years ago.

Why? The prevailing business model at companies like Facebook and Google simply won’t allow it. These digital platforms are premised on the uninhibited collection of personal data from users, the operation of compelling platforms that arbitrarily shut out competitive threats, and the development of AI that curates our social feeds. It is no longer a secret that Facebook and Google have a keen interest in maximizing user engagement on their platforms. Greater engagement serves the companies’ larger goals: it unlocks more ad space that can be sold to marketers, which in turn lets Facebook and Google collect even more engagement data on their users and subject them to even more content that will keep them scrolling through social feeds, search engine results pages, and digital maps. To achieve such maximal engagement, Facebook and Google must show us relevant content and advertising, including political ads.

Now consider the new rules, proposed or implemented, against micro-targeting. Zuckerberg’s potential anti-targeting commitment would limit “the ability of candidates to target narrow groups of users.” This means that Parscale would no longer be able to pick and choose the segments of the American population that he wants to shower with his political messaging. But even under its new proposal, Facebook would maintain the right to direct such ads to whomever it pleases. This is critical, because it remains in Facebook’s commercial interest to do just that: to use AI to determine the content and nature of a given political ad, and then to analyze which pockets of the American population would engage the most with that content. Facebook can even use its opaque suite of machine-learning mechanisms to fuel that quest. The company would analyze our data to infer the kinds of political content to which we would be most likely to react, and then it would route political advertising in a way that optimizes systemwide engagement. Indeed, if Facebook did not do just this, if it did not disseminate political ads in a sophisticated, algorithmic manner, political advertisers would lose faith in the platform. They might even turn to other internet advertising exchanges and more traditional forms of media.
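To make the distinction concrete, here is a minimal, hypothetical sketch of what platform-side delivery optimization could look like once advertisers lose fine-grained targeting. The user model, scoring function, and every name in it are illustrative assumptions, not a description of Facebook’s actual systems; the point is only that the engagement-maximizing choice of who sees an ad shifts from the advertiser to the platform.

```python
# Hypothetical sketch: the advertiser supplies only a broad ad, and the
# platform itself decides which users see it by predicting engagement
# from its own behavioral data. All names and the scoring model are
# illustrative assumptions.

from dataclasses import dataclass

@dataclass
class User:
    user_id: int
    interests: dict  # platform-inferred topic scores from engagement history

def predicted_engagement(user: User, ad_topics: dict) -> float:
    """Crude engagement estimate: overlap between ad topics and user interests."""
    return sum(user.interests.get(topic, 0.0) * weight
               for topic, weight in ad_topics.items())

def deliver_ad(users: list, ad_topics: dict, impressions: int) -> list:
    """Platform-side optimization: show the ad to the users most likely to
    engage, regardless of how broadly the advertiser was allowed to target."""
    ranked = sorted(users, key=lambda u: predicted_engagement(u, ad_topics),
                    reverse=True)
    return [u.user_id for u in ranked[:impressions]]

if __name__ == "__main__":
    audience = [
        User(1, {"immigration": 0.9, "economy": 0.2}),
        User(2, {"healthcare": 0.8}),
        User(3, {"immigration": 0.4, "economy": 0.7}),
    ]
    # The advertiser no longer picks these users; the platform's model does.
    ad = {"immigration": 1.0, "economy": 0.5}
    print(deliver_ad(audience, ad, impressions=2))  # -> [1, 3]
```

In this toy setup the banned behavior (an advertiser hand-selecting narrow audiences) never occurs, yet the ad still reaches the users most likely to react to it, because the platform’s own engagement optimization does the narrowing.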