I recently came across two tweets—or rather, thousands of tweets sharing the same two ideas over and over again.

The first batch targeted elected officials and read: “SNAP has helped lift millions out of poverty, but millions of Americans living in poverty today still depend on it. Help make poverty history and reject proposals to cut SNAP funding in the upcoming Farm Bill! #SaveSNAP”

The second batch assailed a politician in a Texas Senate race: “I choose @tedcruz for #TXSen because unlike @betoorourke, I think refusing to stand for the national anthem is disrespectful to those who have sacrificed so much for America. #ChooseCruz #IStand”

All the tweets had identical content, including abundant hashtags and emoji. It’s easy to assume that these tweet salvos were the work of political bots spreading the kind of content mentioned during recent, high-profile congressional hearings about misinformation. But, in fact, these tweets originated from two legitimate organizations, one on the political right and one on the left.

Renee DiResta (@noUpside) is an Ideas contributor for WIRED, the director of research at New Knowledge, and a Mozilla fellow on media, misinformation, and trust. She is affiliated with the Berkman-Klein Center at Harvard and the Data Science Institute at Columbia University.

The first barrage, which began in late August, was initiated by Global Citizen, an advocacy group fighting poverty. The second, first posted at 2 am Texas time Thursday morning, was initiated by a group sympathetic to Ted Cruz’s reelection campaign (it’s still unclear who) and it was pushed via a combination of an advocacy app and some of Twitter’s own ad tools.

They were amplified by a lot of real people—as well as a fair number of sketchy accounts.

Anyone who’s ever run a campaign—political, advocacy, or marketing—knows that getting attention is key to winning, and that getting attention is really hard. The days of simply blasting ads across a handful of television networks ended a decade ago. Today there are a near-infinite number of channels, websites, and subreddits where people spend their time, and competition for their attention is fierce: There are a limited number of ad-placement opportunities in a given user’s social media feeds, and numerous advertisers are bidding on them. Then there’s the fact that saturated audiences don’t pay much attention to ads anymore. Rather, word of mouth from trusted friends appears to make much more of an impact.

All of this pushes candidates and activists to use any means at their disposal to reach potential voters across the digital ecosystem: Facebook, Twitter, YouTube, email, and even texting are now used to remind constituents to vote or to urge them to call their representative.

The art of clicktivism—the use of social media to organize, support, or promote a cause—isn’t new, of course. For close to a decade now, activists and political organizations have used technology to capitalize on social ties and trust by turning friends into messaging amplifiers: Click to automatically email your member of Congress; click to share this funny video ad with your Facebook friends. But around the time of the US presidential election in 2016, it became apparent that fake people were also participating in clicktivism. Bots pushing out barrages of tweets to game trending algorithms; pwned identities posting responses to public calls for comment; sock puppets signing petitions to generate initial lift and subsequently inspire real people. Suddenly, legitimate activists and bad actors (some foreign, some domestic) were all operating with the exact same playbook. This made solving an already-difficult problem—misinformation and election interference—far thornier.

Misinformation researchers look at a collection of criteria to attempt to differentiate authentic campaigns from influence operations. These include the content (is the same exact message being pumped out en masse?), the voice (do the accounts participating in the campaign appear to be authentic?), and the dissemination pattern (is there evidence of mass automation?). No one wants to inadvertently misidentify a real activist. The challenge is that tactics designed to reduce the friction of participating are also ripe for easy exploitation.
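The first of those criteria—the exact same message pumped out en masse in a short burst—is the most mechanical, and a minimal sketch shows why it is also the easiest to exploit. The sketch below assumes tweets arrive as simple (author, text, timestamp) tuples; that schema, the function name, and the thresholds are illustrative assumptions, not Twitter's actual API or any researcher's real pipeline.

```python
from collections import defaultdict

def flag_coordinated_bursts(tweets, min_accounts=50, window_minutes=60):
    """Flag any message posted verbatim by many distinct accounts
    within a short window -- one dissemination signal researchers
    weigh, and one that both click-to-tweet activism and botnets
    trigger in exactly the same way.

    `tweets` is an iterable of (author, text, timestamp) tuples,
    where timestamp is a datetime. This schema is a hypothetical
    stand-in, not a real Twitter API shape.
    """
    # Group posts by their exact text.
    by_text = defaultdict(list)
    for author, text, ts in tweets:
        by_text[text].append((author, ts))

    flagged = []
    for text, posts in by_text.items():
        posts.sort(key=lambda p: p[1])  # order by timestamp
        distinct_authors = {author for author, _ in posts}
        span_minutes = (posts[-1][1] - posts[0][1]).total_seconds() / 60
        # Many distinct accounts, identical wording, tight time window.
        if len(distinct_authors) >= min_accounts and span_minutes <= window_minutes:
            flagged.append(text)
    return flagged
```

Note what the heuristic cannot do: it flags the pattern, not the intent. Sixty real supporters clicking the same prefab tweet card and sixty bot accounts look identical to this check, which is precisely the attribution problem the article describes.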

With the November 2018 midterms approaching, mass, coordinated action is picking up again. Hundreds of people simultaneously tweeting the exact same thing looks very much like the kind of automation used to manufacture consensus, dominate a hashtag, or game a trending algorithm. Organic outrage or excitement usually inspires a bit more variety in the commentary. And despite exhortations from politicians and mea culpas from technology executives, some of the platforms’ latest features, and the campaigns built on them, are actually making things worse.

Twitter’s own Conversational Ads feature incentivizes click-to-tweet behavior that creates the same mass repetition of content that seems likely to trigger a spam flag. The output makes real, passionate, politically engaged users look like a botnet. This is compounded by the fact that actual manipulative accounts join the mix. Although this feature is part of Twitter’s official Ads offering, it’s unclear if the Ted Cruz #IStand click-to-tweet campaign was actually an ad or where it originated; despite commitments to ad transparency, the call-to-action tweets don’t include a “paid for by” notice or appear in Twitter’s ads-transparency tool. The resulting tweets don’t tie back to the original call to action in any way. Once the message is put out repeatedly by real people clicking on the card, it’s not hard for dubious accounts to join in or artificially amplify it.

Bots and bad actors can opportunistically commandeer or hide inside legitimate clicktivism campaigns, making the line between activism and manipulation even blurrier. There’s certainly awareness of the potential for abuse. Facebook stopped allowing posts from apps like Thunderclap, which let real people temporarily “loan” their account to organizations to post and tweet on their behalf. But advocacy organizations still promote engagement by app. Global Citizen, which has 1.3 million Facebook followers, designed its app to incentivize users to click to share prefabricated tweets that tag political leaders and corporate Twitter accounts. Tweeting earns the user points, which can be redeemed in raffles for concert tickets and other swag. Unfortunately, looking at the hashtag on Twitter reveals that commercial spambots are also happily tweeting away, presumably earning points. It’s unclear how Twitter itself is treating these tweets; they come through an API and seem likely to trigger some of the new policies that Twitter has put in place to detect manipulative campaigns.

But more importantly, it’s also unclear how Twitter should treat these tweets. Political leaders have responded to the digital activism, quote-tweeting some of the prefab tweets to acknowledge the movement—and that’s the goal. A majority of these accounts are very likely real people expressing a real point of view; Global Citizen and Ted Cruz’s campaign are certainly not the first organizations to use these strategies.

The need to break through the noise to reach potential voters, the need to coordinate to reach political leaders, the need to generate awareness—these are real concerns that are paramount to campaigns and to our democratic process. But when digital activism intersects with the flawed algorithms that surface content on online platforms, legitimate outreach and awareness campaigns can become tangled up with accounts that are there to spam and masquerade.

There’s no clear line that divides legitimate clicktivism from more nefarious behavior—people have a right to tweet or share the things that inspire them. And mass, coordinated action run through an app isn’t necessarily less legitimate than real action that takes place offline. But platforms find themselves in the difficult position of deciding what’s real and what isn’t, which digital marketing tactics are OK and which are too easily exploited. Unfortunately, for the 2018 midterms—and the near future—internet users’ best tool to filter through the messages is their wits. Ask: Who is saying this? Why? And how is the message being spread? We’re all in the position of having to approach content online critically and skeptically while we wait for the platforms to come up with a solution.
