How Hurricane Milton and Helene conspiracy theories took over social media
A deluge of misinformation online about back-to-back hurricanes in the US has been fuelled by a social media universe that rewards engagement over truth.
The scale and speed of false rumours about Hurricane Helene and Hurricane Milton have been unlike most of the online frenzies I’ve investigated before.
Viral posts have ranged from seemingly innocuous questions about the legitimacy of forecasts and rescue efforts, to false claims – repeated by Donald Trump – that hurricane relief funds are being spent on migrants who entered the US illegally.
Others spread false images of the wreckage – pictures of children fleeing devastation faked with artificial intelligence (AI), old clips showing different storms, or videos made with computer-generated imagery (CGI). And then there were those who shared false, evidence-free conspiracy theories about the government manipulating – or “geo-engineering” – the weather.
“Yes they can control the weather,” wrote Congresswoman Marjorie Taylor Greene last week on X.
Most of the viral misinformation has come from social media profiles which have blue ticks and a track record of sharing conspiracy theories. Several accounts which spread Hurricane Milton misinformation this week had previously shared posts suggesting real-life events – from elections and political violence to the pandemic and wars – were staged or rigged.
I messaged dozens of accounts that shared false and misleading posts about both hurricanes on X. Many seemed able to go viral precisely because of changes made at X since Elon Musk became its owner. While the blue tick used to be given only to people who had been verified and vetted, users can now simply purchase one. The algorithm, in turn, gives their posts greater prominence. They can also profit from sharing posts, regardless of whether those posts are true.
X’s revenue-sharing policy means that blue-tick users can earn a share of the revenue from the ads in their replies. On 9 October, the site announced that “payouts are increasing”, and that accounts would now be paid based on engagement from other users with paid Premium memberships, rather than on the adverts in their replies.
This has incentivised some users to share whatever will go viral – however untrue it is. Several of those I messaged acknowledged that they benefit from the engagement their posts generate, and that they share content they know will get attention.
It’s true that most social media companies allow users to make money from views. But YouTube, TikTok, Instagram and Facebook have guidelines that allow them to demonetise or suspend profiles that spread misinformation, and say they label misleading posts. X has no comparable guidelines on misinformation.
While it has rules against faked AI content and “Community Notes” that add context to posts, it has removed a feature which allowed users to report misleading information.
X did not respond to the BBC’s request for comment.
Misleading posts that go viral on X can also travel to the comment sections of videos on other sites, showing how an idea shared on one platform can spread through the wider social media ecosystem.
“Wild Mother”, a social media influencer who regularly shares unproven theories across different sites, said that four years ago, her comments were filled with “people calling me names, denying it”.
“And now, I was surprised to see that nearly every comment is in agreement,” she said, referring to a recent post discussing conspiracy theories about geo-engineering and the recent hurricanes.
There is a real-world impact to this kind of disinformation: it can undermine trust in the authorities – in this case, during a complex rescue and recovery operation following Hurricane Milton.
Although misinformation has always spread during natural disasters, there’s a crucial difference between these and previous storms. For one, the falsehoods are reaching more people – fewer than three dozen false or abusive posts were viewed 160 million times on X, according to the Institute for Strategic Dialogue (ISD) think tank.
They have also taken on a sharper political edge because of the impending US presidential election.
Many of the most viral posts come from accounts that support Donald Trump, the ISD found, and they take aim at foreign aid and migrants.
Several posts and videos have even targeted relief workers, accusing them of “treason” for supposedly taking part in outlandish, evidence-free plots.
The anger and distrust this fosters risk inhibiting efforts on the ground. Ahead of an election, they also risk undermining wider faith in how government and its systems work, and overshadowing any legitimate criticism of the government’s efforts.
While Wild Mother, and people like her, choose to view this as a sign that “more and more people are waking up to reality”, I see it as a sign that these conspiracy theories are gaining a wider audience.
She told me that “a well informed collective is much harder to control”. In other words, the more people who believe these kinds of evidence-free conspiracy theories, the harder they are to combat.
This ultimately comes down to the way algorithms across social media sites favour engagement above all else. These conspiracy theories, false claims and hate can reach hundreds of thousands of people before anyone realises they are untrue – and those sharing them can be rewarded with views, likes, followers or money in return.
Listen to more of Marianna Spring’s analysis on Americast on BBC Sounds. She also investigates how social media is shaping the presidential election in BBC Radio 4’s Why Do You Hate Me? USA.
With additional reporting from Mike Wendling and BBC Verify