
Social Media's Disinformation Detox

  • Writer: Xuhong L.
  • Aug 5
  • 8 min read

Cover image credit: Stacey Ngiam


Disinformation is believed to have first gained prominence in political discourse through a Russian influence campaign targeting the 2016 U.S. Presidential Election. Viewed as a testing ground for subsequent operations, the campaign targeted users across major social media platforms in attempts to spread disinformation across both the political left and right. While its efficacy remains hotly contested to this day, it is certain that the Russians wasted no time in gleaning invaluable insights from the campaign itself.


As the digital natives of Generation Z reach voting age, social media has emerged as a crucial campaign front in elections around the world. These platforms greatly increase the accessibility of political campaigns, as voters can engage more directly with candidates without needing to attend physical rallies.


Meanwhile, candidates can also push tailored messages to specific voter demographics in the form of targeted political advertising. Donald Trump himself took to Twitter (and now, Truth Social) on a near-daily basis to express his thoughts on just about anything on his mind. Since Trump's second term began this year, he has doubled down on social media use - over his first 100 days, Trump published more than 1,600 posts, or around 16 posts per day on average.


Amid rising distrust in traditional media outlets, young people have overwhelmingly turned to social media for news updates that they perceive as more authentic. Unsurprisingly, candidates who project a relatable image and foster authentic connections with the youth segment of their electorate on social media can outperform incumbents with a weaker social media presence.


For instance, YouTube personality Fidias Panayiotou secured a record-breaking 71,330 votes and was elected in the 2024 European Parliament elections despite campaigning as an independent candidate without any major party affiliation. Similarly, consultancy firm Mitton found that TikTok "significantly influenced" Finland's spring 2023 elections, where many young voters favoured the nationalist-leaning Finns Party, which had a strong TikTok presence.


However, the indispensable role that social media plays in election campaigning has not gone unnoticed by threat actors either. The popularity and accessibility of social media platforms have created fertile ground for both state and non-state actors to conduct influence operations aimed at subtly reshaping people's beliefs and behaviour to further their own agendas.


Since the 2016 Russian influence campaign, evidence of electoral interference has emerged in almost every major election across all corners of Europe, posing a formidable challenge for democracy in the 21st century. When left unchecked, malign operations that compromise the free formation of voters’ opinions (e.g. disseminating falsehoods regarding candidates) can undermine the very core of democracy. After all, voters’ trust in democratic processes, once lost, is never easily regained.  


But first, let’s examine some key strategies that threat actors employ to conduct influence operations on social media.  

  

The Rise of Inauthentic Behaviour


For ease of analysis, we will cover two broad (but non-exhaustive) categories of influence operations in this section: information manipulation and information pollution.


Information manipulation (for instance, the spread of disinformation) refers to the process by which adversaries distort or falsify existing narratives in a manner that aligns more closely with their own agendas. In contrast to the tightly regulated access to paid political advertising on traditional media such as television and radio, relatively few legal guidelines apply to publishing political ads on social media.


Even when disclosure rules are in place, they can be easily circumvented through the establishment of shell entities designed to obscure the origin of funds. Moreover, investigations by The Insider revealed that slickly designed posts and short-form videos by popular bloggers can be bought for just over €100 each. In other words, coordinated advertising campaigns can be initiated by just about anyone with a bit of spare change - and the Kremlin allegedly has a billion-euro budget to do just that.


This worrying trend repeated itself in Romania's elections last year, in which pro-Russian candidate Călin Georgescu's first-round victory was annulled over intelligence suggesting that his win was the result of a well-funded Kremlin campaign involving thousands of TikTok accounts that lifted the candidate from relative obscurity.


As the dust settles, TikTok now faces enhanced scrutiny in the form of an EU investigation over its policies on political advertisements and algorithmic recommendations (paid political advertising is technically banned on TikTok). 


Meanwhile, malicious ad campaigns may also direct users to fabricated news sites that closely mimic the real deal. One such example is NewsFront, a Crimea-based site that purports to offer objective coverage to a European audience in 10 different languages. In reality, it routinely echoes Moscow's talking points justifying the invasion of Ukraine while masquerading as a legitimate news outlet.



The Kremlin has been found to be behind inauthentic news sites that distribute Russian propaganda. Image credit: The New York Times

Another worrying implication stems from ongoing disputes between social media platforms and legacy news outlets. Meta, for instance, elected to wind down its Facebook News feature across multiple jurisdictions in early 2024 and terminate fee agreements amidst regulatory pressure from both Australia and Canada compelling it to pay fairer (read: higher) content fees.


However, Facebook's decision to deplatform news articles from reputable outlets for Canadian users in response to Bill C-18 has only degraded the quality of information available: a torrent of ads touting stories of dubious veracity has filled the gap, undermining information integrity in the run-up to the 2025 Canadian elections.


Moreover, ads may also form part of a broader harassment campaign targeting individuals or organisations that challenge the influence operation itself, such as fact-checkers and investigative journalists. By discrediting figures who attempt to identify and rectify falsehoods, influence actors gain a stronger hand in challenging the dominant (mainstream) narrative.


Another tool in the threat actor's arsenal is information pollution: unleashing an overwhelming amount of content within a relatively brief span of time with the aim of sowing confusion and diverting attention away from more reliable sources of information.


A textbook example involves tens of thousands of inauthentic accounts controlled by bot farms flooding a platform with posts and comments in hopes of saturating the conversation. Encountering such falsehoods at high frequency renders users more susceptible to influence, as they struggle to separate fact from fiction. Click farms have also sprouted up, mainly in developing nations, utilising cheap labour to generate false clicks on advertisements.


Fictional representation of fake engagement on social media. Image credit: Stacey Ngiam

Information polluters also abuse social media algorithms to their advantage. The same bot accounts are often used to expand the reach of counternarratives by resharing large volumes of content. Legitimate users who encounter this content are then nudged to engage with it (e.g. by commenting and resharing), driving its exposure up further, as social media algorithms typically reward engaging content by showcasing it more prominently to a larger pool of users.
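To see why coordinated resharing is such an effective amplification tactic, consider a minimal sketch of an engagement-weighted ranking score. This is a toy illustration only: the names, weights and numbers below are invented for the example and do not reflect any platform's actual ranking algorithm.

```python
# Toy model of engagement-based ranking (hypothetical weights, not any
# platform's real formula). A post's score grows with raw engagement counts,
# so inauthentic comments and reshares from bot accounts can lift
# low-quality content above organic posts.
from dataclasses import dataclass

@dataclass
class Post:
    author: str
    likes: int
    comments: int
    reshares: int

def engagement_score(post: Post) -> float:
    # Assumed weights: reshares and comments count for more than likes,
    # mirroring how "high-engagement" signals tend to be rewarded.
    return 1.0 * post.likes + 3.0 * post.comments + 5.0 * post.reshares

organic = Post("local_news", likes=400, comments=30, reshares=20)
boosted = Post("troll_page", likes=50, comments=600, reshares=900)  # bot-inflated

for post in sorted([organic, boosted], key=engagement_score, reverse=True):
    print(f"{post.author}: score = {engagement_score(post):.0f}")
# The bot-boosted post outranks the organic one despite far fewer genuine
# likes - which is exactly the amplification effect described above.
```

Because the score only sees aggregate counts, it cannot distinguish genuine enthusiasm from a bot farm's output - which is why platforms must layer inauthentic-behaviour detection on top of engagement-based ranking.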


Brussels Leads the Way  


In response to the growing threat of foreign influence operations, countries have increasingly recognised the need to regulate the digital sphere. The European Union has been a pioneer in this regard, spearheading a digital strategy that sets out a strict set of obligations that major social media platforms must abide by with regard to maintaining information integrity online.


As previously covered, the EU's Digital Services Act (DSA) is a landmark piece of legislation that emphasises the need to improve the transparency of information hosted on designated "very large online platforms" (VLOPs), a category that encompasses all major social media platforms. It spells out a series of obligations that platforms must abide by, failing which a fine of up to 6% of global annual revenue may be imposed.


For instance, Articles 34 and 35 recognise that content with "negative effects on civic discourse and electoral processes" contributes to "systemic risk" and must therefore be mitigated by the platform in question. In other words, the laissez-faire status quo must end under the DSA.


More recently, the EU's Code of Practice on Disinformation (CoP) has been incorporated into the DSA as a legally binding provision under Article 35, effective July 1, 2025. First drafted as a self-regulatory framework in 2018, the CoP has since been updated to include a total of 44 commitments that platform signatories could voluntarily adopt, covering areas such as the demonetisation of disinformation, the transparency of political advertising and the integrity of user-generated content. Notably, the CoP spells out an explicit requirement for platforms to work with fact-checkers and researchers to combat disinformation.


Shifting Winds of Political Influence


Naturally, major platforms have not taken too kindly to this development, preferring to align themselves with Trump’s regulation-lite approach instead. 


Berlin-based think tank Democracy Reporting International found that, on average, online platforms cut their CoP commitments by 31% in 2025. The major areas platforms withdrew from include fact-checking (Microsoft and Google) as well as political advertising, with the main justification being that in-house fact-checking tools would suffice.


Meta, which once proclaimed that it had spent a cumulative $100mn on fact-checking initiatives, confirmed in January 2025 that it intends to end third-party fact-checking across all of its platforms in favour of a community-based approach, beginning with US-based users. In light of still-untested EU regulation (which such a move would likely violate), Meta said it has "no immediate plans" to terminate fact-checking in the EU.


As for political advertising, the three platforms (Google, Microsoft and TikTok) that withdrew from previous CoP commitments have cited their existing bans on political ads. However, as established previously, these bans remain far from watertight in practice.


With the CoP now carrying the force of law, platforms will also be audited annually on their compliance with the commitments they have made, as per Article 37(1)(b). The CoP is now regarded as a set of industry best practices that the Digital Services Coordinator in each member state can use to assess compliance with the DSA. While platforms remain free to decide which specific commitments they wish to subscribe to, a refusal to participate in the CoP could be interpreted as a potential violation of a platform's overall DSA obligations and could warrant a stiffer penalty should infringements be found.


Change is also on the horizon for political advertising. From 10 October 2025 onwards, stricter transparency requirements will apply to online political advertising that targets EU citizens. Key elements include mandatory labels providing additional context on each ad's sponsor, while ads paid for by entities based outside the EU are banned in the three months prior to any election.


As social media establishes itself as the digital town square for large swathes of the population, the need to introduce rules that safeguard the integrity of conversations inevitably arises. The EU's regulation-heavy approach has elicited sharp rebukes from the White House and will likely remain the target of legal wrangling for years to come. Still, other nations around the world have referenced the EU's approach in formulating their own rules to safeguard their citizens' digital spaces - the one silver lining in the long-drawn battle over tech regulation.


References

  1. Cecco, L. (2025, April 18). Dramatic rise in fake political content on social media as Canada prepares to vote. The Guardian. https://www.theguardian.com/world/2025/apr/18/canada-fake-political-content-social-media

  2. Cole, D. (2024, December 6). Romanian court annuls first round of presidential election. The Guardian. https://www.theguardian.com/world/2024/dec/06/romanian-court-annuls-first-round-of-presidential-election

  3. Diaz Crego, M. (2024, April). Towards new rules on transparency and targeting of political advertising. European Parliamentary Research Service. https://www.europarl.europa.eu/RegData/etudes/BRIE/2022/733592/EPRS_BRI(2022)733592_EN.pdf

  4. European Commission. (2022). Code of Practice on Disinformation | Shaping Europe’s digital future. https://digital-strategy.ec.europa.eu/en/policies/code-practice-disinformation

  5. Graves, L. (2025, January 13). Will the EU fight for the truth on Facebook and Instagram? The Guardian. https://www.theguardian.com/technology/2025/jan/13/meta-facebook-factchecking-eu

  6. Harwell, D., Morse, C. E., & Davies, E. (2025, June 3). Tallying Trump’s online posting frenzy: 2,262 “truths” in 132 days. The Washington Post. https://www.washingtonpost.com/technology/2025/06/03/trump-truth-social-twitter/

  7. Humphrey, M. (2021). I analyzed all of Trump’s tweets to find out what he was really saying. The Conversation. https://theconversation.com/i-analyzed-all-of-trumps-tweets-to-find-out-what-he-was-really-saying-154532

  8. Loftus, A. (2024, December 17). EU investigates TikTok over alleged Russian meddling in Romanian vote. BBC. https://www.bbc.com/news/articles/cm2v13nz202o

  9. Lyndell, D. (2022, December 28). TikTok in service of FSB. How a social network for funny videos turned into a Kremlin propaganda mouthpiece. The Insider. https://theins.ru/en/society/258228

  10. Pastor-Galindo, J. (2025, February 17). Influence Operations in Social Networks. Arxiv.org. https://arxiv.org/html/2502.11827v1

  11. Radu, R. (2025, June 25). TikTok, Telegram, and Trust: Urgent Lessons from Romania’s Election. Tech Policy Press. https://www.techpolicy.press/tiktok-telegram-and-trust-urgent-lessons-from-romanias-election/

  12. Rincón, D. A., & Meyer-Resende, M. (2025, July 2). Big tech is backing out of commitments countering disinformation—What’s Next for the EU’s Code of Practice? Democracy Reporting International. https://democracy-reporting.org/en/office/EU/publications/big-tech-is-backing-out-of-commitments-countering-disinformation-whats-next-for-the-eus-code-of-practice

  13. Seibt, S. (2024, February 27). “Kremlin Leaks”: Files detail Putin’s €1 billion propaganda effort ahead of presidential vote. France 24. https://www.france24.com/en/europe/20240227-kremlin-leaks-files-detail-putin-s-%E2%82%AC1-billion-propaganda-effort-ahead-of-presidential-vote

  14. Thorne, E. W. (2024, March). Meta winds down Facebook News. LinkedIn News. https://www.linkedin.com/news/story/meta-winds-down-facebook-news-5957988/

  15. Transparency and targeting of political advertising | EUR-Lex. (2025, October 10). EUR-Lex. https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=legissum:4741696

  16. Vanttinen, P. (2023, June 6). TikTok greatly influenced Finland’s latest elections: survey. Euractiv. https://www.euractiv.com/section/politics/news/tiktok-greatly-influenced-finlands-latest-elections-survey/




