Would I Lie to You? The Weaponisation of Social Media

December 5, 2022


While rumours, false information and fabricated content have long existed, new technologies are challenging information integrity in new and alarming ways. It is now acknowledged that, in some instances, digital technologies pose a severe threat to democratic practices across the ideological spectrum.

Social media platforms were once assessed positively for expanding participatory potential; that optimism has given way to an increased awareness of the risks that a largely unregulated sphere poses to democracies and human rights. "Out-group" communities are especially susceptible to organised disinformation, with ethnic and religious groups, migrants and refugees being among those targeted.

Extremist groups, seeking anonymised, transnational communication and connections, have weaponised social media and digital platforms to radicalise and recruit vulnerable individuals. The Islamic State was a pioneer in the use of online and offline media to inspire, recruit and mobilise individuals and groups to carry out acts of terror.

This has prompted research into how purposeful, targeted disinformation affects individuals and groups. Efforts to counter the threat have focused on monitoring and disrupting the information streams that manipulate and distort information, and on countering organised, coordinated disinformation operations.

However, evidence of the influence and impact of disinformation on political violence, extremism and radicalisation remains limited. What is more evident, and more measurable, is the impact of organised disinformation and propaganda that increases polarisation and intensifies social divisions.

So, what can be done to counter, disrupt or inoculate against the tide of organized information pollution?

The tools in the arsenal for combatting disinformation are now well-known: strategic communications interventions aimed at countering content-based narratives; technology, including artificial intelligence (AI), that can block content and boost or down-rate searchability; and fact-checking interventions by civil society, international organisations and academia undertaken across the globe. But, according to Nicola Mathieson, Research Director of the Global Network on Extremism and Technology at the International Centre for the Study of Radicalisation (ICSR), the one thing these interventions have in common is that "we just don't know how well any of them work". Mathieson points out some of the limitations of fact-checking initiatives when it comes to preventing violent extremism:

"The beauty of radicalization is you don't know what's happened to you…but it also means you're given these counter-narratives built into the conspiracy or the ideology. So, when a state comes forward with fact-checking information, or information about, for example, Covid — Covid is a really good example — the counter-narrative is already built in: that the state is lying to you. Of course, this is what they're going to say. They're going to say you're extremist, they're going to say you're stupid and unintelligent and not following the science."

Much of the pioneering work on countering disinformation in the context of preventing and countering violent extremism (PVE/CVE) has now been broadened to look at intentional harm aimed at democratic foundations such as government institutions, the media and elections.

Rappler has been at the forefront of multi-stakeholder research into disinformation in the Philippines. This unique position has allowed Rappler to track and measure the way that misinformation and disinformation in the Philippines have transformed over time.

Gemma Mendoza is a journalist based in Manila and the lead researcher on disinformation and platforms and head of digital services at Rappler.

“We saw the shift online from 2016 and moving forward…from happy to anger. There was a marked difference in the language…and, and that's something that data scientists we've worked with have also noted. So, there's that, there's the question of narratives that are being seeded. There are questions of attacks against newsrooms. So, this is not just a question of misinformation that is, you know, accidentally shared, there is a purpose to the disinformation.”

Social media platforms, with a business model focused on increasing engagement online, are, according to Gemma Mendoza, a significant part of the problem and must, as such, be engaged to provide solutions. Yasir Khan, Editor-in-Chief at the Thomson Reuters Foundation, explains that "Social platforms, they favour engagement over expertise, right? They're in this business to keep eyeballs on their platform… social platforms listen to their subscribers; social platforms and their algorithms adapt to how their subscribers and account holders behave. They adapt to their behaviour".

There are no simple solutions to countering disinformation. More research is required, not only to counter disinformation but also to build and rebuild trust in democratic institutions.

“We will be fighting disinformation for a very, very long time, but we will be fighting it if we're smart from many different directions: from the educational perspective; from the public health perspective that we just discovered during Covid; from a national security perspective, there are multiple layers to this. The people who create these problems have had it very, very easy. It's easy to create these problems, but in order to fix ourselves and our societies, it's going to take us a long time and…we cannot afford to ignore any of these angles”, said Khan.



Listen to our new podcast episode on disinformation and extremism on the Prevention of Violent Extremism Portal to learn more: https://pveportal.org/ or listen on Spotify.