“Truth is the first casualty in war” – Aeschylus
Pakistan has been entrenched in a state of conflict for the last two decades. Following 9/11, a cascade of geopolitical events plunged the country into a vicious cycle of violence and instability. Apart from the religiously motivated Tehreek-e-Taliban Pakistan (TTP), dormant threats were also resurrected, such as the ethno-linguistic insurgency in Baluchistan, a province that had otherwise been largely quiet on the internal security front since the 1970s. Beyond conventional methods of violence, these organizations have developed sophisticated tactics of propaganda and misinformation. Social media and other emerging technologies have played a crucial role in supporting their psychological operations (psy-ops) against the security forces of Pakistan.
It is no secret that social media has become a powerful weapon of warfare in the 21st century. Along with revolutionizing communication, entertainment and information, social media has also become a tool for manipulation, subversion and deception, and this is exactly how religiously and politically extremist organizations have been deploying it. As a tactic of modern warfare, political actors, both state and non-state, use social media for polarization and propaganda campaigns to shape public opinion in their favour and to undermine democracy and human rights.
Hostile intelligence agencies are also actively engaged in this contemporary warfare against Pakistan. The phenomenon became apparent in recent years when investigations, most notably the EU DisinfoLab's "Indian Chronicles" report, revealed that India had established a worldwide network of fake news outlets and used them to malign Pakistan. These outlets generated large volumes of propaganda material for online consumption, mostly targeting the Pakistani masses, particularly young minds, through social media.
In warfare and statecraft, propaganda is not a new phenomenon, but social media and now artificial intelligence have propelled it to an unprecedented level of sophistication and scale. With the help of artificial intelligence (AI), propaganda can be generated, distributed and tailored to specific audiences and contexts, with minimal human intervention and oversight. AI can also be used to create fake or manipulated content, such as videos, images and texts, that is nearly indistinguishable from reality; such fabricated media, produced with generative AI, are commonly known as deepfakes. These are not mere assumptions: there have been documented cases of this growing trend of using AI alongside social media by various actors in different parts of the world.
Some examples of how social media and AI are used for propaganda warfare are:
– In 2023, Venezuelan state media outlets circulated videos made with Synthesia, a commercial platform for AI-generated video avatars, showing news anchors from a non-existent international English-language channel spreading pro-government messages.
– In the United States, AI-manipulated videos and images of political leaders circulated on social media, depicting President Biden making transphobic comments and Donald Trump hugging Anthony Fauci.
– In 2023, U.S. government agencies were reported to have adopted fake social media identities with AI-created profile pictures to target and capture potential suspects.
– In 2023, Ukraine became a living lab for AI warfare, as Russian-backed separatists used AI to generate disinformation and propaganda on social media platforms, targeting Ukrainian soldiers and civilians.
– In 2023, ISIS used AI to boost its online recruitment efforts, creating personalized messages and content for potential sympathizers and recruits.
These examples show how social media and AI are reshaping propaganda and disinformation operations in modern warfare. They pose serious challenges to democracy, security and human rights. They also raise ethical and legal questions about the responsibility and accountability of the actors involved, as well as the regulation and verification of online content.
In Pakistan’s context, evidence of extensive use of AI by anti-state organizations is not yet available; however, social media platforms have now begun offering AI-enabled tools, and it is only a matter of time before anti-state propaganda increases manifold in both frequency and sophistication. The Pakistani state needs counter-measures in advance, and the nation needs to be made aware of this new era of falsehood.
To counter these threats, the government, civil society and the private sector should start working together to develop effective strategies and solutions. Some possible measures are:
– Enhancing digital literacy and critical thinking skills among the public, especially the youth, to help them identify and resist propaganda and disinformation.
– Pakistan must advocate through diplomatic channels for transparency and accountability of social media platforms and AI developers, requiring them to disclose their sources, methods and algorithms, and to monitor and flag harmful or misleading content.
– Developing technical tools and standards for detecting and verifying online content, such as digital watermarks, blockchain-based authentication and independent fact-checking organizations. The government must also encourage and support the private sector to invest in and introduce new social media platforms with infrastructure hosted within Pakistan.
– Pakistan’s foreign policy must strive to strengthen international cooperation and dialogue among stakeholders by sharing best practices, exchanging information and building trust. China can help Pakistan in this area.
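The content-verification measure above can be illustrated in miniature. The sketch below, a simplified assumption rather than any deployed standard, shows the core idea behind content authentication: a publisher issues a cryptographic tag for each piece of original media, and anyone can later check whether a circulating copy still matches it. Real provenance systems (for example, C2PA-style signed manifests) use public-key signatures; this demo uses a shared-secret HMAC purely for brevity, and the key and messages are hypothetical.

```python
import hashlib
import hmac

# Assumption for illustration only: the publisher holds this secret key.
PUBLISHER_KEY = b"demo-secret-key"

def issue_tag(content: bytes) -> str:
    """Publisher side: compute an authentication tag for original content."""
    return hmac.new(PUBLISHER_KEY, content, hashlib.sha256).hexdigest()

def verify_tag(content: bytes, tag: str) -> bool:
    """Verifier side: recompute the tag; any alteration to the content
    produces a different digest, so tampered media fails the check."""
    expected = hmac.new(PUBLISHER_KEY, content, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)

original = b"Official statement: no unusual activity reported."
tag = issue_tag(original)

print(verify_tag(original, tag))                                 # True
print(verify_tag(b"Official statement: border incident.", tag))  # False
```

The design point is that verification does not require judging the content itself, only whether it matches what the original source actually published, which is why watermarking and signed-manifest schemes scale better than manual fact-checking alone.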
Social media and AI have transformed the nature of modern warfare. They have also created new opportunities for dialogue, participation and empowerment. It is up to us to use them wisely and responsibly.