India's Anti-Pakistan Propaganda Project: AI-Generated Fake Social Media Accounts
In the digital era, where artificial intelligence (AI) plays a significant role and information spreads instantaneously, online media has become a key battleground for competing narratives, ideologies, and geopolitical strategies.
This is particularly evident in the tense relations between India and Pakistan, where a virtual conflict is ongoing, marked by India’s use of propaganda and disinformation to shape public opinion and influence global discourse. As Nazi propaganda minister Joseph Goebbels famously said, “If you tell a lie big enough and keep repeating it, people will eventually come to believe it.”
This approach seems to drive Hindutva propagandists, as disinformation thrives on emotional manipulation and ambiguity. India exploits historical grievances, cultural stereotypes, and fear to craft a potent mix. False narratives regarding Pakistan’s nuclear program, alleged state-sponsored terrorism and religious extremism have taken root online, influencing public perceptions and policy far beyond the reach of mere trolls and bots.
A vast network of AI-driven fake social media accounts has recently come to light, having pushed pro-Indian government and military narratives for the past three years. Under this operation, over 1,400 accounts across X and Facebook spread propaganda aimed at influencing Indian audiences. The campaign went largely unnoticed and focused on promoting pro-India and anti-Pakistan content, while also targeting other countries, including China and Bangladesh.
This case shows how AI-generated profiles are increasingly used to serve nationalistic agendas and shape public opinion in cyberspace. Countering such campaigns requires state-of-the-art monitoring services, digital footprint analysis, and online risk assessment tools amid the growing complexity of the digital landscape.
Researchers found that the network consists of at least 500 Facebook and 904 X accounts, which have been continually posting pro-Indian government and military content since September 2021. The accounts operated under fictitious usernames, praising Modi’s administration and presenting the Indian military as a force to be reckoned with, while pushing hostile narratives about Pakistan and other neighbouring states.
Still, the implementation of the operation appeared unsophisticated. Countless posts carried near-identical accompanying text that seemed to contain fragments of AI-generated content. The accounts rarely published false information themselves; instead, they amplified posts from government-friendly publications such as the Hindustan Times. Researchers noted that many posts featured clumsy, half-formed sentences and poor English, suggesting the content received little human oversight.
Information operations powered by AI are becoming more common, but this operation was conducted so incompetently that spotting its patterns was relatively simple. For instance, in June, a pro-India account called JK News Network accused Pakistan of ill-treating religious minorities in Balochistan. Within the same propaganda network, some 429 other accounts repeated the identical message word for word.
This kind of repetition made the operation look crude and undermined its credibility. Though AI can generate hundreds of pieces of content within a short time, campaigns that lack proper oversight or creative spin are bound to fail to resonate with the public.
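Word-for-word repetition across hundreds of accounts is precisely the signal researchers look for when exposing coordinated networks. As a minimal sketch of the idea, assuming posts have already been collected as (account, text) pairs — the function names and threshold below are illustrative, not any platform’s actual detection method:

```python
import hashlib
from collections import defaultdict

def normalize(text: str) -> str:
    # Lowercase and collapse whitespace so trivial edits
    # (extra spaces, capitalization) don't hide duplicates.
    return " ".join(text.lower().split())

def find_copy_paste_clusters(posts, min_accounts=20):
    """Group accounts that posted an identical message.

    posts: iterable of (account_id, text) pairs.
    Returns clusters where at least `min_accounts` distinct
    accounts published the same normalized text.
    """
    clusters = defaultdict(set)
    for account_id, text in posts:
        digest = hashlib.sha256(normalize(text).encode()).hexdigest()
        clusters[digest].add(account_id)
    return {h: accts for h, accts in clusters.items()
            if len(accts) >= min_accounts}

# Toy example: three accounts parroting the same line are flagged
# once the threshold is lowered to fit the sample size.
sample = [
    ("acct_1", "Pakistan mistreats minorities in Balochistan."),
    ("acct_2", "pakistan mistreats minorities in balochistan."),
    ("acct_3", "Pakistan mistreats  minorities in Balochistan."),
]
print(find_copy_paste_clusters(sample, min_accounts=3))
```

Normalizing before hashing means cosmetic edits such as capitalization or extra spaces cannot hide the copy-paste behaviour, which is why a network reposting one message 429 times is trivially exposed.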
Despite all these endeavours, there is little evidence that the network reached its intended audience, as its posts attracted virtually no engagement or attention. Experts suggest that among the reasons for the campaign’s failure to connect with genuine public conversation is that it simply ran out of creative ideas.
While few influence campaigns make a direct impact, they can still pose risks. Even without viral diffusion, the sheer volume of such posts can create noise within the online environment, confusing users and distorting the conversation.
Key platforms such as X and Facebook have long been targets of fake accounts and coordinated disinformation campaigns. Although efforts to detect and remove inauthentic behaviour have picked up the pace, campaigns of this kind vividly illustrate the many challenges ahead.
Many of the network’s accounts sought to evade detection through tactics such as changing usernames and deleting or slightly modifying posts to create an appearance of authenticity. The exposure of this propaganda network underscores the need for advanced digital footprint analysis and threat monitoring tools to discern and dismantle such operations. Governments and agencies must invest in tools such as darknet monitoring, brand protection services, and digital threat scoring to counter these threats. Such tools can raise alerts and help identify fake accounts, misleading content, and harmful online activities.
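To make the notion of “digital threat scoring” concrete, here is a minimal, hypothetical Python heuristic that weighs the red flags described above. The features, weights, and threshold are assumptions for illustration only, not any vendor’s actual scoring model:

```python
from dataclasses import dataclass

@dataclass
class AccountSignals:
    username_changes: int         # renames observed in the monitoring window
    duplicate_post_ratio: float   # share of posts identical to other accounts'
    deleted_post_ratio: float     # share of posts later deleted or edited
    engagement_per_post: float    # average likes/replies per post

def threat_score(sig: AccountSignals) -> float:
    """Crude weighted score: higher means more likely inauthentic.

    Weights are illustrative; a real system would fit them
    on labelled takedown data.
    """
    score = 0.0
    score += min(sig.username_changes, 5) * 0.15           # frequent renames
    score += sig.duplicate_post_ratio * 0.40               # copy-paste amplification
    score += sig.deleted_post_ratio * 0.25                 # evasive cleanup
    score += 0.20 if sig.engagement_per_post < 1 else 0.0  # high output, no audience
    return round(score, 2)

# An account matching this network's profile: many renames, mostly
# duplicated content, evasive deletions, near-zero engagement.
suspect = AccountSignals(username_changes=4,
                         duplicate_post_ratio=0.9,
                         deleted_post_ratio=0.3,
                         engagement_per_post=0.1)
print(threat_score(suspect))  # well above a hypothetical 1.0 review threshold
```

Each signal mirrors a behaviour reported in this case: username churn, verbatim duplication, post deletion, and output that vastly outstrips engagement.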
With AI tools and techniques more widely available than ever, their misuse for online propaganda is increasingly likely. It is against this evolving morphology of digital threats that social media platforms, governments, and cybersecurity experts must collaborate to detect and counter such operations. Collective vigilance and more sophisticated tools can safeguard online discourse while preventing both foreign and domestic disinformation campaigns.
Shahzad Masood Roomi