
When AI Hallucinates In Times Of War

At a time when two Asian countries with a most bitter history are engaged in a heated conflict, their citizens have taken up arms of their own on a different battlefield — social media. What most don’t realise, however, is that they are on an AI minefield

Graphic by Aarushi Agrawal for Asia Financial

Asia is on tenterhooks. India and Pakistan, two neighbouring countries in the continent's south, are currently engaged in a heated conflict that runs the risk of snowballing into war.

And while they are duking it out with jets, drones and propaganda, many on both sides have picked up artificial intelligence as their weapon of choice. The result has been a barrage of disinformation, AI-simulated warmongering and deepfakes that is fuelling fear and tension among many in the two nuclear-armed nations.

The menace of AI-led disinformation in this conflict began soon after last month’s terrorist attack in Indian-administered Jammu and Kashmir — the event that sparked off the current conflict. AI-generated images of the attack’s purported victims flooded social media — especially X — and many used these to incite viewers along religious lines. Some of these images were viewed more than a million times.

Then, this week, as India carried out targeted strikes on nine locations it said were “terrorist infrastructures” in Pakistan, the use of AI exploded. Indian news broadcasters — many notorious for their nationalistic reportage — looped AI simulations of the attack, bringing the conflict and jingoism straight to their viewers’ homes. One news platform created AI-simulated videos of India’s military actions, claiming they were “a realistic portrayal” — which would be impossible unless the creators of those videos were actually sitting in the aircraft that carried out the attacks.

Meanwhile, a different kind of disinformation sowed hate among Pakistani viewers. Recycled footage of violence, some of it from the 2020 Beirut Port explosion in Lebanon, was used to show the purported carnage from Indian strikes. Some images of the strikes were even screengrabs from the video game Battlefield 3, according to a report by the BBC.

The insanity of it all peaked on the night of May 8, however. Many Indian news channels broadcast a series of fake reports, going as far as to claim that India had attacked and captured Pakistan’s capital, Islamabad. In the midst of it all, a video emerged of Pakistan's Director General of Inter-Services Public Relations, Lieutenant General Ahmed Sharif Chaudhry, appearing to admit that India had shot down two of Pakistan’s prized JF-17 jets.

“We find it important to offer transparency and reaffirm our enduring commitment to regional stability… we regret to confirm that two JF17 aircraft were lost during active duty,” Chaudhry was seen and heard saying in the video. The video, more than a minute long, showed Chaudhry switching between English and Hindi to acknowledge a “deep loss” for Pakistan, and also featured an array of journalists seemingly attending the briefing.

A screengrab of the viral video showing Chaudhry. Via X

Those visuals spread like wildfire on social media. So much so that several Indian news publications — some of the country’s biggest names — went on to report the video as fact. It turns out, however, that the video was an AI-generated deepfake.

And if you’re reading this with a look of disbelief — here’s something worse. When some users on X asked the platform’s AI chatbot Grok whether the video was a deepfake, it answered differently each time, and in some cases with hallucinations — a phenomenon in which AI chatbots produce information that is factually incorrect.

One Indian journalist and fact-checker, Kritika Goel, listed Grok’s various responses in a LinkedIn post, where it can be seen claiming things like “credible reports confirm India shot down one Pakistani JF-17 jet” and “there is no evidence suggesting [the video] is AI-generated.”

Now, AI hallucinations are in no way a new problem. And disinformation in times of war is not a new phenomenon either — the Greek poet Aeschylus said as far back as the fifth century BC that “truth is the first casualty” in war.

What’s concerning is that as AI gets more powerful, scientists are finding that its hallucinations are actually worsening. And as we report below, no one seems to know why this is the case. So when an out-of-control technology starts spewing out-of-control lies… what would that mean for future wars?

Meanwhile, the conflict with Pakistan could put at risk India’s image as a ‘safe haven’ against Donald Trump’s tariffs and the $5 billion worth of inflows it has seen from foreign investors.
In other news, China has decided to dial down its rhetoric on US ‘bullying’ and begin trade talks as it fears tariffs could wipe out more than 10 million jobs.

Can’t say ‘I don’t know’

The AI industry has been grappling with the problem of hallucinations from the very beginning. Remember when Google lost $100 billion in market value after its AI chatbot was caught spewing inaccurate information at its launch? Leading AI developers like Google, OpenAI and Anthropic have all said they are working to fix the issue — and they have managed to keep hallucination rates between 1% and 4%.

But according to a New York Times report this week, the problem of hallucinations only worsens as developers build stronger AI tools. These tools — reasoning models — are designed to “think” through complex problems step by step before delivering an answer. And experts told the NYT that at each step of this “thinking” process, the models run the risk of hallucinating; the longer the chain of steps, the greater the odds that at least one of them goes wrong. The result? A much higher overall rate of hallucinations. OpenAI has said in its own research papers that its newer reasoning models, o3 and o4-mini, tend to hallucinate more, and that the company does not yet know why.

The latest viral AI phenomenon, China’s DeepSeek, is an even bigger offender when it comes to hallucinations. DeepSeek’s R1 reasoning model has a hallucination rate of 14.3%.

Some say the answer is simply to teach chatbots to admit “I don’t know.” But experts told the Wall Street Journal last month that this creates problems of its own: “if you don’t guess anything, you don’t have any chance of succeeding.”

Key Numbers 💣️ 

Sustain-It 🌿 
Hallucinations are not AI’s only problem. The technology is also notorious for driving up energy demand which, some warn, will only deepen the world’s dependence on coal-fired power plants. Yet around 2,000 coal-fired power plants need to be decommissioned between now and 2040 to meet global climate targets, the International Energy Agency says. So, in that regard, we had some good news this week from the Rockefeller Foundation, which launched a new carbon finance scheme to help phase out coal power plants in developing countries. The foundation is looking to sign up 60 projects in the next five years for early coal plant shutdowns under a scheme that aims to use carbon finance to help replace them with renewable power.

The Big Quote

“Despite our best efforts, they will always hallucinate… That will never go away.”

Amr Awadallah, the chief of Vectara, a start-up that builds AI tools for businesses, and a former Google executive, to the New York Times

Also On Our Radar