In moments of heightened geopolitical tension, visuals often travel faster than facts. The current Iran–US–Israel conflict has brought a wave of recycled footage, altered photos and AI‑generated scenes circulating online, each capable of shaping public perception long before the truth catches up.

To help make sense of this landscape, we turn to Dr Sabrina Caldwell, a leading authority on digital image integrity and co‑chair of the JPEG Trust standard. She explains why conflicts create ideal conditions for visual misinformation, what forms it most commonly takes, and how the public can better recognise and respond to deceptive imagery.

Why do wartime environments create such fertile ground for this kind of visual misinformation? 

Since early last century, visual mis- and disinformation has played a key role in war. Warring nations understand that the way the conflict is perceived can be almost as important as the combat itself, and they know that visual disinformation can skew that perception in their favour. 

Because wartime environments are fast-paced, emotionally charged and often information-deficient, visual evidence carries outsized weight. In this volatile landscape, fake images can more easily go undetected. This interference with the visual record can produce a wide range of significant impacts: military advantage, support for political agendas, financial gain, swayed public sentiment, or the creation of an information fog in which real facts are difficult to discern.

With the growth of technologies in recent decades – digital cameras, image editing, AI, deepfakes – the ability to manipulate public perception with visual mis- and disinformation has never been greater, while the world's ability to deal with fake imagery remains limited.

What types of image manipulation are you seeing most commonly during conflicts like this one, and why are they so effective in deceiving the public? 

Fake images are very persuasive, often because people want to believe, especially when the image is compelling or when little other information is forthcoming. Once a fake image is viewed, it has an impact. We assimilate the meaning of the image almost before we even think about it. We can't 'unsee' it. Even if a wartime disinformation campaign suffers a backlash when viewers learn the images were fake, their influence lingers, especially when amplified by the emotional weight of social media interest. Furthermore, most people will not look at an image long enough to consider investigating it.

Manipulating authentic photos to create fakes is common, such as the 2014 photo purportedly showing the Ukrainian city of Donetsk burning, which was later shown to be a manipulation of a tranquil 2011 photo of the city.

Another type of false image is misappropriation, in which a photo of a previous conflict is used to illustrate a subsequent conflict, such as the viral post using the photo of a burning tank from the 1989 Chinese Tiananmen Square conflict to illustrate a 2014 conflict in Donbass, Ukraine.

AI has raised the stakes in visual mis- and disinformation; new images can simply be described to an AI content generator and brought into being. Just last month, The Netherlands' largest news agency, ANP, deleted more than 1000 images of the Iran conflict from its archives over concerns they had been generated or manipulated with AI. The images were provided to ANP by French news agency ABACA and were ultimately traced to Iranian news agency SalamPix.

What are some practical cues people can look out for?

Manipulated photos often show hallmarks of having been edited, including poor integration of elements and inconsistent shadows. Misappropriated images can be investigated using reverse image search tools (TinEye, Google Lens) to determine whether the image already exists in a different context.

With AI, the more complex the image, the more obvious the fakery is to an observant viewer. This is because many of the individual elements will not look right: buildings with warped or crooked windows, misshapen digits on human hands or animal feet, or objects floating without any visible means of support. A range of AI detector tools have been developed, but benchmarking shows they are not yet reliable.

The most important thing is that people apply critical thinking and scepticism to their judgement of the images they view, especially before sharing them on social media. 

As co-chair of JPEG Trust, can you explain how this standard helps verify the authenticity and provenance of digital images and how it can be valuable during fast moving conflicts? 

Rather than having reactive approaches to fake images, it is much better to have a proactive approach. As Benjamin Franklin said in 1735, an ounce of prevention is worth a pound of cure. Implementing a trust solution to enable proper vetting of images as they are received, especially where they form part of a business pipeline, can assist organisations in assessing the trustworthiness of images before they become a problem.

A trust solution for an organisation is like a gatekeeper as the image arrives, and a safeguard as the image is stored or re-distributed.  Such a solution must be interoperable across organisations and countries, but also needs to recognise that trust is different for different people in different circumstances. The trust context of a photo of a damaged car sent by a spouse to their partner is vastly different to the trust context of that same photo sent to an insurance company.

Differential trust is a key element of the JPEG Trust framework and its underlying specification. Media assets such as images have associated details such as metadata, provenance information, and watermarks. In the JPEG Trust framework, these details (or lack thereof) are turned into indicators of trust that can be meaningfully evaluated against an organisation's trust profile for that media. A simple example: if a purported 'photo' carries no camera metadata but does carry AI content generator metadata, this would be revealed in the trust report.
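The metadata example above can be illustrated with a short sketch. This is not the JPEG Trust API or specification; the field names and generator markers below are assumptions chosen purely for demonstration, showing how the absence of camera capture fields combined with the presence of AI-generator fields could feed a simple trust indicator.

```python
# Illustrative sketch only: NOT the JPEG Trust API. Field names and
# AI-generator markers are assumptions chosen for demonstration.

def trust_indicator(metadata: dict) -> str:
    """Derive a simple trust indicator from EXIF-style metadata fields."""
    # Camera-originated files typically carry capture fields like these.
    camera_fields = ("Make", "Model", "ExposureTime", "FNumber")
    has_camera = any(field in metadata for field in camera_fields)

    # AI content generators often record themselves in a Software-style field.
    software = str(metadata.get("Software", "")).lower()
    ai_markers = ("midjourney", "dall-e", "stable diffusion", "firefly")
    has_ai = any(marker in software for marker in ai_markers)

    if has_ai and not has_camera:
        return "flag: AI-generator metadata, no camera metadata"
    if has_camera and not has_ai:
        return "indicator: camera capture metadata present"
    return "indeterminate: insufficient metadata"
```

A real trust profile would weigh many such indicators together (provenance records, watermarks, signatures); this sketch covers only the single metadata case described above.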

The JPEG Trust framework is now an international standard (ISO/IEC 21617), the first digital media trust solution to achieve this status. As an ISO standard, JPEG Trust supports interoperability of trusted media exchange across businesses, governments, communities and countries. Already, major device makers (Sony, Leica, Canon and Google with the Pixel) are producing cameras that can create JPEG Trust 'ready' photo files.

What needs to happen (from tech companies, government etc.) to see widespread adoption of tools like JPEG Trust so they can meaningfully curb visual misinformation during crises? 

There is much that governments and tech companies can do to re-establish trust online.  Around the world, governments are already trying to stem the rush towards an AI-fuelled ‘dead Internet’ in which no one knows what is real and true. The EU, China, USA and others are developing legislation mandating that AI content is clearly labelled as such through the use of watermarking. The European Commission is working toward interoperable trust across its constituent countries, and to that end has put JPEG Trust on its 2026 Rolling Plan for ICT standardisation.

For tech companies, the business cases are complex but still compelling. The short-term financial incentives for supporting image authenticity are weaker than the profit motive for allowing users to upload images unchecked. In the long term, however, tech companies will need to recognise that without trust and understanding, media assets such as images become meaningless, and the value of digital media will degrade.

To keep both society and these platforms healthy, tech companies need at a minimum to stop practices that subvert our understanding of visual information, such as social media platforms routinely stripping image metadata on upload. At best, tech companies can offer users the ability to display trust reports on media, so that we can live our lives and make our decisions based on real information instead of fake.

In wartime environments, governments, tech companies and communities need to work together to combat visual mis- and disinformation, because in a world where images have become instruments of conflict, carefully crafted visual disinformation, absorbed uncritically into decisions and amplified by misinformed public perception, can turn the tide of a war.


Dr Sabrina Caldwell

Dr Sabrina Caldwell is an expert in online media authenticity and trust.  As a member of the JPEG Committee and Co-Chair of JPEG Trust, she collaborates internationally on establishing standards for trust in our online images and other media.

If you would like to speak to Dr Sabrina Caldwell further on this topic to assist your reporting, please contact UNSW Canberra Media at media.cbr@unsw.edu.au