This International Fact-Checking Day, use these 5 tips to spot AI-generated content
Artificial intelligence-generated content is everywhere these days, making it increasingly difficult to separate fact from fiction, particularly when it comes to breaking news.
Look no further than the Iran war. Since the U.S. and Israel attacked Iran on Feb. 28, researchers have identified an unprecedented number of false and misleading images that were generated using artificial intelligence and have reached countless people around the world. Among them, fake footage of bombings that never happened, images of soldiers who were supposedly captured and propaganda videos created by Iran that depict President Donald Trump and others as blocky, Lego-like miniatures.
Thursday, the 10th annual International Fact-Checking Day, provides a good opportunity to look at these evolving challenges.
Misinformation created with AI is being shared with unprecedented speed from an endless number of sources. From the outset of the Iran war, accounts from all sides of the conflict promoted such content.
The Institute for Strategic Dialogue, which tracks disinformation and online extremism, has been examining social media posts around the Iran war. Among its findings: roughly two dozen X accounts, many with blue-check verification, that regularly post AI-generated content collectively gained more than 1 billion views since the conflict began.
Here are some tips for distinguishing AI-generated content from reality in an online world where doing so continues to get harder.
Look for visual cues
When AI-generated images first began spreading widely online, there were often obvious tells that could identify them as fabricated. Perhaps a person had too few — or too many — fingers or their voice was out of sync with their mouth. Text may have been nonsensical. Objects were frequently distorted or missing key components. As the technology continues to evolve, these clues aren’t as common as they once were, but it’s still worth looking for them. Watch for inconsistencies such as a car that is in a video one moment and gone the next or actions that aren’t possible according to the laws of physics. Some images may also be overly polished or have an unnatural sheen.
Seek out a source
AI-generated images get shared over and over again. One way to determine their authenticity (or lack thereof) is to hunt for their origin. Using a reverse image search is a simple way to do this. If you’re looking at a video, take a screenshot first. This can lead to a social media account that specifically generates AI content, an older image that is being misrepresented, or something entirely unexpected.
Listen to the experts
Look for multiple verified sources that can help authenticate the image. For example, that can mean a fact-check from a reputable media outlet, a statement from a public figure, or a social media post from a misinformation expert. These sources may have more advanced techniques for identifying AI-generated content, or access to information about the image that is not available to the general public.
Make use of technology
There are many AI detection tools that can be a helpful place to start. But be wary, as they are not always correct in their assessments. Images that have been generated or altered with AI using Google’s Gemini app include an invisible digital watermark called SynthID, which the app can detect. Other AI creation tools have added visible watermarks to the content they generate. Visible watermarks are often easy to remove, though, meaning the absence of a watermark is not proof that an image is genuine.
Slow down
Sometimes it’s just about going back to basics. Stop, take a breath and don’t immediately share something you don’t know is real. Bad actors are often counting on the fact that people let their emotions and existing viewpoints guide their reactions to content. Looking at the comments may provide clues about whether the image you’re looking at is real or not. Another user might have noticed something you didn’t, or been able to find the original source. Ultimately, though, it’s not always possible to determine with 100% accuracy whether an image is AI-generated, so remain alert to the possibility that it might not be real.
See something that looks false or misleading? Email us at [email protected].
Find AP Fact Checks here: https://apnews.com/APFactCheck.