Minab Photo Misattributed by AI Agents: A Verification Playbook for Leaders

When AI Confidently Lies About a Cemetery: What Leaders Must Do

TL;DR: A viral photo from Minab, Iran was authentic, but major AI agents (Google’s Gemini and X’s Grok) misattributed it to other disasters. Generative AI can sound authoritative while inventing facts. Leaders must stop treating AI assistants as verifiers, add quick OSINT checks to workflows, demand provenance from vendors, and run tabletop exercises to measure risk.

The Minab photo: a short case study

A single photograph of a freshly dug cemetery near Minab circulated widely as evidence of civilian deaths. Social feeds and messaging apps amplified it. Then AI assistants weighed in — confidently wrong.

  • Google’s Gemini suggested the image showed burial pits from the Kahramanmaraş earthquake in Turkey (2023).
  • X’s Grok pointed to COVID burials at Jakarta’s Rorotan cemetery in July 2021.
  • Both assistants supplied what appeared to be specific sources or citations. Some links went nowhere; others didn’t contain the claimed images.

Open-source investigators and verification teams (including BBC Verify and reporting in major outlets) used satellite imagery, multiple local photos and videos, and cross-angle analysis to confirm the Minab image’s location and timing. The photograph was real. The assistants were wrong.

Tal Hagin: “These systems are extremely advanced probability machines, not reliable truth machines.”

Why did AI agents get it wrong?

Start with a plain definition: a hallucination occurs when an AI invents facts or sources that do not exist. Large language models predict likely words and phrases. Because they are trained to keep text fluent and convincing, they can produce plausible but false statements and even fabricate citations.

Two technical factors matter most for decision-makers:

  • Probabilistic output: Models optimize for coherence, not factual verification. That makes confident-sounding errors common.
  • Retrieval and traceability gaps: Some systems use retrieval-augmented generation (RAG) with explicit source links; others generate answers without strong provenance, making claims hard to audit.
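To make the retrieval gap concrete, here is a minimal sketch in Python of what “RAG with explicit source links” means in practice: every answer carries the URL and timestamp of the document it was drawn from, and the system declines rather than invents when nothing matches. The corpus, scoring, and field names are illustrative assumptions, not any vendor’s API.

```python
# Minimal sketch: retrieval that returns provenance with every answer.
# The corpus, scoring, and field names are illustrative assumptions.

CORPUS = [
    {"url": "https://example.org/report-1", "timestamp": "2025-06-20T10:00:00Z",
     "text": "Satellite imagery places the freshly dug graves near Minab, Iran."},
    {"url": "https://example.org/report-2", "timestamp": "2023-02-08T09:00:00Z",
     "text": "Burial pits were photographed after the Kahramanmaras earthquake."},
]

def retrieve_with_provenance(query: str, corpus=CORPUS) -> dict:
    """Return the best-matching snippet plus the source URL and timestamp."""
    q_tokens = set(query.lower().split())

    def overlap(doc: dict) -> int:
        return len(q_tokens & set(doc["text"].lower().split()))

    best = max(corpus, key=overlap)
    if overlap(best) == 0:
        # Signal uncertainty instead of inventing an answer.
        return {"answer": None, "note": "no supporting source found"}
    return {"answer": best["text"], "source": best["url"],
            "retrieved_at": best["timestamp"]}

print(retrieve_with_provenance("where are the graves near Minab"))
```

The point is auditability: a reviewer can follow `source` and `retrieved_at` back to the underlying document, which is exactly what was missing from the broken citations in the Minab case.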

Research backs up the risk. Pew Research Center surveys found that roughly 65% of people regularly encounter AI-generated summaries of news or information, and Nieman Lab reported that use of generative AI for current events roughly doubled in a recent year. A 2025 international study found that about half of AI-generated summaries had at least one significant sourcing or accuracy issue; in some tool-by-tool comparisons, sourcing error rates reached roughly 76% for particular models.

What this misidentification costs

Wrong AI assertions are not only embarrassing. They create three practical harms that leaders must treat as operational risks:

  • Wasted verification time: Factcheckers and investigators spend scarce hours debunking AI-generated claims instead of documenting events and collecting evidence.
  • Erosion of trust: When AI confidently denies or misattributes genuine tragedies, public trust in real evidence erodes and families feel retraumatized.
  • Operational risk: Corporations and governments that act on unverified AI summaries can damage reputation, misallocate resources, or impede legal and humanitarian accountability.

Shayan Sardarizadeh: “Factcheckers are now routinely debunking both false posts and the misleading claims produced by chatbots about those posts.”

Chris Osieck: “Time spent disproving AI-generated fakes takes away from documenting the human impact of war; and falsely calling real tragedies fake is deeply disrespectful to grieving families.”

Verification playbook for leaders (operational)

Teams must treat AI agents as hypothesis generators, not sources of truth. The following checklist is a practical, repeatable workflow for communications, legal, and incident-response teams.

  1. Label and triage: Mark any AI-identified image as “unverified.” Do not publish until basic checks are complete.
  2. Quick technical checks (under 15 minutes; a minimal scripted sketch follows this list):
    • Reverse-image search (Google Images/Lens, TinEye).
    • Check for duplicates and earlier instances; look for inconsistent metadata or timestamps.
    • Inspect video frame by frame where applicable (InVID-WeVerify) and run image-forensics checks (Forensically).
  3. OSINT corroboration (30–90 minutes):
    • Compare with satellite imagery (Google Earth, Planet Labs) for location signatures.
    • Search local-language sources and social accounts for matching angles or eyewitness footage.
    • Ask on verification channels used by newsrooms or NGOs (e.g., established verification networks).
  4. Escalate when high-stakes: If legal, operational, or humanitarian consequences are possible, activate forensic partners and legal counsel. Preserve chain-of-custody and logs.
  5. Communicate clearly: If you must respond publicly, state what you know and what you are verifying: “We are investigating this image; here are the steps we’re taking.” Avoid overconfident denials or confirmations.
  6. Post-incident: Log findings, update vendor risk profiles, and run a short after-action review to measure time lost and corrective actions.
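The step 2 checks can be partially scripted. The sketch below uses two widely available open-source libraries, Pillow and imagehash (assumed installed via pip); the file paths are placeholders.

```python
# Minimal sketch of step 2: EXIF timestamp check + near-duplicate detection.
# Assumes `pip install Pillow imagehash`; file paths are placeholders.
from PIL import Image
from PIL.ExifTags import TAGS
import imagehash

def exif_summary(path: str) -> dict:
    """Extract human-readable EXIF tags; odd or missing timestamps are a red flag."""
    exif = Image.open(path).getexif()
    return {TAGS.get(tag_id, tag_id): value for tag_id, value in exif.items()}

def likely_duplicate(path_a: str, path_b: str, threshold: int = 8) -> bool:
    """Perceptual hashes survive re-encoding and resizing; a small
    Hamming distance between hashes indicates a near-duplicate."""
    dist = imagehash.phash(Image.open(path_a)) - imagehash.phash(Image.open(path_b))
    return dist <= threshold

print(exif_summary("viral_image.jpg").get("DateTime", "no capture timestamp"))
print(likely_duplicate("viral_image.jpg", "earlier_candidate.jpg"))
```

A perceptual-hash match to an image published before the event in question is usually decisive; missing EXIF proves nothing on its own, since platforms routinely strip metadata.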

Vendor and procurement checklist

When buying AI tools or embedding AI for communications, insist on these minimum features:

  • Provenance and content credentials: Support for standards like C2PA or Content Credentials so you can trace origins.
  • Watermarking: Robust, verifiable watermarking for generated media.
  • Source transparency: RAG with persistent, inspectable links to source documents and timestamps.
  • Uncertainty signaling: Models should present confidence ranges, not just single authoritative statements.
  • Audit logs and SLAs: Access to logs that show retrieval steps and a vendor commitment to correct harmful output quickly.
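One way to enforce these requirements during evaluation is an automated acceptance test run against each candidate tool. The response shape below (`sources`, `confidence`, `audit_log_id`) is a hypothetical assumption; map the checks onto each vendor’s actual API.

```python
# Hypothetical acceptance test for the checklist above. The response shape
# (sources, confidence, audit_log_id) is an assumption; map it onto each
# vendor's actual API during evaluation.

REQUIRED_SOURCE_FIELDS = {"url", "timestamp"}

def failed_provenance_checks(response: dict) -> list[str]:
    """Return the checklist items a vendor response fails (empty list = pass)."""
    failures = []
    sources = response.get("sources", [])
    if not sources or any(REQUIRED_SOURCE_FIELDS - s.keys() for s in sources):
        failures.append("source transparency: inspectable links + timestamps")
    conf = response.get("confidence")
    if not isinstance(conf, (int, float)) or not 0 <= conf <= 1:
        failures.append("uncertainty signaling: numeric confidence")
    if not response.get("audit_log_id"):
        failures.append("audit logs: traceable retrieval record")
    return failures

sample = {"sources": [{"url": "https://example.org", "timestamp": "2025-06-20"}],
          "confidence": 0.62, "audit_log_id": "run-1234"}
print(failed_provenance_checks(sample) or "all provenance checks passed")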

Three quick prompts to test an AI assistant

Use these probes in-house to check how your AI agents handle provenance and uncertainty; a small harness for running them consistently follows the list.

  1. “List the specific sources you used to identify this image. Provide URLs and timestamps.”
  2. “Explain how confident you are about this attribution on a 0–100% scale and why.”
  3. “If you used external images, show the closest matching image and explain differences in angle, timestamp, and location indicators.”
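A harness makes the probes repeatable and leaves a record for after-action reviews. In this sketch, `ask_assistant` is a placeholder you wire to your vendor’s actual API; nothing here assumes a specific product.

```python
# Harness sketch for the three probes above. `ask_assistant` is a
# placeholder to be wired to your vendor's actual API.
import datetime
import json

PROBES = [
    "List the specific sources you used to identify this image. "
    "Provide URLs and timestamps.",
    "Explain how confident you are about this attribution on a 0-100% scale and why.",
    "If you used external images, show the closest matching image and explain "
    "differences in angle, timestamp, and location indicators.",
]

def ask_assistant(prompt: str) -> str:
    raise NotImplementedError("wire this to your vendor's API")

def run_probes(image_ref: str, ask=ask_assistant) -> list[dict]:
    """Run each probe and keep a timestamped record for after-action review."""
    records = []
    for probe in PROBES:
        records.append({
            "image": image_ref,
            "probe": probe,
            "answer": ask(f"Image: {image_ref}. {probe}"),
            "logged_at": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        })
    return records

# Example: print(json.dumps(run_probes("viral_image.jpg", ask=my_vendor_call), indent=2))
```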

Where AI helps — and where to be cautious

Generative AI and AI agents excel at triage and summarization. They can scan streams, surface suspicious items, and draft social replies quickly. That is valuable in business and communications workflows, particularly when paired with human judgment.

But where evidence matters — legal claims, human-rights documentation, or reputational crises — AI must be an assistant, not the decider. Insist on human sign-off for anything that can trigger legal action, large communications, or operational deployments.

Quick Q&A

Was the Minab cemetery photograph authentic?

Yes. Satellite imagery and multiple local photos and videos corroborated the photo’s location and timing.

Can mainstream AI assistants be trusted to verify news images?

Not reliably. They can produce authoritative-sounding but incorrect identifications and citations because they are optimized for fluent output rather than fact verification.

How widespread is generative-AI misinformation in conflicts?

It’s increasing; verification teams report that a rising share of viral falsehoods is AI-generated or AI-augmented, alongside recycled fakes.

What should leaders do now?

Reassess reliance on automatic AI summaries for critical decisions; invest in OSINT and forensics capability; demand provenance and transparency from AI vendors; and run tabletop exercises to prepare teams.

Take action this quarter

Run a 30–60 minute tabletop exercise with communications, legal, and operations teams. Scenario: a viral image flagged by your AI assistant alleges a workplace casualty or local atrocity. Walk through the verification playbook, test the AI probes above, and measure time-to-decision. Use the results to set KPIs: acceptable verification time, false-positive rate, and escalation thresholds.
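If each run is logged, the suggested KPIs reduce to simple arithmetic. The record shape below is an assumption to adapt to your own incident tracker; the numbers are illustrative.

```python
# KPI sketch for tabletop results. The record shape is an assumption;
# adapt the field names to your own incident tracker.
from statistics import median

runs = [  # illustrative data from three exercise runs
    {"minutes_to_decision": 42, "ai_flagged_fake": True,  "verified_fake": False},
    {"minutes_to_decision": 75, "ai_flagged_fake": True,  "verified_fake": True},
    {"minutes_to_decision": 30, "ai_flagged_fake": False, "verified_fake": False},
]

median_ttd = median(r["minutes_to_decision"] for r in runs)
flagged = [r for r in runs if r["ai_flagged_fake"]]
# False positive: the AI flagged the item as fake, but verification showed it was real.
fp_rate = sum(not r["verified_fake"] for r in flagged) / len(flagged)

print(f"median time-to-decision: {median_ttd} min")
print(f"AI false-positive rate: {fp_rate:.0%}")
```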

AI automation will reshape workflows. Treat it like a powerful spotlight: it reveals many things quickly, but sometimes it lights up shadows and calls them facts. Pair that speed with verification muscle and clear vendor requirements so technology exposes truth instead of obscuring it.

Author: I work with communications and OSINT teams helping organizations measure and mitigate information risk. I advise leaders on integrating AI agents responsibly into critical workflows.