AI image generators like Midjourney, DALL·E 3, and Stable Diffusion are producing increasingly realistic images. Whether you're a journalist verifying a source, a social media manager screening content, or simply trying to tell real from fake — knowing how to spot AI-generated images is a critical modern skill.
This guide covers both visual tells you can spot with your eyes and tool-based detection methods for higher-certainty verification.
Visual Tells: What to Look For
1. Hands and Fingers
Despite massive improvements, hands remain AI's Achilles heel. Look for: extra fingers, missing fingers, fingers that merge together, thumbs on the wrong side, or hands that seem to dissolve into objects they're holding. This is often the fastest visual check.
2. Text and Lettering
AI-generated text often fails under scrutiny, though newer models are improving. Signs, book covers, t-shirt prints, and street names frequently contain garbled or meaningless characters. If you can read every word in an image clearly, it's more likely to be a real photo (or one that was very carefully inpainted).
3. Reflections
Check mirrors, sunglasses, water surfaces, and shiny objects. In real photos, reflections are physically consistent with the scene. AI often generates reflections that show a completely different scene, or that don't match the geometry of the reflecting surface.
4. Backgrounds and Edges
Zoom into background details. AI images frequently exhibit "melting" architecture, trees that blend into buildings, windows at impossible angles, or repeated patterns that shift unexpectedly. Edges where subjects meet backgrounds may show unnatural blending or halos.
5. Skin and Hair Texture
AI-generated portraits often have unnaturally smooth skin with a "plastic" quality. Hair strands may merge together or terminate abruptly. Earrings and jewellery are often asymmetric or physically impossible.
6. Lighting Consistency
In real photos, all objects share the same light source. AI sometimes lights different parts of an image inconsistently — shadows pointing in different directions, or highlights that don't match the overall scene lighting.
Quick checklist for visual detection:
- ✓ Zoom in on hands — count fingers, check anatomy
- ✓ Read all visible text — is it coherent?
- ✓ Check reflections — do they match the scene?
- ✓ Inspect backgrounds at 100% zoom
- ✓ Look for skin/hair unnaturalness
- ✓ Verify lighting direction consistency
Tool-Based Detection Methods
AI Image Detectors
Dedicated AI image detection tools analyse pixel-level patterns that are invisible to the human eye. These tools look for statistical signatures left by generation models — frequency domain anomalies, GAN fingerprints, and diffusion model artifacts.
For a deeper dive on how these detectors work and their limitations, see our AI Image Detector Guide.
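As a toy illustration of the frequency-domain idea, the sketch below measures what share of a 1-D signal's spectral energy sits in the upper frequency bands, using a naive DFT. Real detectors operate on 2-D transforms of whole images and feed such statistics into trained classifiers; the function name and the band split here are purely illustrative assumptions.

```python
import cmath

def high_freq_ratio(signal):
    """Toy frequency-domain feature: fraction of spectral energy in
    the upper half of the (non-redundant) frequency range, via a
    naive O(n^2) DFT. Illustrative only; real detectors use 2-D
    transforms and learned classifiers on top of such statistics."""
    n = len(signal)
    spectrum = [
        abs(sum(signal[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                for t in range(n)))
        for k in range(n // 2)            # keep the non-redundant half
    ]
    total = sum(s * s for s in spectrum) or 1.0
    high = sum(s * s for s in spectrum[n // 4:])
    return high / total
```

A smooth gradient scores near 0, while rapidly oscillating pixel rows score near 1; detectors look for ratios that deviate from what real camera sensors produce.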
Metadata Analysis
Real photographs typically contain rich EXIF metadata: camera model, lens, shutter speed, ISO, sometimes GPS coordinates. AI-generated images usually have minimal or generic metadata. Use an EXIF Viewer to check: if a "photo" has no camera information, that's a red flag, though not proof on its own, since many social platforms strip EXIF data on upload.
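As a minimal sketch of the idea, the hypothetical helper below (not part of any EXIF Viewer) scans a JPEG byte stream for the APP1 "Exif" segment where camera metadata lives. In practice you would use a full EXIF parser such as Pillow or exiftool; this only answers "is there any EXIF block at all?"

```python
def has_exif_segment(data: bytes) -> bool:
    """Scan a JPEG byte stream for an APP1 'Exif' segment.

    Absence of EXIF is only a weak signal on its own, since
    platforms often strip metadata on upload."""
    if data[:2] != b"\xff\xd8":          # no SOI marker: not a JPEG
        return False
    i = 2
    while i + 4 <= len(data):
        if data[i] != 0xFF:
            break                         # lost marker sync; stop
        marker = data[i + 1]
        if marker == 0xD9:                # EOI: end of image
            break
        length = int.from_bytes(data[i + 2:i + 4], "big")
        if marker == 0xE1 and data[i + 4:i + 10] == b"Exif\x00\x00":
            return True
        i += 2 + length                   # skip to the next segment
    return False
```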
Reverse Image Search
Sometimes the most effective check is the simplest: does this exact image exist elsewhere? A reverse search can reveal if an image was created by AI and shared on platforms where it was flagged, or if a very similar (but not identical) AI-generated variant exists.
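Near-duplicate matching of this kind can be sketched with a perceptual hash. The toy "average hash" below reduces an 8x8 grayscale grid to 64 bits and compares images by Hamming distance; real reverse-image services use far richer features, so treat this purely as an illustration of why slightly altered variants still match.

```python
def average_hash(pixels):
    """Average hash ('aHash') over an 8x8 grayscale grid: each bit
    records whether a cell is brighter than the grid's mean."""
    flat = [v for row in pixels for v in row]
    mean = sum(flat) / len(flat)
    return sum(1 << i for i, v in enumerate(flat) if v > mean)

def hamming(a, b):
    """Number of differing bits between two hashes; a small distance
    suggests the images are variants of one another."""
    return bin(a ^ b).count("1")
```

A re-encoded or lightly edited variant changes only a few bits, so its hash stays close to the original's, which is how near-duplicate lookups survive small alterations.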
The Multi-Signal Verification Workflow
No single method is foolproof. For reliable detection, combine multiple signals:
- Visual inspection — check hands, text, reflections, backgrounds (30 seconds)
- Metadata check — look for camera data in EXIF (15 seconds)
- AI detector scan — run through a detection tool (30 seconds)
- Provenance check — reverse search for the image source (1 minute)
- Context analysis — does the image make sense in context? (varies)
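The workflow above can be sketched as a simple weighted aggregation of signals. The weights, scores, and thresholds below are illustrative assumptions, not calibrated values; a real verification decision should remain with a human reviewer.

```python
def combine_signals(signals):
    """Hypothetical aggregation: each signal is a (weight, score)
    pair with score in [0, 1], where 1 means 'looks AI-generated'.
    Thresholds are illustrative, not calibrated."""
    total_w = sum(w for w, _ in signals)
    score = sum(w * s for w, s in signals) / total_w
    if score >= 0.7:
        return "likely AI-generated"
    if score <= 0.3:
        return "likely authentic"
    return "inconclusive: gather more context"
```

For example, a strong detector score plus missing EXIF plus visual anomalies pushes the aggregate toward "likely AI-generated", while any single weak signal alone lands in the inconclusive band.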
For a broader approach to verifying online images, see our guide on social media image verification.
The Arms Race: AI Generation vs. Detection
It's important to understand that AI generation and detection are locked in an ongoing arms race. As detectors improve, generators adapt, and tools like Anti-AI converters explicitly try to remove detection signatures. This means:
- No detector will ever be 100% accurate
- Visual tells that work today may be fixed in next-generation models
- The multi-signal approach remains the most robust strategy
- Provenance and context will become increasingly important
When Detection Matters Most
Not every AI image needs to be "caught". AI art, illustrations, and creative content are perfectly legitimate uses. Detection matters most when:
- News and journalism — verifying photo evidence for reporting
- Social media — identifying misinformation and fake profiles
- Legal and forensic — evidence authenticity in court proceedings
- Academic integrity — verifying original work in education
- Dating and identity — spotting fake profile photos
Frequently Asked Questions
Will AI images become completely undetectable?
It's unlikely that all detection methods will fail simultaneously. While pixel-level artifacts may disappear, metadata analysis, provenance tracking, and contextual verification will remain viable. The field is evolving rapidly on both sides.
Should I trust an AI detector score of 50%?
A 50% score is essentially inconclusive. Treat it as one data point among many. Cross-reference with visual inspection and metadata analysis before drawing conclusions.