Every image is scored by five independent detectors.
Each detector weighs the signals that matter most for that specific image and shows exactly which signals influenced its answer.
Proof extracts 80+ signals across three groups: physics, behavior, and metadata. Physics signals measure what the camera sensor leaves in every pixel. Behavior signals detect patterns left by AI editors, social platforms, and processing pipelines. Metadata signals check file structure, compression tables, and encoding fingerprints. Each stage of the imaging pipeline is measured independently, so if any stage is missing or altered, the signals reveal it.
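To make the staged-verification idea concrete, here is a minimal conceptual sketch. It is illustrative only, not Proof's actual implementation: the group names come from the text above, but the signal names, data structures, and pass/fail rule are assumptions for the example.

```python
# Conceptual sketch of independent per-stage verification.
# Signal names and the all-signals-must-pass rule are illustrative.
from dataclasses import dataclass

@dataclass
class SignalCheck:
    group: str      # "physics", "behavior", or "metadata"
    signal: str     # illustrative signal name
    present: bool   # did the expected fingerprint survive?

def verify_stages(checks: list[SignalCheck]) -> dict[str, bool]:
    """Each group passes only if every one of its signals is present."""
    verdict: dict[str, bool] = {}
    for c in checks:
        verdict[c.group] = verdict.get(c.group, True) and c.present
    return verdict

checks = [
    SignalCheck("physics", "sensor_noise", True),
    SignalCheck("physics", "bayer_pattern", True),
    SignalCheck("behavior", "resave_chain", True),
    SignalCheck("metadata", "quant_tables", False),  # altered on re-encode
]

print(verify_stages(checks))
# {'physics': True, 'behavior': True, 'metadata': False}
```

Because each group is evaluated independently, one altered stage (here, the metadata stage) is flagged even when the others look clean.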
AI image generators (ChatGPT, Midjourney, Stable Diffusion, DALL-E) synthesize pixels directly from neural networks. They skip the entire camera pipeline. No sensor noise, no Bayer pattern, no PRNU fingerprint, no hardware JPEG encoder.
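The sensor-noise point can be sketched with a toy example. This is an assumed, simplified check rather than Proof's pipeline: real sensors leave high-frequency noise in every pixel, while purely synthetic pixels can be unnaturally smooth, so a crude high-pass residual measurement already separates the two cases.

```python
# Toy illustration (assumed approach, not Proof's actual signals):
# measure the high-frequency residual left after a 3x3 box blur.
import numpy as np

rng = np.random.default_rng(0)

def residual_energy(img: np.ndarray) -> float:
    """Std-dev of the image minus a 3x3 box-blurred copy (a cheap high-pass)."""
    k = np.ones((3, 3)) / 9.0
    padded = np.pad(img, 1, mode="edge")
    h, w = img.shape
    blurred = sum(
        padded[i:i + h, j:j + w] * k[i, j]
        for i in range(3) for j in range(3)
    )
    return float(np.std(img - blurred))

# "Camera-like": a smooth scene plus Gaussian sensor noise.
scene = np.linspace(0.0, 1.0, 64)[None, :].repeat(64, axis=0)
camera_like = scene + rng.normal(0, 0.02, scene.shape)
# "AI-like": the same scene with no sensor noise at all.
synthetic = scene

# The noise-bearing image carries far more high-frequency residual.
assert residual_energy(camera_like) > residual_energy(synthetic)
```

Production systems use far more robust statistics (PRNU correlation against a known sensor fingerprint, for example), but the underlying principle is the same: the check looks for physics that should be present, not for a generator's style.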
Our 80+ signals across physics, behavior, and metadata measure what should be there from a real camera. When it's missing, we know. This is fundamentally different from AI detectors that try to recognize "what AI looks like," an approach that breaks every time a new generator comes out.
Most AI detectors use neural networks trained to recognize what AI output "looks like," so they fail every time a new generator launches. Proof instead measures 80+ signals across physics, behavior, and metadata to verify the camera pipeline fingerprint in every pixel. Real cameras produce signals that cannot be faked, regardless of which AI tool was used.
Every result includes per-image signal contributions showing exactly which signals drove the verdict. No black box.
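A per-signal contribution report of the kind described above can be sketched as follows. The field names, signal names, weights, and scoring rule here are all hypothetical, chosen only to show how a weighted verdict stays auditable.

```python
# Hypothetical per-image contribution report (names and weights are
# illustrative, not Proof's API). The verdict score is a weighted sum,
# and each signal's share is reported, so nothing is a black box.
def explain(scores: dict[str, float], weights: dict[str, float]):
    contributions = {s: scores[s] * weights[s] for s in scores}
    total = sum(contributions.values())
    ranked = sorted(contributions.items(), key=lambda kv: -abs(kv[1]))
    return total, ranked

scores = {"prnu_match": 0.9, "bayer_trace": 0.8, "c2pa_manifest": 0.0}
weights = {"prnu_match": 0.5, "bayer_trace": 0.3, "c2pa_manifest": 0.2}

total, ranked = explain(scores, weights)
print(f"authenticity score: {total:.2f}")  # authenticity score: 0.69
for signal, contrib in ranked:
    print(f"  {signal}: {contrib:+.2f}")
```

Ranking contributions by magnitude makes it immediately clear which signals drove the verdict for this particular image.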
All uploaded images are automatically deleted after 24 hours. Your data is never sold or shared. If you need an image removed sooner, contact contactus@proofme.ai.