AI-Generated Faces Fool Most People, But Photo Training Improves Detection

AI can now produce highly realistic face images (top and middle row) that closely resemble photographs of real people (bottom row).

AI is now able to generate images of faces that many people mistake for real photographs, but new research suggests that five minutes of training can improve detection.

Researchers from the University of Reading, the University of Greenwich, the University of Leeds, and the University of Lincoln in the U.K. tested 664 participants on their ability to distinguish real human faces from images generated by AI software known as StyleGAN3.

In the study, published last month in the journal Royal Society Open Science, the scientists explain that AI-generated faces have reached a level of realism that defeats even “super recognizers,” a small group of people with exceptionally strong facial recognition skills. According to the researchers, super recognizers performed no better than chance when attempting to identify which faces were fake.

Without any training, super recognizers correctly identified AI-generated faces only 41% of the time, below the roughly 50% accuracy that random guessing would produce on a real-or-fake task. Participants with typical face recognition abilities fared even worse, correctly identifying fake faces just 31% of the time.

However, the researchers found that a short training session could significantly improve performance. A separate group of participants received five minutes of instruction highlighting common AI image errors, such as unnatural hair patterns or an incorrect number of teeth. After the training, super recognizers correctly identified fake faces 64% of the time, while participants with typical abilities achieved an accuracy rate of 51%.
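The ~50% chance baseline against which these scores are judged follows from the task being a binary real-or-fake decision. A minimal simulation (purely illustrative; this is not the study's methodology or code) shows why a random guesser converges to about 50% accuracy:

```python
import random

def simulate_guessing(n_images=10_000, seed=42):
    """Simulate a participant guessing at random on a 50/50 mix
    of real and AI-generated faces. With a binary choice, accuracy
    converges to roughly 0.5 as n_images grows."""
    rng = random.Random(seed)
    correct = 0
    for _ in range(n_images):
        truth = rng.choice(["real", "fake"])   # the image's actual label
        guess = rng.choice(["real", "fake"])   # an uninformed guess
        correct += (guess == truth)
    return correct / n_images

print(round(simulate_guessing(), 3))  # close to 0.5
```

Against that baseline, the untrained scores of 41% and 31% are below chance, while the post-training scores of 64% and 51% sit at or above it.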

Dr Katie Gray, the study’s lead researcher from the University of Reading, says the increasing realism of AI-generated faces presents real-world risks.

“Computer-generated faces pose genuine security risks. They have been used to create fake social media profiles, bypass identity verification systems and create false documents,” Gray says in a University of Reading press release. “The faces produced by the latest generation of artificial intelligence software are extremely realistic. People often judge AI-generated faces as more realistic than actual human faces.”

“Our training procedure is brief and easy to implement. The results suggest that combining this training with the natural abilities of super-recognisers could help tackle real-world problems, such as verifying identities online.”


Image credits: Header photos by Gray et al., Royal Society Open Science 12250921 (2025), CC BY 4.0.
