Computer-generated faces have become so realistic that they are hard to tell apart from the real thing. That makes them a useful tool for malicious actors online, who can, for example, use them as profile pictures for fake social media accounts created for fraudulent purposes.
Computer scientists have therefore been looking for fast, simple ways to recognize these images. Hui Guo, a researcher at the State University of New York, and colleagues have found one way to expose the fakes. The weak point of these artificial faces, they report in their study, is the eye.
The technology behind synthetic face generation is a form of deep learning based on generative adversarial networks (GANs). The approach feeds images of real faces into a neural network and asks it to generate faces of its own. These faces are then judged by a second neural network that tries to detect fakes, so the first network can learn from its mistakes.
The back-and-forth battle between these adversarial networks improves the output quickly, so much so that synthetic faces are already difficult to distinguish from real ones. They are not perfect, however. GANs struggle, for example, to reproduce facial accessories such as earrings and glasses, which often come out different on the two sides of the face. The faces themselves, though, look realistic, which makes them difficult to recognize reliably.
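The adversarial loop described above can be sketched with a deliberately tiny toy model. Everything below is an illustrative assumption, not the method from the study: the "faces" are just numbers drawn from a 1-D Gaussian, the generator is a single affine map, and the discriminator is one logistic unit. Real GANs use deep networks on images, but the tug-of-war between the two players is the same.

```python
import numpy as np

rng = np.random.default_rng(0)

def real_batch(n):
    # Stand-in for real face images: numbers clustered around 4.0.
    return rng.normal(loc=4.0, scale=0.5, size=n)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

a, b = 1.0, 0.0   # generator parameters: fake = a * noise + b
w, c = 0.1, 0.0   # discriminator parameters: p(real) = sigmoid(w * x + c)
lr = 0.01

for step in range(3000):
    z = rng.normal(size=32)
    fake = a * z + b
    real = real_batch(32)

    # Discriminator update: push its score toward 1 on real samples
    # and toward 0 on fakes (gradient of binary cross-entropy).
    for x, label in ((real, 1.0), (fake, 0.0)):
        p = sigmoid(w * x + c)
        grad_logit = p - label
        w -= lr * np.mean(grad_logit * x)
        c -= lr * np.mean(grad_logit)

    # Generator update: shift its output so the discriminator scores
    # the fakes as real -- the generator learns from being caught.
    p = sigmoid(w * fake + c)
    grad_fake = (p - 1.0) * w        # chain rule through the logit
    a -= lr * np.mean(grad_fake * z)
    b -= lr * np.mean(grad_fake)

# The generator's mean output (= b, since the noise has mean 0) drifts
# from its starting value of 0 toward the real data's mean of 4.
print("generated mean:", b)
```

Neither player is trained in isolation: each update only makes sense relative to the other network's current state, which is why the quality of the fakes ratchets up over time.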
Guo et al. state that they found an error characteristic of artificially generated faces: GANs do not produce faces with regular, circular or elliptical pupils, and this provides a way to expose them.
The researchers developed software that extracts the shape of the pupil from facial images, then analyzed 1,000 real and 1,000 synthetically generated face images, scoring each image based on the regularity of its pupils.
Real human pupils have strongly regular, elliptical shapes, while the irregular pupil shapes of the synthetic faces produced significantly lower scores. This follows from how generative adversarial networks work: they have no prior knowledge of the structure of human faces. "This phenomenon is caused by a lack of physiological constraints in GAN models," Guo and his research team write.
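The scoring idea can be sketched in a few lines of numpy. This is a simplified stand-in for the study's pipeline, not the authors' code: assuming we already have a binary mask of the pupil region, we fit an ellipse to it from the mask's image moments (for a uniform ellipse, the semi-axes equal twice the standard deviations) and score how well the mask overlaps that best-fit ellipse. A round pupil scores near 1; a jagged one scores much lower.

```python
import numpy as np

def fit_ellipse_mask(mask):
    """Render the moment-fitted ellipse of a binary mask as a new mask."""
    ys, xs = np.nonzero(mask)
    pts = np.stack([xs, ys], axis=1).astype(float)
    mu = pts.mean(axis=0)                 # centroid of the pupil pixels
    inv = np.linalg.inv(np.cov(pts.T))    # inverse covariance = ellipse shape
    h, w = mask.shape
    yy, xx = np.mgrid[0:h, 0:w]
    d = np.stack([xx - mu[0], yy - mu[1]], axis=-1)
    # Mahalanobis distance <= 2 (squared <= 4) recovers the ellipse whose
    # uniform distribution has exactly this covariance.
    m2 = np.einsum('...i,ij,...j->...', d, inv, d)
    return m2 <= 4.0

def pupil_regularity(mask):
    """Intersection-over-union between the pupil mask and its fitted ellipse."""
    ell = fit_ellipse_mask(mask)
    inter = np.logical_and(mask, ell).sum()
    union = np.logical_or(mask, ell).sum()
    return inter / union
```

For example, a clean circular mask scores close to 1.0, while a cross-shaped blob, which no ellipse fits well, scores far lower; thresholding this score is one plausible way to flag irregular pupils automatically.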
It is an interesting result that offers a quick and easy way to recognize synthetic faces, provided the pupils are visible. "With this feature, one can easily visually determine if a face is real or not," the researchers say, adding that it would be straightforward to build a program that performs the check automatically.
However, this immediately suggests how malicious operators could defeat such a test: all they have to do is draw circular pupils onto the synthetic faces they create, a trivial task.
And herein lies the cat-and-mouse game between the creators of fake images and those who try to spot them. This battle, notes Discover magazine, which reported on the study, is far from over.