These days, when you browse online, you are likely to come across content generated by AI (Artificial Intelligence). For example, when shopping you might interact with customer service chatbots, or when browsing Instagram or TikTok you might see AI-generated photos.
Sometimes people’s online interactions with AI are pretty funny…
But how good are people at detecting when AI is being used?
To test this question, we showed people many different images of White faces and asked them to decide whether each face was real or generated by AI. Half the images were of real faces, while half were AI-generated. If participants guessed randomly, we would expect them to be correct about 50% of the time (a bit like flipping a coin!).
Instead, people were systematically wrong: they were more likely to say the AI-generated faces were real than the human ones. On average, people labelled about 2 out of 3 of the AI-generated faces as human.
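For readers who like to tinker, here is a minimal illustrative sketch (not the study's actual analysis; the trial count and labels are made up) of what chance-level guessing looks like in this kind of task. A random guesser lands near 50%, both for overall accuracy and for how often AI faces get called "real", in contrast with the roughly 2 in 3 AI faces our participants labelled as human.

```python
# Illustrative simulation of random guessing on a real-vs-AI face task.
# All numbers are hypothetical; this is not the study's analysis code.
import random

random.seed(1)
n_faces = 10_000                                          # hypothetical number of trials
truth = ["real", "ai"] * (n_faces // 2)                   # half real faces, half AI-generated
guesses = [random.choice(["real", "ai"]) for _ in truth]  # coin-flip responses

accuracy = sum(t == g for t, g in zip(truth, guesses)) / n_faces
ai_called_real = sum(g == "real" for t, g in zip(truth, guesses) if t == "ai") / (n_faces // 2)

print(f"Overall accuracy from random guessing: {accuracy:.0%}")               # ~50%
print(f"AI faces labelled 'real' by a random guesser: {ai_called_real:.0%}")  # ~50%
```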
Using data from another study, which also tested Asian and Black faces, we found that only the White AI-generated faces looked hyperreal. When asked to decide whether the Asian and Black faces were human or AI-generated, participants guessed correctly about half the time, which is more like random guessing.
This means the White AI-generated faces tested in our study look more real than AI-generated Asian and Black faces, and even more real than White human faces. The bias we found likely stems from the fact that AI algorithms, including the one we tested, are often trained mostly on images of White faces. Such biases in algorithmic training can have serious implications (for example, for the safety of self-driving cars).
What does this mean for the future?
Overall, we think that both AI companies and governments have a duty of care to make sure that these new technologies don't inadvertently discriminate against certain groups of people. Governments grappling with this technological advance also have a duty to make sure that AI technology does not disrupt the democratic process.
More generally, the increasing realism of AI-generated information also raises questions about our own ability to recognise when AI is being used.
One clear message is to take a wee moment to stop and think before making any important decisions online (for example, before transferring money to someone). This advice is not new (see, for example, the government's Stop! Think Fraud campaign), but it might stop us from falling for an online trick.
Interested to know more?
We wrote more about our findings for the public here: Can you spot the AI imposters?
You can read the scientific paper here: AI hyperrealism: Why AI faces are perceived as more real than human ones.
This research was conducted by an international team of researchers from Australia (Australian National University), Canada (University of Toronto) and the UK (University of Aberdeen, UCL).
Dr Clare Sutherland, School of Psychology