Christian Berg: The main challenge is victim identification
14 May, 2019 Christian Berg

In the NetClean Report 2018 we looked at a relatively new technology called deepfakes: AI technology used to swap one face for another in moving imagery. We asked whether police officers come across deepfakes in child sexual abuse investigations, and if so, how common that is.

The report showed that one in five police officers had found deepfakes in their investigations, and that three quarters of the surveyed police officers believed the prevalence of deepfakes would increase in the future.

Here, Christian Berg, founder of NetClean, elaborates on the challenges this technology presents, and on what might prevent it from being widely used in the production of child sexual abuse material.

By Christian Berg, NetClean

Deepfakes is a technology that first gained wider recognition in the autumn of 2017, and it was widely discussed when the media picked up on it in December 2017 and the spring of 2018. When I read that almost one in five police officers had seen deepfakes in their investigations, I thought that was a proportionately large number, and that the technology appears to have become widely known quite fast.

Can be used to create material

This is not very surprising. We know that technologies are used to produce child sexual abuse material; a prime example is the role that the internet and darknet have played since their inception. Deepfakes can be used to produce new online child sexual abuse material from already existing material. Offenders can also use the technology to produce material from images of children who have not been subjected to actual sexual assault, which means that in theory any child can become a victim of this abuse.

“The biggest challenge with deepfakes is that they make it difficult to identify children, especially as the deepfake obscures the physically abused child’s real face.”

Time-consuming to produce

However, the risks associated with the use of this technology are offset by the fact that producing this material takes a lot of man-hours and commitment. A large number of images of a person’s face are required to produce a deepfake. In addition, some technical know-how is needed to work out how deepfakes are produced (even if the technology in itself does not require advanced technical knowledge).

Something that also works against a wider use of deepfakes is the fact that child sexual abuse material is readily available online, reducing the need for new material to be created, especially as offenders have a predilection for genuine material over other forms of online child sexual abuse material. However, there is no doubt that investigators will need to handle deepfakes, if they are not doing so already.

Victim identification is a challenge

The biggest challenge with deepfakes is that they make it difficult to identify children, especially as the deepfake obscures the physically abused child’s real face. There is also a risk that investigators waste time looking for the wrong child, or for a child who in reality has not been sexually abused (even though one can argue that the film itself is sexual abuse).

To solve this problem, investigators need ways to identify which films are deepfakes and to find the original film. To do so, they need a good reference library of child sexual abuse films, which can be used to quickly determine whether a film is “real” or not, whether the material is already known, and whether the child in the film has been identified.
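To make the idea of such a lookup concrete, here is a minimal sketch in Python, assuming the reference library is exposed as a simple table mapping file hashes to metadata. The function names and the metadata schema are hypothetical illustrations, not NetClean’s actual interface:

```python
import hashlib
from pathlib import Path


def sha256_of_file(path: Path, chunk_size: int = 1 << 20) -> str:
    """Compute the SHA-256 digest of a file, reading it in chunks."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()


def classify_film(path: Path, reference_library: dict) -> dict:
    """Look a film up in the reference library.

    reference_library maps a hex digest to metadata such as
    {"victim_identified": True, "deepfake": False} -- this schema
    is a placeholder for illustration only.
    """
    record = reference_library.get(sha256_of_file(path))
    if record is None:
        # Unknown material: it needs review and, potentially,
        # victim identification work.
        return {"known": False}
    return {"known": True, **record}
```

An exact cryptographic digest like this only matches bit-identical copies; tools in this field typically also use robust or perceptual hashing so that re-encoded or re-cut copies of a known film still match. The lookup principle, however, is the same.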