THE EXPERIENCE AND PERSPECTIVE OF THE POLICE

Technology development
– One in five police officers have encountered deepfakes

In previous reports we have looked at new trends that police officers encounter in their investigations. In this year’s report we have researched technology development and how quickly it manifests in child sexual abuse investigations. We look more closely at a new technique, deepfakes, which was widely publicised in the media in Spring 2018. Deepfakes use machine-learning technology, or AI, to swap one person’s face for another’s in moving imagery. Here we have looked at whether this technology has reached the realm of child sexual abuse material.

One in five police officers have encountered deepfakes

The large majority (more than 80 percent) of the surveyed police officers report that they have not encountered deepfakes in their investigations; fewer than one fifth report that they have. Of those who have encountered deepfakes, a quarter report that they are common or very common in their investigations, while just over 40 percent report that they are uncommon or very uncommon.

“More so seeing cartoon child pornography”

“It is very common to see photoshopped and deepfake videos and images on cases.”

Percentage of police officers who have encountered deepfakes:

A belief that it will increase

Three quarters of the surveyed police officers believe that the prevalence of deepfakes will increase in the future; there were, however, responses that contradicted this belief.

“As it becomes more known it will increase like everything else with technology”

“If there is an easy way to do it someone will use it to produce new material”

“Since most of our cases involve subjects that have average or less than average computer skills, I do not believe that this will become prevalent.”

“Deep fakes are like the old pseudo images. Offenders sometimes take pics of family members placing them on porn images and reinforcing their cognitive distortions to reinforce their sexual attraction. This is the same here. There will be no doubt this will happen but in the paedophile world, REAL is key to success.”

Percentage who believe that deepfakes will become more common in the future: 

How common deepfakes are in investigations:

(Response by police officers who have encountered deepfakes)

Identifying victims is a challenge

The surveyed police officers point to a number of challenges that could arise if deepfakes become more prevalent in the future. One problem is that any child could feature in sexual abuse material, as long as the offender has had access to enough images of that child (creating a deepfake requires a large number of images of a person, taken from many angles).

“All and everyone are now possible victims, videos from social media enable children to be victimized without having been abused.”

The main challenge that police officers currently face is, however, identifying victims. There is also a jurisdictional problem, as computer-generated material is not illegal in all countries.

“Difficult to distinguish fake from true content. Is there a real victim/child or just manipulated content?”

“Victim identification will become harder since the body might not match the victim. Legal issues with what kind of damage has been done to the child.”

“The countries where the mandate requires a real child to have been abused would not have an offence for this material.”

Comment on insight 6:

“The main challenge is victim identification”

Christian Berg
Founder, NetClean

The main challenge is victim identification

Deepfakes is a technology that became more widely known last Autumn, and was discussed when the media picked up on it in December 2017 and in Spring this year. When I read that almost one in five police officers have seen deepfakes in their investigations, I thought that was a proportionately large number, and that the technology seems to have become widely known quite fast.

Can be used to create material

This is not very surprising. We know that technologies are used to produce child sexual abuse material; a prime example is the role that the internet and darknet have played since their inception.

Deepfakes can be used to produce new online child sexual abuse material from already existing material. Offenders can also use this technology to produce material from images of children who have not been subjected to actual sexual assault, which means that in theory any child can become a victim of this abuse.

Time-consuming to produce

However, the risks associated with this technology are offset by the fact that it takes many hours of work and considerable commitment to produce this material. A large number of images of a person’s face are required to produce a deepfake. In addition, some technical know-how is needed to find out how deepfakes are produced (even if the technology in itself does not require advanced technical knowledge).

Another factor that works against a wider use of deepfakes is that child sexual abuse material is readily available online, removing the need to create new material, especially as offenders have a predilection for genuine material above other forms of online child sexual abuse material.

However, there is no doubt that investigators will need to handle deepfakes as part of their investigations, if they are not already doing so.

Victim identification is a challenge

The biggest challenge with deepfakes is that they make it difficult to identify children, as the deepfake replaces the physically abused child’s real face. There is also a risk that investigators waste time looking for the wrong child, or for a child who has not in reality been sexually abused (even though one can argue that the film itself constitutes sexual abuse).

To solve this problem, investigators need ways to identify which films are deepfakes and to trace the original film. To do so, they need a good reference library of child sexual abuse films, which can be used to quickly determine whether a film is “real” or not, whether the material is already known, and whether the child in the film has been identified.
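
As a rough illustration of how such a reference library could be queried, the sketch below matches a frame against a set of known hashes. It assumes a simple perceptual “average hash” compared by Hamming distance, so that re-encoded or lightly edited copies of known material can still match; this is an illustrative assumption, not a description of NetClean’s actual hash technology, and the function names and example labels are hypothetical.

from PIL import Image  # requires the Pillow imaging library

def average_hash(path: str, size: int = 8) -> int:
    """Reduce an image to a 64-bit perceptual hash: shrink to 8x8
    grayscale, then set each bit by comparing a pixel to the mean."""
    img = Image.open(path).convert("L").resize((size, size), Image.LANCZOS)
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

def hamming(a: int, b: int) -> int:
    """Count the bits that differ between two hashes."""
    return bin(a ^ b).count("1")

def match_against_library(frame_path, library, max_distance=5):
    """Return the label of the closest known frame, or None if the
    material is unknown and should be queued for manual review.
    `library` maps known hashes to labels such as 'known, victim
    identified' (hypothetical labels, for illustration only)."""
    h = average_hash(frame_path)
    best, best_dist = None, max_distance + 1
    for known_hash, label in library.items():
        d = hamming(h, known_hash)
        if d < best_dist:
            best, best_dist = label, d
    return best

A lookup like this only answers whether material is already known; distinguishing a deepfake from its original film would additionally require comparing near-matches frame by frame, which is why a comprehensive, well-annotated reference library matters.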

NetClean

NetClean is a world-leading developer of technologies that protect IT environments from child sexual abuse material. Using hash technology, NetClean’s products detect child sexual abuse material on work computers.