Technologies to stop CSAM: Artificial Intelligence.

In a series of articles we look at some of the technologies that are used to stop child sexual abuse material today. Some are used in NetClean’s products, and some are used by law enforcement and NGOs to find and remove material online. Here we look at Artificial Intelligence.

AI is a rapidly developing technology that shows great promise for investigative work. Unlike hashing technologies (see previous articles), AI classifiers have the potential to recognize new and previously unclassified child sexual abuse material (CSAM).

Artificial neural networks (ANNs) are based on efforts to model information processing in the human brain. AI is often described as learning without explicit instruction or programming: ANNs are adaptive systems that change in response to the external or internal information that flows through the network. In other words, the system learns, and the network infers functions from that learning. This is important for tasks where the complexity of the data makes it difficult to design a solution by hand.
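To make the idea concrete, here is a minimal sketch in Python of a single artificial neuron inferring a toy rule from labelled examples. Everything in it is illustrative: real classifiers are deep networks with millions of weights, but they adapt by the same basic principle of adjusting weights in response to the data that flows through them.

```python
# Minimal sketch: one artificial neuron inferring a rule from data.
# The dataset, rule and parameters are all illustrative.
import numpy as np

rng = np.random.default_rng(0)

# Toy dataset: two features per sample; label is 1 when their sum exceeds 1.
X = rng.random((200, 2))
y = (X.sum(axis=1) > 1.0).astype(float)

w = rng.normal(size=2)  # the weights are the knowledge the network adapts
b = 0.0                 # bias term
lr = 0.5                # learning rate

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Each pass over the data nudges the weights to reduce the prediction
# error, so the function is inferred from examples, not programmed by hand.
for epoch in range(100):
    p = sigmoid(X @ w + b)            # forward pass: current predictions
    grad_w = X.T @ (p - y) / len(y)   # how the loss changes with each weight
    grad_b = (p - y).mean()
    w -= lr * grad_w                  # adapt the weights
    b -= lr * grad_b

accuracy = ((sigmoid(X @ w + b) > 0.5) == y).mean()
print(f"learned weights: {w}, training accuracy: {accuracy:.0%}")
```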

ANNs learn and adapt by assessing data, and to draw the right conclusions they must be trained on high volumes of quality data. An AI application is only as good as the data on which it has been trained: if the data is flawed, the system will draw the wrong conclusions and become inefficient or unhelpful. It is therefore crucial to use high-quality data and to structure the training so that the system draws the right conclusions.
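The effect of flawed data can be shown with a small experiment. The sketch below, using scikit-learn on synthetic data, trains the same model twice: once on clean labels and once with 30% of the training labels deliberately corrupted. The dataset and noise level are illustrative, not drawn from any real system.

```python
# Illustration: the same classifier trained on clean versus flawed labels.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Simulate flawed training data by flipping 30% of the labels.
rng = np.random.default_rng(0)
noisy = y_train.copy()
flip = rng.random(len(noisy)) < 0.30
noisy[flip] = 1 - noisy[flip]

clean_model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
noisy_model = LogisticRegression(max_iter=1000).fit(X_train, noisy)

# The model trained on flawed labels draws worse conclusions on new data.
print("accuracy, trained on clean labels: ", clean_model.score(X_test, y_test))
print("accuracy, trained on flawed labels:", noisy_model.score(X_test, y_test))
```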

AI classifiers are being developed at speed to assist law enforcement investigations, and industry is also developing AI applications to detect CSAM. One example is Google’s AI classifier, which can be used to detect CSAM on networks, services and platforms.

There is also a clear case for businesses and organizations to use this technology. When a child sexual abuse image is detected in an IT environment, other files on the device can be searched with the help of an AI classifier. Searches of this kind can also be scheduled to run across the IT environment, as the sketch below illustrates.
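The following is a hedged sketch of what such a scheduled search might look like. The StubClassifier and its score method are hypothetical placeholders standing in for a real model; nothing here describes NetClean’s actual implementation.

```python
# Sketch: scan an IT environment for images an AI classifier flags.
# StubClassifier is a hypothetical placeholder, not a real model.
from pathlib import Path

IMAGE_SUFFIXES = {".jpg", ".jpeg", ".png", ".gif", ".bmp"}
THRESHOLD = 0.9  # illustrative confidence cut-off for flagging a file

class StubClassifier:
    """Stand-in for a real classifier; a real model would analyse pixels."""
    def score(self, path: Path) -> float:
        return 0.0  # hypothetical probability that the image is CSAM

def scan_directory(root: str, classifier) -> list[tuple[Path, float]]:
    """Return (path, score) for every image scoring above the threshold."""
    flagged = []
    for path in Path(root).rglob("*"):
        if path.suffix.lower() in IMAGE_SUFFIXES:
            score = classifier.score(path)
            if score >= THRESHOLD:
                flagged.append((path, score))
    return flagged

# A scan like this could run on a schedule (e.g. a nightly cron job),
# with anything it flags routed to a human reviewer, never acted on blindly.
flagged = scan_directory(".", StubClassifier())
print(f"{len(flagged)} file(s) flagged for review")
```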

As mentioned above, AI classifiers are unique in that they can detect previously unknown material. This increases the scope for the detection, identification and safeguarding of previously unknown victims. The technology is developing fast and will continue to revolutionize the fight against online child sexual abuse and exploitation.

Still, an AI classifier is not a hundred percent reliable and will make mistakes. It is, in essence, only as good as the data it has been given. It therefore still relies heavily on human verification to ensure that a classification is correct. And when things go wrong, it is often very difficult, if not impossible, to trace why the classifier made a particular mistake.
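In practice, this means keeping a human in the loop. Below is a minimal sketch of confidence-based triage; the thresholds are illustrative, and the point is simply that no automated score leads to action without human verification.

```python
# Sketch: route classifier scores to human review; thresholds illustrative.
def triage(score: float) -> str:
    """Decide the next step for an automated classification."""
    if score >= 0.95:
        return "urgent human review"    # likely match, but still verified
    if score >= 0.50:
        return "standard human review"  # uncertain: a human must decide
    return "no action"                  # below the review threshold

for s in (0.99, 0.70, 0.10):
    print(f"score {s:.2f} -> {triage(s)}")
```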

For all its limitations, AI shows great potential for finding, analyzing and removing child sexual abuse material online. Read next week’s article to learn about keyword matching.