Putting AI systems to the test with ‘Creation Attacks’

How secure is artificial intelligence (AI)? Does a machine perceive its environment differently from humans? Can an algorithm's assessment be trusted? These are some of the questions we are exploring in the project “SuKI – Security for and with artificial intelligence”. The more AI is integrated into our everyday lives, the more important these questions become: when autonomous systems make critical decisions, be it on the roads, in the financial sector or in medicine, being able to trust AI is vital. As part of our ongoing SuKI project, we have now successfully deceived the state-of-the-art object recognition system YOLOv3 [0].

A colorful Fraunhofer logo on a smartphone is classified as a car (see Figure 1). Our attack demonstrates just how easily AI-based recognition methods can be manipulated into detecting objects where none exist. Such a deception could have serious consequences, for example on the road. These so-called “creation attacks” work with arbitrary target objects and can be hidden from human view. We are already applying our findings within the Fraunhofer Cluster of Excellence Cognitive Internet Technologies CCIT in the SmartIO project, which aims to improve intelligent road intersections.

Fig. 1: A colorful Fraunhofer logo on a smartphone is classified as a car.

Attacks from both the inside and the outside

What was our approach? We have been working on so-called “adversarial examples” for several years now. Adversarial examples are specially manipulated images or audio files crafted so that AI algorithms misclassify them. In most cases, the human eye does not notice these adjustments to the image or audio files; the AI, however, identifies exactly what the attackers want it to detect.
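
To make the underlying idea concrete, here is a minimal, hedged sketch of how a classic (digital) adversarial example can be generated with a gradient-based method such as the fast gradient sign method. It is an illustration only, not the specific method used in SuKI; the pretrained ResNet-50 classifier and the perturbation strength epsilon are arbitrary choices for this example.

```python
# Minimal FGSM-style sketch (illustration only, not the SuKI method).
# Assumption: a pretrained torchvision classifier stands in for the
# attacked model; `image` has shape (1, 3, H, W) with values in [0, 1].
import torch
import torch.nn.functional as F
import torchvision.models as models

model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT).eval()

def adversarial_example(image, true_label, epsilon=0.03):
    """Add a small perturbation so the classifier's prediction flips."""
    image = image.clone().requires_grad_(True)
    loss = F.cross_entropy(model(image), true_label)
    loss.backward()
    # Step in the direction that increases the loss for the true class;
    # the change is bounded by epsilon per pixel and barely visible.
    perturbed = image + epsilon * image.grad.sign()
    return perturbed.clamp(0, 1).detach()
```

The crucial point is that the gradient of the model itself tells the attacker how to change each pixel, which makes such attacks systematic rather than trial and error.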

Originally, these attacks were only possible if the input, such as the modified image, was fed directly into the computer. Attacks become far more realistic when they can be carried out remotely, i.e., via images captured by a camera. We have now developed a more refined version of this type of attack, known technically as a “physical adversarial attack”. Building on the work of international researchers [1,2,3], we developed an approach that improves the robustness of the attacks.

Fig. 2: A so-called “adversarial example” misleads AI algorithms into incorrect classifications.

Algorithm analyzes the neural network

We use an algorithm that precisely analyzes the neural network performing the object recognition. Gradient-based methods reveal how the input needs to be adjusted in order to fool the network. During this optimization, an image is generated that the AI assigns to the target object's class, resulting in high detection rates even though no such object is physically present. To keep the manipulation inconspicuous to the human eye, we additionally restricted it to a defined area: in the example shown, only the region around the Fraunhofer logo. The attack also works in the real world because we simulated external variations such as changing lighting conditions and color shifts while generating the attack pattern. During this generation process, we repeatedly varied the input: at each step, the attack was adjusted so that it kept working in different scenarios and under a variety of transformations.
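
The following sketch outlines roughly how such a region-constrained, transformation-robust attack can be set up; it is a simplified illustration, not the actual SuKI pipeline against YOLOv3. The differentiable detector interface `detector(image, target_class)`, the binary `mask` marking the logo region, and the simple brightness and color jitter standing in for the simulated real-world changes are all assumptions made for this example.

```python
# Simplified sketch of a region-constrained, transformation-robust attack
# (illustration only; the real pipeline against YOLOv3 is more involved).
# Assumptions: `detector(image, target_class)` returns a differentiable
# confidence score for the target class; `mask` is a binary tensor that
# marks the region (e.g. the logo area) that may be modified.
import torch

def craft_patch(detector, background, mask, target_class, steps=500, lr=0.01):
    patch = torch.rand_like(background, requires_grad=True)
    optimizer = torch.optim.Adam([patch], lr=lr)

    for _ in range(steps):
        # Only the masked region is replaced by the adversarial pattern.
        image = background * (1 - mask) + patch.clamp(0, 1) * mask

        # Simulate real-world variation (brightness and color shifts)
        # so the pattern keeps working under changing conditions.
        brightness = 0.7 + 0.6 * torch.rand(1)
        color_shift = 0.9 + 0.2 * torch.rand(3, 1, 1)
        transformed = (image * brightness * color_shift).clamp(0, 1)

        # Maximize the detector's confidence for the target class
        # (e.g. "car"), even though no such object is present.
        confidence = detector(transformed.unsqueeze(0), target_class)
        loss = -confidence
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

    return patch.clamp(0, 1).detach()
```

Sampling random transformations during the optimization follows the “expectation over transformation” idea from [2]: the more realistic the simulated variations, the more robust the resulting pattern is in front of a real camera.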

The same procedure works for other target objects: the colorful logo can, for example, be crafted so that the detector sees a pedestrian or a street sign instead of a car.

But why is this relevant? Attacks of this type can have critical safety implications, for example for autonomous vehicles: attackers could manipulate the driving behavior of another car by using an image on a smartphone to deceive the vehicle's object recognition system.

Identifying research requirements and fighting off attacks

We want to use this concrete application example to draw attention to the need for further research into these technologies. Beyond autonomous vehicles, the SuKI research project is investigating other fields of application, such as the use of artificial intelligence in speech recognition systems and in access control. More specifically, we are looking at how these kinds of attacks can be blocked by making AI processes resistant to them. The corresponding research results have already been presented at high-level conferences [4-7].

[0] REDMON, Joseph; FARHADI, Ali. YOLOv3: An Incremental Improvement. arXiv preprint arXiv:1804.02767, 2018.

[1] CHEN, Shang-Tse, et al. ShapeShifter: Robust Physical Adversarial Attack on Faster R-CNN Object Detector. In: Joint European Conference on Machine Learning and Knowledge Discovery in Databases. Springer, Cham, 2018. pp. 52-68.

[2] ATHALYE, Anish, et al. Synthesizing Robust Adversarial Examples. In: International Conference on Machine Learning (ICML), 2018. pp. 284-293.

[3] CARLINI, Nicholas; WAGNER, David. Towards Evaluating the Robustness of Neural Networks. In: 2017 IEEE Symposium on Security and Privacy (S&P). IEEE, 2017. pp. 39-57.

[4] SCHULZE, Jan-Philipp; SPERL, Philip; BÖTTINGER, Konstantin. DA3G: Detecting Adversarial Attacks by Analysing Gradients. In: European Symposium on Research in Computer Security. Springer, Cham, 2021. pp. 563-583.

[5] SPERL, Philip; BÖTTINGER, Konstantin. Optimizing Information Loss Towards Robust Neural Networks. arXiv preprint arXiv:2008.03072, 2020.

[6] DÖRR, Tom, et al. Towards Resistant Audio Adversarial Examples. In: Proceedings of the 1st ACM Workshop on Security and Privacy on Artificial Intelligence. 2020. pp. 3-10.

[7] SPERL, Philip, et al. DLA: Dense-Layer-Analysis for Adversarial Example Detection. In: 2020 IEEE European Symposium on Security and Privacy (EuroS&P). IEEE, 2020. pp. 198-215.

Additional Information
Authors
Karla Pizzi

Karla Pizzi has been working as a research fellow at Fraunhofer AISEC since 2018 following her studies in mathematics, political science and computer science.

She is currently working in the Cognitive Security Technologies research department, with a particular focus on adversarial examples to protect artificial intelligence systems from being manipulated.

Jan-Philipp Schulze

Jan-Philipp Schulze joined Fraunhofer AISEC as a research fellow in January 2019 after having studied electrical engineering and information technology at ETH Zurich. He is also completing his PhD in computer science at TU Munich. His research work in the Cognitive Security Technologies research department focuses on anomaly detection and adversarial machine learning.

