Putting AI systems to the test with ‘Creation Attacks’

How secure is artificial intelligence (AI)? Does a machine perceive its environment differently than humans do? Can an algorithm’s assessment be trusted? These are some of the questions we are exploring in the project “SuKI — Security for and with artificial intelligence”. The more AI is integrated into our everyday lives, the more important these questions become: When critical decisions are taken by autonomous systems, whether on the road, in the financial sector or in medicine, being able to trust AI is vital. As part of our ongoing SuKI project, we have now successfully deceived the state-of-the-art object recognition system YOLOv3 [0].

A colorful Fraunhofer logo on a smartphone is classified as a car (see Figure 1). Our attack demonstrates just how easy it is to manipulate AI-based recognition methods into detecting objects where none exist. This deception could have serious consequences, for example on the road. These so-called “creation attacks” work with any object and can be hidden from human view. We are already applying our findings within the Fraunhofer Cluster of Excellence Cognitive Internet Technologies CCIT in the SmartIO project, which aims to improve intelligent road intersections.

Fig. 1: A colorful Fraunhofer logo on a smartphone is classified as a car.

Attacks from both the inside and the outside

What was the team’s approach? We have been working on so-called “adversarial examples” for several years now. Adversarial examples are specially crafted images or audio files whose subtle manipulations cause AI algorithms to make incorrect classifications. In most cases, the human eye does not notice these adjustments to the image or audio files; the AI, however, detects exactly what the attackers want it to detect.
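To illustrate the basic idea, here is a minimal sketch of how a single gradient step, in the style of the well-known fast gradient sign method, can turn a correctly classified image into an adversarial example. The model, image and label are generic placeholders; this illustrates the concept only, not the exact procedure used in SuKI.

```python
# Minimal sketch of a gradient-based adversarial example (FGSM-style).
# `model`, `image` and `true_label` stand for any PyTorch image classifier
# and its input; illustrative only, not the project's actual code.
import torch.nn.functional as F

def adversarial_example(model, image, true_label, epsilon=0.03):
    """Perturb `image` slightly so the classifier misclassifies it."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), true_label)
    loss.backward()
    # Step in the direction that increases the loss; keep pixel values valid.
    perturbed = image + epsilon * image.grad.sign()
    return perturbed.clamp(0.0, 1.0).detach()
```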

Originally, these attacks were only possible if the manipulated input, such as the modified image file, was fed directly into the computer. Attacks that also work remotely, i.e., on images captured by a camera, are far more realistic. We have now developed a more refined version of this type of attack, known technically as a “physical adversarial attack”. Drawing on the work of international researchers [1,2,3], we developed an approach that improves the robustness of these attacks.
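One common ingredient in making attacks survive the camera is to simulate, while the attack pattern is being generated, the kind of variation a camera introduces. The short sketch below uses standard torchvision transforms to model lighting, color and perspective changes; it is an illustrative assumption, not the transformation set used in the project.

```python
# Illustrative simulation of camera-induced variation (lighting, color,
# perspective) with standard torchvision transforms; the transformations
# used in the actual project may differ.
import torchvision.transforms as T

random_transform = T.Compose([
    T.ColorJitter(brightness=0.4, contrast=0.4, saturation=0.4),
    T.RandomPerspective(distortion_scale=0.3, p=1.0),
])
```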

Fig. 2: A so-called adversarial example leads AI algorithms to make incorrect classifications.

Algorithm analyzes the neural network

We use an algorithm that precisely analyzes the neural network performing the object recognition. Gradient-based methods reveal how the input needs to be adjusted in order to fool the network. During this optimization, we create an image that the AI assigns to the target object’s class, resulting in high detection rates even though no such object is physically present. To keep the attack hidden from human observers, we also restricted the manipulation to a defined area: in the example shown, only the region around the Fraunhofer logo. The attack also works in the “real world” because we simulated external influences such as changing lighting conditions and color shifts while generating the attack pattern. We refreshed the input repeatedly during this generation process: at each step, the attack was adjusted so that it retained its effect across different scenarios and under a variety of transformations.
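The sketch below outlines how such a region-restricted, transformation-robust attack pattern could be optimized, in the spirit of the expectation-over-transformation idea [2]. The detector loss, background image, region mask and transformation function (for example the one sketched above) are assumed inputs; this shows the general principle rather than the project’s actual implementation.

```python
# Illustrative optimization loop for a region-restricted, transformation-
# robust attack pattern; `detector_loss`, `random_transform`, `background`
# and `mask` are assumed inputs, not the project's actual code.
import torch

def optimize_patch(detector_loss, random_transform, background, mask,
                   target_class, steps=1000, lr=0.01):
    """Optimize the pixels inside `mask` so the detector reports `target_class`."""
    patch = torch.rand_like(background, requires_grad=True)
    optimizer = torch.optim.Adam([patch], lr=lr)
    for _ in range(steps):
        # Paste the patch into the permitted region only (e.g. the logo area).
        image = background * (1 - mask) + patch.clamp(0, 1) * mask
        # Simulate real-world variation such as lighting and color changes.
        transformed = random_transform(image)
        # A low loss means the detector confidently "sees" the target object.
        loss = detector_loss(transformed, target_class)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    return patch.clamp(0, 1).detach()
```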

The same procedure works for other target objects: For example, the colorful logo can be transformed into a pedestrian or a street sign instead of a car.
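With the sketch above, switching the target simply means passing a different class index to the optimization; the inputs and class identifiers below are hypothetical placeholders.

```python
# Hypothetical re-targeting of the sketch above; CAR_ID and PERSON_ID are
# placeholder class indices of the attacked detector.
car_patch = optimize_patch(detector_loss, random_transform,
                           background, mask, target_class=CAR_ID)
pedestrian_patch = optimize_patch(detector_loss, random_transform,
                                  background, mask, target_class=PERSON_ID)
```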

But why is this relevant? Attacks of this type can have critical safety implications, for example for autonomous vehicles: Attackers could influence the driving behavior of another car simply by using an image displayed on a smartphone to deceive the vehicle’s object recognition system.

Identifying research requirements and fighting off attacks

We want to use this concrete application example to draw attention to the need for further research into these technologies. In addition to autonomous vehicles, the SuKI research project is investigating other fields of application, such as the use of artificial intelligence in speech recognition systems and in access control. More specifically, we are looking at how these kinds of attacks can be blocked by making AI processes resistant to them. The corresponding research results have already been presented at leading conferences [4-7].

[0] REDMON, Joseph; FARHADI, Ali. YOLOv3: An Incremental Improvement. arXiv preprint arXiv:1804.02767, 2018.

[1] CHEN, Shang-Tse, et al. ShapeShifter: Robust Physical Adversarial Attack on Faster R-CNN Object Detector. In: Joint European Conference on Machine Learning and Knowledge Discovery in Databases. Springer, Cham, 2018. pp. 52-68.

[2] ATHALYE, Anish, et al. Synthesizing Robust Adversarial Examples. In: International Conference on Machine Learning. ICML, 2018. pp. 284-293.

[3] CARLINI, Nicholas; WAGNER, David. Towards Evaluating the Robustness of Neural Networks. In: 2017 IEEE Symposium on Security and Privacy (SP). IEEE, 2017. pp. 39-57.

[4] SCHULZE, Jan-Philipp; SPERL, Philip; BÖTTINGER, Konstantin. DA3G: Detecting Adversarial Attacks by Analysing Gradients. In: European Symposium on Research in Computer Security. Springer, Cham, 2021. pp. 563-583.

[5] SPERL, Philip; BÖTTINGER, Konstantin. Optimizing Information Loss Towards Robust Neural Networks. arXiv preprint arXiv:2008.03072, 2020.

[6] DÖRR, Tom, et al. Towards Resistant Audio Adversarial Examples. In: Proceedings of the 1st ACM Workshop on Security and Privacy on Artificial Intelligence. 2020. pp. 3-10.

[7] SPERL, Philip, et al. DLA: Dense-Layer-Analysis for Adversarial Example Detection. In: 2020 IEEE European Symposium on Security and Privacy (EuroS&P). IEEE, 2020. pp. 198-215.

Additional Information
Authors
Karla Pizzi

Karla Pizzi has been working as a research fellow at Fraunhofer AISEC since 2018 following her studies in mathematics, political science and computer science.

She is currently working in the Cognitive Security Technologies research department, with a particular focus on adversarial examples to protect artificial intelligence systems from being manipulated.

Jan-Philipp Schulze

Jan-Philipp Schulze joined Fraunhofer AISEC as a research fellow in January 2019 after having studied electrical engineering and information technology at ETH Zurich. He is also completing his PhD in computer science at TU Munich. His research work in the Cognitive Security Technologies research department focuses on anomaly detection and adversarial machine learning.

