Trusted AI

Modern IT systems are characterized by their ever-increasing complexity. In order for IT security to keep up with this, automation needs to be further developed and completely rethought. Artificial intelligence (AI) methods are instrumental in this process and can support humans in analyzing and protecting security-critical systems. However, just like conventional IT systems, AI systems can be attacked. The main challenge here is to find and fix any vulnerabilities in the algorithms.

Kilian Tscharke

Anomaly Detection with Quantum Machine Learning – Identifying Cybersecurity Issues in Datasets

Since the release of ChatGPT, the popularity of Machine Learning (ML) has grown immensely. Besides Natural Language Processing (NLP), anomaly detection is an important branch of data analysis whose goal is to identify observations or events that deviate from the rest of the data. At Fraunhofer AISEC, cybersecurity experts explore Quantum Machine Learning methods for anomaly detection. One approach is based on the classification of quantum matter, while a second method uses a type of Quantum Support Vector Machine with a kernel that is calculated on a quantum computer. This blog post explains the fundamentals of anomaly detection and shows the two approaches being pursued by the Quantum Security Technologies group at Fraunhofer AISEC.
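The kernel-based idea can be sketched in a few lines: points that are, on average, dissimilar to the "normal" training data receive a high anomaly score. This is a minimal illustration only; in the approach described above the kernel would be estimated on a quantum computer, whereas here a classical RBF kernel stands in as a placeholder, and all data and function names are hypothetical.

```python
import math

def rbf_kernel(x, y, gamma=0.5):
    # Classical RBF kernel as a stand-in for a quantum-estimated kernel.
    return math.exp(-gamma * sum((a - b) ** 2 for a, b in zip(x, y)))

def anomaly_score(point, normal_data, gamma=0.5):
    # Low average similarity to the "normal" training set => anomalous.
    sims = [rbf_kernel(point, x, gamma) for x in normal_data]
    return 1.0 - sum(sims) / len(sims)

normal = [(0.0, 0.1), (0.1, -0.1), (-0.1, 0.0), (0.05, 0.05)]
print(anomaly_score((0.0, 0.0), normal))   # close to the cluster: low score
print(anomaly_score((3.0, 3.0), normal))   # far from the cluster: high score
```

A real Quantum Support Vector Machine would feed such a (quantum-computed) kernel matrix into an SVM solver rather than thresholding the raw similarity, but the scoring intuition is the same.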


Claudia Eckert

ChatGPT — the hot new tool for hackers?

ChatGPT is the AI software that supposedly does it all: It’s expected to compose newspaper articles and write theses — or program malware. Is ChatGPT developing into a new tool for hackers and cyber criminals that makes it even easier for them to create malware? Institute director Prof. Dr. Claudia Eckert and AI expert Dr. Nicolas Müller give their opinion on the potential threat to digital security posed by ChatGPT.

Nicolas Müller

AI – All that a machine learns is not gold

Machine learning is being hailed as the new savior. As the hype around artificial intelligence (AI) increases, trust is being placed in it to solve even the most complex of problems. Results from the lab back up these expectations. Detecting a Covid-19 infection using X-ray images or even speech, autonomous driving, automatic deepfake recognition — all of this is possible using AI under laboratory conditions. Yet when these models are applied in real life, the results are often less than adequate. Why is that? If machine learning is viable in the lab, why is it such a challenge to transfer it to real-life scenarios? And how can we build models that are more robust in the real world? This blog article scrutinizes scientific machine learning models and outlines possible ways of increasing the accuracy of AI in practice.
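One common reason lab results fail to transfer is "shortcut learning": the training data contains a spurious feature (for instance, a scanner watermark in X-ray datasets) that correlates with the label in the lab but not in the wild. The following toy sketch, with entirely hypothetical synthetic data, shows a greedy learner latching onto such a shortcut.

```python
import random

random.seed(0)

def make_data(n, spurious_correlates):
    # Each sample has two binary features: a noisy true signal (80% accurate)
    # and a "shortcut" that either tracks the label (lab) or is random (real).
    data = []
    for _ in range(n):
        label = random.randint(0, 1)
        signal = label if random.random() < 0.8 else 1 - label
        shortcut = label if spurious_correlates else random.randint(0, 1)
        data.append(((signal, shortcut), label))
    return data

def best_single_feature(train):
    # "Training": pick whichever single feature predicts the label best.
    accs = [sum(x[i] == y for x, y in train) / len(train) for i in range(2)]
    return max(range(2), key=lambda i: accs[i])

def accuracy(feature, data):
    return sum(x[feature] == y for x, y in data) / len(data)

lab = make_data(1000, spurious_correlates=True)
real = make_data(1000, spurious_correlates=False)
f = best_single_feature(lab)   # picks the shortcut (feature index 1)
print(accuracy(f, lab))        # near-perfect accuracy in the lab
print(accuracy(f, real))       # roughly chance level in the real world
```

The learner looks flawless on held-out lab data drawn from the same distribution, which is exactly why lab benchmarks alone can be misleading.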

Karla Pizzi

Putting AI systems to the test with ‘Creation Attacks’

How secure is artificial intelligence (AI)? Does a machine perceive its environment differently than humans do? Can an algorithm’s assessment be trusted? These are some of the questions we are exploring in the project “SuKI — Security for and with artificial intelligence”. The more AI is integrated into our everyday lives, the more important these questions become: When critical decisions — be it on the roads, in the financial sector or even in the medical sector — are taken by autonomous systems, being able to trust AI is vital. As part of our ongoing SuKI project, we have now successfully deceived the state-of-the-art object recognition system YoloV3 [0].
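The core mechanism behind many such attacks can be illustrated on a toy model. The actual attack on YoloV3 is far more involved; here a hypothetical linear "detector" is perturbed with a fast-gradient-sign step so that it reports an object where none exists, which is the essence of a creation attack.

```python
def score(w, x, b=0.0):
    # Toy linear "detector": score > 0 means "object present".
    return sum(wi * xi for wi, xi in zip(w, x)) + b

def fgsm(w, x, eps):
    # Fast-gradient-sign step: for a linear model the gradient of the
    # score with respect to the input is just w, so nudge x by eps*sign(w).
    return [xi + eps * (1 if wi > 0 else -1) for wi, xi in zip(w, x)]

w = [0.4, -0.3, 0.2]
benign = [-0.5, 0.5, -0.5]        # detector says "no object"
adv = fgsm(w, benign, eps=0.8)    # structured, bounded perturbation

print(score(w, benign))           # negative: nothing detected
print(score(w, adv))              # positive: a phantom detection is "created"
```

Against deep object detectors the gradient must be computed through the whole network and the perturbation constrained to stay inconspicuous, but the principle — moving the input along the gradient of the detector's confidence — is the same.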
