How secure is artificial intelligence (AI)? Does a machine perceive its environment differently from humans? Can an algorithm's assessment be trusted? These are some of the questions we are exploring in the project "SuKI — Security for and with artificial intelligence". The more AI is integrated into our everyday lives, the more important these questions become: when critical decisions — be it on the roads, in the financial sector or in medicine — are taken by autonomous systems, being able to trust AI is vital. As part of our ongoing SuKI project, we have now successfully deceived the state-of-the-art object recognition system YOLOv3.
Modern IT systems are characterized by their ever-increasing complexity. For IT security to keep pace, automation must be extended and, in places, completely rethought. Artificial intelligence (AI) methods are instrumental in this process and can support humans in analyzing and protecting security-critical systems. However, just like conventional IT systems, AI systems can be attacked. The main challenge is to find and fix vulnerabilities in the algorithms themselves.
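The article does not describe the exact technique used against YOLOv3, but attacks of this kind are typically adversarial examples: small, deliberately crafted input perturbations that flip a model's decision. The sketch below illustrates the general idea with the Fast Gradient Sign Method (FGSM) on a tiny hand-wired logistic-regression "detector" — the weights, inputs, and the (deliberately exaggerated) perturbation size are all assumptions made for the demonstration, not details of the SuKI attack or of YOLOv3.

```python
import numpy as np

# Toy adversarial (evasion) attack in the spirit of FGSM.
# The model is a hand-wired logistic regression, NOT YOLOv3;
# all weights and inputs are made-up values for illustration.

w = np.array([2.0, -3.0])   # assumed, fixed model weights
b = 0.5                     # assumed, fixed bias

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def predict(x):
    """Return class probability and hard label for input x."""
    p = sigmoid(w @ x + b)
    return p, int(p >= 0.5)

def fgsm(x, y, eps):
    """One FGSM step: nudge x in the direction that increases the loss.

    For logistic loss, the gradient w.r.t. the input is (p - y) * w,
    so the attack adds eps * sign((p - y) * w) to the input.
    """
    p, _ = predict(x)
    grad = (p - y) * w
    return x + eps * np.sign(grad)

x = np.array([1.0, 0.2])          # a clean input the model assigns to class 1
p_clean, label_clean = predict(x)

# eps is exaggerated so the flip is visible on this two-weight toy model
x_adv = fgsm(x, y=1, eps=0.5)
p_adv, label_adv = predict(x_adv)
```

On this toy model the clean input is classified as class 1, while the perturbed input flips to class 0. Against a deep detector such as YOLOv3, the same gradient-sign idea is applied to image pixels, where the perturbation can be kept small enough to be imperceptible to a human observer.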