
ChatGPT — the hot new tool for hackers?

ChatGPT is the AI software that supposedly does it all: It's expected to compose newspaper articles and write theses — or program malware. Is ChatGPT developing into a new tool for hackers and cyber criminals that makes it even easier for them to create malware? Institute director Prof. Dr. Claudia Eckert and AI expert Dr. Nicolas Müller give their opinion on the potential threat to digital security posed by ChatGPT.

Security experts have already demonstrated that ChatGPT can be used to create malware, or for social engineering. Will the bot become the hot new tool for hackers with little technical know-how?

Anybody can use ChatGPT to automatically generate texts or simple programs, and hackers are no exception: they can use this AI-based software to create malicious code, for example. While we're not yet sure how good future generated programs will be, simple versions that automatically create phishing emails and code for carrying out ransomware attacks have already been detected. In fact, easy-to-use options have been around for a long time, enabling hackers without any prior knowledge to carry out attacks. However, these aren't based on AI and tend to be available online as collections of executable attack programs, so-called exploits, which take advantage of known weaknesses. Now, ChatGPT is another convenient tool that hackers can use to generate and spread their own malware. Fraunhofer AISEC views ChatGPT as a serious threat to cyber security. We expect the knowledge base of future software versions to expand considerably, which will improve the quality of answers. Such a development is easy to foresee, considering that the underlying technology is based on reinforcement learning combined with human feedback. This makes it vital to close potential security gaps and eliminate weaknesses to counter such attacks.

Is ChatGPT only interesting for script kiddies or also for more experienced cyber criminals?

Hackers need skills from a wide variety of fields to launch successful attacks. In my view, ChatGPT could already be of interest to IT experts today. The chatbot's dialog-based communication and its ability to provide explanations, create code snippets or describe the commands needed for specific tasks (e.g., when queried about the correct parameterization of analysis tools) can provide valuable support even to experts. ChatGPT can produce relevant answers and results faster than a classic Google query, which doesn't generate code snippets tailored to the question, for example. Experts could therefore benefit by expanding their know-how faster with ChatGPT, provided that they're able to quickly check the chatbot's replies for plausibility and correctness.
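To make this concrete: the sketch below shows how such an expert query might look when automated through the OpenAI API. It is a minimal illustration only; the prompt, the model name and the use of the pre-1.0 OpenAI Python client are assumptions made for this example, not something discussed in the interview.

```python
# Minimal sketch: asking the model behind ChatGPT about the correct
# parameterization of an analysis tool (here: nmap, as an example).
# Assumes the pre-1.0 OpenAI Python package and an API key in the
# OPENAI_API_KEY environment variable.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",  # illustrative model choice
    messages=[
        {"role": "system",
         "content": "You are an assistant for IT security analysts."},
        {"role": "user",
         "content": "Which nmap flags run a TCP SYN scan with service "
                    "and OS detection against a single host?"},
    ],
)

# As stressed in the interview, the reply still has to be checked
# for plausibility and correctness by the expert.
print(response.choices[0].message.content)
```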

Aren’t there already many easy ways to get malicious code, with a simple click on the darknet, for example (“Malware as a Service”)? Is ChatGPT just another option or is the bot different from the existing options for hackers?

As mentioned above, ChatGPT is a further tool in hackers' already existing toolkit. In my view, ChatGPT could take on the role of a virtual consultant that can, at least to some extent, answer a wide variety of queries involved in preparing attacks. However, the potential threat this type of software can pose in the long term is much more critical. Some already call it a game changer for cyber security. While ChatGPT has a set of internal rules that prevent it from generating attack code if asked directly, these can of course be bypassed by formulating questions in a smart way. ChatGPT has the potential to make the world of cyber attacks accessible to an even wider range of users, to enable the dedicated creation of numerous targeted attacks and, what's more, to advise technically inexperienced hackers on how to carry them out successfully.

Do we have to expect cyber attacks to be controlled by AI in the near future — from malware creation to distribution? Is this already happening today?

Yes, we believe that simple attack waves, such as phishing campaigns, can already be created and carried out using AI. For example, AI can be used to generate phishing emails that contain a link hiding AI-based ransomware code. These emails can be distributed automatically to selected groups of recipients. Attacks of this type belong to the large category of social engineering attacks, which AI will make even more effective in the future. The AI software generates authentic, convincing-looking texts that trick victims into disclosing sensitive information. We shouldn't forget, however, that the underlying technology, a language model, is very good at completing sentences, but, unlike humans, it cannot combine complex background knowledge from diverse fields and put it into context. While ChatGPT's answers to questions often sound plausible, they aren't actually based on human understanding but on statistical distributions of word contexts.
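This last point can be made tangible in a few lines of code. The sketch below is a toy example that uses the openly available GPT-2 model from Hugging Face as a stand-in for ChatGPT's far larger underlying model; it shows that a language model does nothing more than assign probabilities to possible next tokens given a context.

```python
# Toy illustration: a language model assigns a probability distribution
# over possible next tokens, given a context. GPT-2 serves here as a
# small, freely available stand-in (pip install torch transformers).
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

context = "Please click the following link to verify your"
inputs = tokenizer(context, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits[0, -1]  # scores for the next token
probs = torch.softmax(logits, dim=-1)       # probability distribution

# Print the five most likely continuations of the phishing-style sentence.
top = probs.topk(5)
for prob, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode([token_id.item()])!r}: {prob.item():.3f}")
```

The highest-ranked continuations will typically be exactly the kind of fluent, plausible words a phishing victim expects to read, not because the model understands deception, but because such word contexts are frequent in its training data.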

Are there any positive aspects of ChatGPT for the security industry? Can security experts also use the bot for their work?

Security experts can indeed benefit from ChatGPT, e.g., to detect weaknesses in software. ChatGPT can also assist software developers. For example, it could provide automated analysis of code fragments and helpful guidance on how to improve code quality during the development cycle, reducing the number of weaknesses in the software that could potentially be attacked. ChatGPT could also contribute to employee qualification. Whatever the field of application, it is important to be aware that ChatGPT often provides wrong or plainly made-up answers. This is the case right now and will continue to apply in the future. We therefore have to weigh both the risks and the opportunities of ChatGPT while keeping its inherent limits in mind.
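As a hypothetical illustration of such an automated analysis step, the sketch below sends a deliberately vulnerable C fragment to the model for review. The code fragment, prompt and model name are assumptions made for this example, and the pre-1.0 OpenAI Python client interface is used as above.

```python
# Minimal sketch of an automated code review step. The reviewed fragment
# contains a classic weakness (unbounded strcpy into a stack buffer).
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

code_fragment = """
#include <string.h>

void greet(char *name) {
    char buf[16];
    strcpy(buf, name);  /* no bounds check */
}
"""

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",  # illustrative model choice
    messages=[{
        "role": "user",
        "content": "Review this C function for security weaknesses "
                   "and suggest a safer version:\n" + code_fragment,
    }],
)

# As noted above, such answers can be wrong or made up and must be
# verified by a human reviewer before being acted on.
print(response.choices[0].message.content)
```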

How will the technology and generative AI develop as a whole?

We’re observing ever faster developments in the field of generative AI, with news and updated research results appearing on a daily basis. Any statements and prognoses made in this interview must be viewed in the light of these developments. They are a snapshot taken at a time when we have only glimpsed the many opportunities and risks of this technology.

For example, an article published by Microsoft Research in March 2023 reported the first signs of Artificial General Intelligence (AGI) [1], i.e., a program capable of understanding and learning complex intellectual tasks. We are also seeing rapid adoption of generative AI by many technology providers. This will accelerate the already fast pace of development, research and application, and open up new, as yet unknown markets. One thing is for sure, though: While generative AI will have a large impact on all areas of our society, it will also have a crucial impact on the development of future security technologies. Finally, there is only one prognosis that we’re certain of: Fraunhofer AISEC will continue to keep a close eye on AI-based security and safe AI and to actively shape future developments.

This interview was published in German on the Bayern Innovativ website in February 2023. Due to the rapid development of AI technologies, we added the last question in April 2023. Here is the link to the original publication in German: https://www.bayern-innovativ.de/de/netzwerke-und-thinknet/uebersicht-digitalisierung/cybersecurity/seite/chatgpt-neues-lieblingstool-fuer-hacker

[1] https://arxiv.org/abs/2303.12712

Authors
Prof. Dr. Claudia Eckert

Prof. Dr. Claudia Eckert is managing director of the Fraunhofer Institute for Applied and Integrated Security AISEC in Munich and professor at the Technical University of Munich, where she holds the Chair for IT Security at the department of Informatics. As a member of various national and international industrial advisory boards and scientific committees, she advises companies, trade associations and the public sector on all issues relating to IT security.

Dr. Nicolas Müller

Dr. Nicolas Müller studied mathematics, computer science and theology to state examination level at the University of Freiburg, graduating with distinction in 2017. Since 2017, he has been a research scientist in the Cognitive Security Technologies department of Fraunhofer AISEC. His research focuses on the reliability of AI models, ML shortcuts and audio deepfakes.
