ChatGPT — the hot new tool for hackers?

ChatGPT is the AI software that supposedly does it all: It's expected to compose newspaper articles and write theses — or program malware. Is ChatGPT developing into a new tool for hackers and cyber criminals that makes it even easier for them to create malware? Institute director Prof. Dr. Claudia Eckert and AI expert Dr. Nicolas Müller give their opinion on the potential threat to digital security posed by ChatGPT.

Security experts have already demonstrated that ChatGPT can be used to create malware or for social engineering. Will the bot become the hot new tool for hackers with little technical know-how?

Anybody can use ChatGPT to automatically generate texts or simple programs, and hackers are no exception: they can use this AI-based software to create malicious code, for example. While we’re not yet sure how good any future generated programs will be, simple versions that automatically create phishing emails and code for carrying out ransomware attacks have already been detected. In fact, easy-to-use options have been around for a long time, enabling hackers without any prior knowledge to carry out attacks. However, these aren’t based on AI and tend to be available online as collections of executable attack programs, so-called exploits, which take advantage of known weaknesses. Now, ChatGPT is another convenient tool that hackers can use to generate and spread their own malware. Fraunhofer AISEC views ChatGPT as a serious threat to cyber security. We expect the knowledge base of future software versions to expand considerably, which will improve the quality of answers. Such a development is easy to foresee, considering that the underlying technology is based on reinforcement learning combined with human feedback. This makes it vital to close any potential security gaps and eliminate all weaknesses to counter such attacks.

Is ChatGPT only interesting for script kiddies or also for more experienced cyber criminals?

Hackers need skills from a wide variety of fields to launch successful attacks. In my view, ChatGPT could already be of interest to IT experts today. The chatbot’s communication in the form of a dialog and its ability to provide explanations, create code snippets or describe commands that can be used for tasks (e.g., when queried about the correct parameterization of analysis tools) can provide valuable support even to experts. ChatGPT can produce relevant answers and results faster than a classic Google search, which, for example, cannot generate code snippets tailored to the query. Experts could therefore benefit by expanding their know-how faster with ChatGPT — assuming that they’re able to quickly check the chatbot’s replies for plausibility and correctness.

Aren’t there already many easy ways to get malicious code, with a simple click on the darknet, for example (“Malware as a Service”)? Is ChatGPT just another option or is the bot different from the existing options for hackers?

As mentioned above, ChatGPT is a further tool in the already existing toolkit for hackers. In my view, ChatGPT could take on the role of a virtual consultant that can, at least to some extent, advise on the most diverse queries involved in preparing hacker attacks. However, the potential threat this type of software can pose in the long term is much more critical. Some already call it a game changer for cyber security. While ChatGPT has a set of internal rules that prevent it from generating attack code if asked directly, this can of course be bypassed by formulating questions in a smart way. ChatGPT has the potential to make the world of cyber attacks accessible to an even wider range of users, to enable dedicated creation of numerous targeted attacks and, what’s more, advise non-savvy hackers on how to carry them out successfully.

Do we have to expect cyber attacks to be controlled by AI in the near future — from malware creation to distribution? Is this already happening today?

Yes, we believe that simple attack waves, such as phishing campaigns, can already be created and carried out using AI. For example, AI can be used to generate phishing emails that contain a link concealing AI-based ransomware code. These emails can be distributed automatically to selected groups of recipients. Attacks of this type belong to the large category of social engineering attacks, which AI will make even more effective in the future. The AI software generates authentic, convincing-looking texts that trick victims into disclosing sensitive information. We shouldn’t forget, however, that the underlying technology (a language model) is very good at completing sentences, but, unlike humans, it cannot combine complex background and prior knowledge from diverse fields and put them into context. While ChatGPT’s answers to questions often sound plausible, they aren’t actually based on human understanding but instead on statistical distributions of word contexts.

Are there any positive aspects of ChatGPT for the security industry? Can security experts also use the bot for their work?

Security experts can indeed benefit from ChatGPT, e.g., to detect weaknesses in software. ChatGPT can also be of assistance to software developers. For example, ChatGPT could provide automated analysis of code fragments and helpful information on how to improve the code quality in the development cycle. This would reduce weaknesses in the software that could potentially be attacked. ChatGPT could also contribute to employee qualification. Whatever the field of application, it is important to be aware that ChatGPT often provides wrong or plainly made-up answers. This is the case right now and will also apply in the future. We therefore have to consider both the risks and the opportunities provided by ChatGPT while keeping its inherent limits in mind.
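To illustrate the kind of automated code analysis described above, here is a minimal sketch of how a developer might package a code-review query for a chat-based language model. The function name, model identifier, and prompt wording are illustrative assumptions, not part of the interview; actually sending the request would require an API client and credentials, so only the construction of the query is shown — and, as noted above, any answer the model returns must still be checked for correctness by a human.

```python
# Sketch: building a code-review prompt for a chat-completion-style LLM API.
# The model name and prompt text are illustrative assumptions.

def build_review_request(code_fragment: str, model: str = "gpt-4") -> dict:
    """Return a chat-completion payload asking the model to flag weaknesses."""
    system_prompt = (
        "You are a code reviewer. Point out potential security weaknesses "
        "in the following code fragment and suggest safer alternatives."
    )
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": code_fragment},
        ],
    }

# Example: asking for a review of a fragment with an obvious
# SQL-injection risk (string concatenation of user input).
fragment = "query = \"SELECT * FROM users WHERE name = '\" + user_input + \"'\""
payload = build_review_request(fragment)
```

The payload could then be passed to whichever LLM client the development team uses; keeping the prompt construction separate also makes it easy to log and audit what was sent to the model.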

How will the technology and generative AI develop as a whole?

We’re observing ever faster developments in the field of generative AI, with news and updated research results appearing on a daily basis. Any statements and prognoses made in this interview must be viewed in the light of these developments. They are a snapshot taken at a time when we have only glimpsed the many opportunities and risks of this technology.

For example, an article published by Microsoft Research in March 2023 reported the first signs of Artificial General Intelligence (AGI) [1], i.e., a program capable of understanding and learning complex intellectual tasks. We also see rapid adoption of generative AI by many technology providers. This will accelerate the already fast-paced dynamic in development, research and the many different applications and open up new, as yet unknown markets. One thing is for sure, though: While generative AI will have a large impact on all areas of our society, it will also have a crucial impact on the development of future security technologies. Finally, there is only one prognosis that we’re certain of: Fraunhofer AISEC will continue to keep a close eye on AI-based security and safe AI and actively shape future developments.

This interview was published in German on the Bayern Innovativ website in February 2023. Due to the rapid development of AI technologies, the last question was added in April 2023.


Prof. Dr. Claudia Eckert

Prof. Dr. Claudia Eckert is managing director of the Fraunhofer Institute for Applied and Integrated Security AISEC in Munich and professor at the Technical University of Munich, where she holds the Chair for IT Security at the department of Informatics. As a member of various national and international industrial advisory boards and scientific committees, she advises companies, trade associations and the public sector on all issues relating to IT security.

Nicolas Müller

Dr. Nicolas Müller studied mathematics, computer science and theology to state examination level at the University of Freiburg, graduating with distinction in 2017. Since 2017, he has been a research scientist in the Cognitive Security Technologies department of Fraunhofer AISEC. His research focuses on the reliability of AI models, ML shortcuts and audio deepfakes.
