ChatGPT (Chat Generative Pre-trained Transformer) is a chatbot developed by OpenAI based on the GPT-3.5 large language model. It has a remarkable ability to interact in conversational dialogue and can provide responses that sound surprisingly human, generating massive excitement among users, who are leveraging its capabilities in a range of new ways that will shape how AI is used going forward. This blog post examines the many uses of ChatGPT and how businesses can adapt to the rapid changes it signals in the use of AI.
Large language models like the one behind ChatGPT perform the task of predicting the next word in a sequence of words. As a complex machine learning model, ChatGPT is able to carry out natural language generation (NLG) tasks with such a high level of accuracy that the model can pass a Turing Test, which measures a machine’s ability to demonstrate intelligent behavior indistinguishable from human behavior. During its training phase, ChatGPT was trained on a massive amount of unlabeled data scraped from the Internet before 2022. It is constantly monitored and fine-tuned with additional datasets labeled by humans. This process is called reinforcement learning from human feedback (RLHF): an additional layer of training that uses human feedback to help ChatGPT learn to follow instructions and generate responses that are satisfactory to humans.
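To make "predicting the next word" concrete, here is a deliberately toy sketch: a bigram model that predicts the next word as the one most often following the current word in its training text. This is nothing like ChatGPT's transformer architecture or scale; it only illustrates the underlying next-word-prediction task.

```python
from collections import Counter, defaultdict

# Toy bigram model (NOT ChatGPT's architecture): count which word
# most often follows each word in the training text.
def train_bigram(text):
    counts = defaultdict(Counter)
    words = text.lower().split()
    for prev, nxt in zip(words, words[1:]):
        counts[prev][nxt] += 1
    return counts

def predict_next(counts, word):
    # Return the most frequent follower of `word`, or None if unseen.
    followers = counts.get(word.lower())
    if not followers:
        return None
    return followers.most_common(1)[0][0]

corpus = "the cat sat on the mat and the cat slept"
model = train_bigram(corpus)
print(predict_next(model, "the"))  # "cat": it follows "the" twice, "mat" only once
```

Where this toy model counts adjacent word pairs, a large language model learns billions of parameters over far longer contexts, which is what makes its predictions read as fluent dialogue.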
Although ChatGPT was just released in November 2022, the Internet has been buzzing with users who have flooded ChatGPT with all kinds of questions on a variety of topics. The demand is so great that, as of this writing, ChatGPT is at capacity and currently unavailable.
Users have overwhelmed the service with questions spanning a wide range of topics.
OpenAI’s artificially intelligent chatbot ChatGPT is being used by cybercriminals to quickly build hacking tools. Scammers are also testing ChatGPT’s ability to impersonate young females as a tool for ensnaring victims.
Users of ChatGPT raised the alarm shortly after its launch in late November that the app had the potential to create ransomware or code malicious software capable of logging users’ keystrokes. That said, ChatGPT does apply some basic filtering of inputs: it will refuse to produce results for direct questions with clear malicious intent.
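As a hypothetical sketch of what "basic filtering of inputs" can look like, the snippet below checks prompts against a keyword denylist. Real moderation systems, including OpenAI's, use trained classifiers rather than keyword lists; this simplified version also shows why rephrasing a request can slip past naive filters.

```python
# Hypothetical, simplified input filter: a keyword denylist.
# Real moderation layers use trained classifiers, not keyword matching.
BLOCKED_TERMS = {"ransomware", "keylogger", "phishing email"}

def is_blocked(prompt: str) -> bool:
    # Flag the prompt if any denylisted term appears in it.
    lowered = prompt.lower()
    return any(term in lowered for term in BLOCKED_TERMS)

print(is_blocked("Write me ransomware in Python"))  # True: "ransomware" is denylisted
print(is_blocked("Write code that encrypts my files and"
                 " demands payment to decrypt them"))  # False: same intent, reworded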
Unfortunately, attackers and researchers have already found workarounds, in some cases simply by rephrasing their questions.
Hacking use cases for ChatGPT range from creating a scraping bot to scripting complex malware and XSS attacks to writing a convincing phishing email.
Several decades ago, attackers needed to understand computer and network architecture as well as software engineering principles. Writing and deploying viruses and other kinds of malware was not a trivial effort. And frankly, since the world wasn’t as connected, the potential targets and payback weren’t as attractive. But as time went on, and the world continued to digitalize and connect, more valuable data and information was placed in the digital realm.
To gain access to these assets, attackers upped their game by creating standardized malware kits, which they monetized by selling them for hundreds or thousands of dollars. Now, all an attacker needs to do to launch a successful attack is acquire a kit, add some customizations and pick a target. As a result, script kiddies and other novice cybercriminals can cause havoc without getting into the nitty-gritty of coding, with a variety of toolkits available 24/7, no matter where attackers are located. It’s a big business, and it will likely continue well into the future.
Now, what impact will ChatGPT have on this trend? This new technology takes the accessibility and availability of easy-to-use attack tools to a whole other level. Effectively, ChatGPT provides a variety of approaches, insight and even baseline code that can be used to conduct attacks. A code-generating AI system can serve as a translator between programming languages for malicious actors, allowing them to bridge any skills gap they might have. On-demand tools like these allow attackers to create templates of code relevant to their objectives without having to search through developer sites such as Stack Overflow or GitHub. This has led many researchers to believe the chatbot significantly lowers the bar for writing malware and crafting novel attacks, fueling steady growth in concerns over threat actors abusing ChatGPT.
Interest in AI continues to explode as modern attacks require rapid detection of, and response to, anomalous user behavior, something an AI model can be trained to identify quickly but which takes significant human effort to replicate manually. Just as bad actors are using automation to accelerate and enhance their attack techniques, you and your team should rethink your security approaches and leverage automation with AI/ML tools to respond to this significant shift in the threat landscape.
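To illustrate the idea of flagging anomalous user behavior, here is a minimal sketch using a z-score test on a single feature (login hour). Production systems train ML models over many behavioral features; the feature, data and threshold here are assumptions chosen only to show the detect-and-respond pattern.

```python
import statistics

# Minimal anomaly-detection sketch: flag a login hour that deviates
# sharply from a user's history. Real systems use trained models over
# many features; this z-score test is illustrative only.
def is_anomalous(history_hours, new_hour, threshold=3.0):
    mean = statistics.mean(history_hours)
    stdev = statistics.pstdev(history_hours)
    if stdev == 0:
        return new_hour != mean  # no variance: any change is suspect
    return abs(new_hour - mean) / stdev > threshold

typical_logins = [9, 9, 10, 8, 9, 10, 9, 8, 9, 10]  # usual working hours
print(is_anomalous(typical_logins, 9))   # False: in line with history
print(is_anomalous(typical_logins, 3))   # True: a 3 a.m. login stands out
```

The value of automating this check is speed: a model evaluates every event as it happens, whereas a human analyst reviewing logs for the same pattern would lag hours or days behind.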
Just as self-driving cars are meant to make their passengers safer and more secure, AI-based approaches to cybersecurity can be used to improve threat detection and reduce friction for trusted users. The use of AI to enhance rules-based authentication for human and non-human users is rapidly advancing. Once authentication and authorization requirements are implemented, AI can be used to reduce false positives and negatives in threat detection, resulting in a better user experience.
ChatGPT’s ability to provide authentic-sounding responses to targeted inquiries will make social engineering, account takeover, phishing attacks and data breaches more common and more successful, despite our best defenses. By updating and adapting their security approaches, organizations can help reduce the risks to themselves and their employees from malicious use of chatbots and other AI tools.
ChatGPT is still new, and it is very likely to change significantly over time. While its responses to many questions can be jaw-dropping, users must be wary of treating ChatGPT as a source of truth. Much like media bias, ChatGPT carries its own biases today. As such, it is a great tool for reference, but users must also consult authoritative sources, especially when leveraging ChatGPT for critical use cases and applications. For example, if you are a bank that needs to adhere to industry-specific regulations, you should cross-reference information from ChatGPT with reliable resources to ensure that your use of the chatbot does not negatively impact your compliance. And when it comes to cybersecurity, OpenAI strictly prohibits using ChatGPT to create ransomware, keyloggers and malware outside the scope of a penetration test or red team engagement. But, as we’ve already mentioned, these restrictions haven’t stopped attackers so far.
For now, AI tools remain glitchy and prone to errors, sometimes described by researchers as “flat-out mistakes” that may hinder attackers’ efforts. Nevertheless, many have predicted that these technologies will continue to be misused in the long run. Developers will need to train and improve their AI engines to identify requests that can be used for malicious purposes, in order to make it harder for criminals to misuse these technologies.