Discover the Power of AI: How ChatGPT Is Transforming Multiple Industries

ChatGPT (Chat Generative Pre-trained Transformer) is a chatbot developed by OpenAI based on the GPT-3.5 large language model. It has a remarkable ability to interact in conversational dialogue form and can provide responses that appear surprisingly human, generating massive excitement among users, who are leveraging its capabilities in a range of new ways that are certain to have a big impact on the future. This blog post will examine the many uses of ChatGPT and how businesses can adapt to the rapid changes it signals in the use of AI.

What is ChatGPT? 

Large language models like the one behind ChatGPT perform the task of predicting the next word in a series of words. As a complex machine learning model, ChatGPT is able to carry out natural language generation (NLG) tasks with such a high level of accuracy that the model can pass a Turing Test, which measures a machine’s ability to demonstrate intelligent behavior indistinguishable from human behavior. ChatGPT was initially trained on a massive amount of unlabeled data scraped from the Internet before 2022. It is constantly monitored and fine-tuned with additional datasets labeled by humans, a process called reinforcement learning from human feedback (RLHF): an additional layer of training that uses human feedback to teach ChatGPT to follow directions and generate responses that are satisfactory to humans.
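To make the "predict the next word" idea concrete, here is a minimal, purely illustrative sketch. It uses a toy bigram lookup table rather than the neural network a real model like ChatGPT uses, but the principle is the same: score candidate next words and repeatedly pick the most probable one.

```python
# Toy next-word prediction: a bigram model, NOT how GPT works internally,
# but it illustrates the core task of choosing the most probable next word.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each word follows each other word.
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def predict_next(word):
    """Return the most frequently observed next word, or None."""
    followers = bigrams.get(word)
    if not followers:
        return None
    return followers.most_common(1)[0][0]

# Generate a short continuation, one predicted word at a time.
word, output = "the", ["the"]
for _ in range(5):
    word = predict_next(word)
    if word is None:
        break
    output.append(word)
print(" ".join(output))  # -> "the cat sat on the cat"
```

A real LLM replaces the lookup table with a transformer network over tokens and samples from a probability distribution instead of always taking the top word, which is what makes its output varied and fluent.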

Will ChatGPT Disrupt Search Engines and Other Applications?

ChatGPT was only released in November 2022, yet the Internet is already buzzing with users who have flooded it with all kinds of questions on a variety of topics. The demand is so great that, as of this writing, ChatGPT is at capacity and currently unavailable.

[Screenshot: the "ChatGPT is at capacity right now" error message]

Users have overwhelmed the service to gather insight on topics such as:

  • Real Estate: Some users are soliciting real-estate advice from the chatbot, asking it questions like “Should I buy or rent a home in Toronto, Canada?”
  • School Education & Assignments: Students have been flooding the system with questions related to school projects. In fact, ChatGPT just passed an MBA exam at Wharton. This has raised concerns, so much so that some school districts have moved to block the site on their networks.
  • General Search: Users have been experimenting with ChatGPT’s search capabilities by entering common search queries, such as “What dishwasher is the best” or “Should I buy a MacBook M1 Pro.” With human-like “chat” interactions, ChatGPT has the potential to offer more direct answers than standard Google searches, which generally require users to comb through results to answer their questions. Microsoft is so bullish on ChatGPT’s search capabilities that it has announced it will be leveraging ChatGPT for Bing, after recently investing $10B (yes, that’s a B, for billion) in OpenAI, and its investments are not likely to end there.
  • Software Engineering: Developers are constantly solving problems in a variety of programming languages (e.g., C++, Python), and it’s easy to get stuck. Traditionally, this sends many developers combing the web, searching Q&As on sites such as Stack Overflow or repositories on GitHub for previously known solutions or workarounds, which often must be stitched together to solve their specific problem. Now, in an increasing number of areas, ChatGPT can hand developers bespoke solutions on a silver platter, sometimes even providing code along with documentation. But developers should be aware that they can’t simply cut and paste ChatGPT results, which have been known to contain bugs and other issues. In fact, such issues caused Stack Overflow to ban ChatGPT-generated answers as usage was ramping up.

[Image: "ChatGPT, every dev right now" meme]

  • Cybersecurity: Usage of ChatGPT in this area is certainly a double-edged sword. On the positive side, security analysts and teams can leverage the direct and informative nature of ChatGPT to help with building the foundations for scripts, looking up common API flags, commands and functions, and other tasks. Unfortunately, ChatGPT is also quite beneficial for attackers. Below, we cover some malicious uses of the technology that security researchers are already seeing on the dark web and beyond.

Why is ChatGPT so appealing to attackers?

OpenAI’s artificially intelligent chatbot ChatGPT is being used by cybercriminals to quickly build hacking tools. Scammers are also testing ChatGPT’s ability to impersonate young females as a tool for ensnaring victims.

Shortly after its launch in late November, users of ChatGPT raised the alarm that the app could be used to create ransomware or to code malicious software capable of spying on users’ keystrokes. It should be stated that ChatGPT does apply some basic filtering of inputs: for example, it will not produce any results if you ask direct questions with clear malicious intent, as shown below.

[Screenshot: ChatGPT's ethical refusal in response to a keylogging request]

Unfortunately, attackers and researchers have already found workarounds — in some cases, simply by rephrasing questions, as the screenshots below illustrate.

[Screenshot: ChatGPT keylogger output after a rephrased prompt]

From writing malware to hacking websites with XSS

Many have already heard about ChatGPT’s coding abilities: it can generate working code in response to specific prompts, something a traditional search engine like Google cannot do. The system has the ability to create basic, self-contained HTML and JavaScript pages, as well as more complex programs.
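For a sense of how code generation can be driven programmatically, here is a minimal sketch. ChatGPT itself had no public API at launch, so this uses one of OpenAI's sibling completion models; it assumes the openai Python package (the v0.x client available around this time) and a valid API key, and the model name and prompt are illustrative.

```python
# Minimal sketch: generating code programmatically with an OpenAI model.
# Assumes the openai Python package (v0.x client) and a valid API key;
# the model name and prompt are illustrative placeholders.
import openai

openai.api_key = "YOUR_API_KEY"  # load from an environment variable in real code

response = openai.Completion.create(
    model="text-davinci-003",
    prompt=("Write a self-contained HTML page with a JavaScript button "
            "that displays the current time when clicked."),
    max_tokens=400,
)

# The generated page source comes back as plain text.
print(response.choices[0].text)
```

The same one-prompt-to-code pattern is exactly what makes the tool attractive to attackers, as the next section describes.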

Hacking uses of ChatGPT could include anything from creating a scraping bot to scripting complex malware and XSS attacks to writing a phishing email. A phishing email generated by ChatGPT is shown below.

[Screenshot: a phishing email generated by ChatGPT]

No experience needed: a new era of automation for attackers

Several decades ago, attackers needed to understand computer and network architecture as well as software engineering principles. Writing and deploying viruses and other kinds of malware was not a trivial effort. And frankly, since the world wasn’t as connected, the potential targets and payback weren’t as attractive. But as time went on, and the world continued to digitalize and connect, more valuable data and information was placed in the digital realm. 

To gain access to these assets, attackers upped their game by creating standardized malware kits, which they monetized by selling them for hundreds or thousands of dollars. Now, all an attacker needs to do to launch a successful attack is acquire a kit, add some customizations and pick a target. As a result, script kiddies and other novice cybercriminals can cause havoc without having to get into the nitty-gritty of coding, with a variety of toolkits available 24/7, no matter where attackers are located. It’s a big business, and it will likely continue well into the future.

Now, what impact will ChatGPT have on this trend? This new technology takes the accessibility and availability of easy-to-use attack tools to a whole other level. Effectively, ChatGPT provides a variety of approaches, insight and even baseline code that can be used to conduct attacks. A code-generating AI system can serve as a translator between programming languages for malicious actors, allowing them to bridge any skills gap they might have. On-demand tools like these allow attackers to create templates of code relevant to their objectives without having to search through developer sites such as Stack Overflow or GitHub. This has led many researchers to believe the chatbot significantly lowers the bar for writing malware and crafting creative attacks, resulting in steadily growing concern over threat actors abusing ChatGPT.

How can you adapt to enhanced AI capabilities?

What can you do with AI today?

Interest in AI is continuing to explode as modern attacks require rapid detection of and response to anomalous user behavior, something an AI model can be trained to quickly identify but which takes significant human power to replicate manually. Just as bad actors are using automation to help accelerate and enhance attack techniques, you and your team should rethink security approaches to address these techniques and leverage automation with AI/ML tools to help respond to this significant shift in the threat landscape.
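As a deliberately simplified illustration of the idea, the sketch below trains an unsupervised anomaly detector on login telemetry. The feature set (login hour, data transferred, failed attempts) is a hypothetical stand-in for the much richer signals a production system would use.

```python
# Sketch: flagging anomalous user behavior with an unsupervised model.
# Features are hypothetical stand-ins for real telemetry.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Simulated "normal" sessions: business-hours logins, modest transfers.
normal = np.column_stack([
    rng.normal(13, 2, 500),    # login hour of day
    rng.normal(50, 15, 500),   # MB transferred
    rng.poisson(0.2, 500),     # failed login attempts
])

model = IsolationForest(contamination=0.01, random_state=0).fit(normal)

# A 3 a.m. login that pulls 900 MB after 6 failed attempts.
suspicious = np.array([[3, 900, 6]])
print(model.predict(suspicious))  # -1 means flagged as anomalous
```

A human analyst reviewing raw logs would take far longer to surface the same outlier, which is the efficiency argument for putting ML in the detection loop.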

Using AI for greater efficiency

Just as self-driving cars are meant to make their passengers safer and more secure, AI-based approaches to cybersecurity can be used to improve threat detection and reduce friction for trusted users. The use of AI to enhance rules-based authentication for human and non-human users is rapidly advancing. Once authentication and authorization requirements are implemented, AI can be used to reduce false positives and negatives in threat detection, resulting in a better user experience.
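One common pattern, sketched below with hypothetical thresholds and a toy risk model, layers an ML risk score on top of hard rules so that low-risk users skip extra friction while risky sessions get a step-up challenge.

```python
# Sketch: layering an ML risk score on top of rule-based authentication.
# Thresholds and the risk model are hypothetical placeholders.
def authenticate(request, risk_model):
    # Hard rules first: these are never waived.
    if not request["password_ok"]:
        return "deny"
    risk = risk_model(request)  # 0.0 (safe) .. 1.0 (risky)
    if risk < 0.2:
        return "allow"          # trusted: no extra friction
    if risk < 0.7:
        return "step-up MFA"    # suspicious: challenge the user
    return "deny"

# Toy risk model: a new device at an odd hour raises the score.
toy_model = lambda r: 0.5 * r["new_device"] + 0.5 * r["odd_hour"]

print(authenticate(
    {"password_ok": True, "new_device": 1, "odd_hour": 0},
    toy_model,
))  # -> "step-up MFA"
```

Because most legitimate sessions score low, the net effect is fewer challenges for trusted users and more scrutiny where it counts, which is the false-positive/false-negative trade-off described above.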

Rethinking security approaches

  1. Updating employee education: As more and more attackers begin to leverage ChatGPT for phishing, these attacks could get much more cunning. Employees who understand how ChatGPT can be leveraged are likely to be more cautious about interacting with chatbots and other AI-supported solutions, making them less likely to fall victim to these types of attacks. Of course, they don’t need to know the ins and outs of ChatGPT; they simply need to be much more vigilant in monitoring emails and other communications.
  2. Implementing strong authentication protocols: Strong authentication is a no-brainer today, but unfortunately, attackers already know how to take advantage of users tired of reauthenticating through multifactor authentication (MFA) tools. As such, leveraging AI/ML to authenticate users through intelligent digital fingerprint matching and avoiding passwords altogether can help increase security while reducing MFA fatigue, making it more difficult for attackers to gain access to accounts.
  3. Adopting a zero-trust model: By only granting access after identity verification and ensuring least-privilege access, security leaders can reduce the threat from attackers who leverage ChatGPT to prey on unsuspecting users. These controls will block access for cybercriminals who cannot verify their identity and limit the damage attackers can do if they are able to circumvent identity verification protocols. Limiting the access of not only developers but also leadership, including technical leadership, to the fewest resources needed to perform their jobs effectively may initially be met with resistance, but it will make it harder for an attacker to leverage any unauthorized access for gain. In addition, security leaders must not assume that successfully authenticated users are immune to compromise while in a “logged-in state.” Zero trust requires constant monitoring of risk, trust, fraud, bots and behavior to detect threats in real time.
  4. Monitoring activity and trust: After a user authenticates, they are in a relatively high state of trust. However, trust isn’t static; it erodes over time (a simple model of this decay is sketched after this list). As such, activity on corporate accounts must be continuously monitored using tools such as spam filters, behavioral analysis and keyword filtering to identify and block malicious messages. No matter how sophisticated the language of a phishing attack is, your employees and consumers cannot fall victim to it if they never see the message. Quarantining malicious messages based on the behavior and relationship of the correspondents, rather than on specific keywords, is more likely to block malicious actors.
  5. Leveraging AI and automation for faster and more accurate threat response: Cyberattacks are already leveraging AI, and ChatGPT is just one more way that they will use it to gain access to your organization’s environments. Organizations must take advantage of the capabilities offered by AI/ML to improve cybersecurity and respond more quickly to threats and potential breaches.
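The "trust erodes over time" idea from point 4 can be expressed as a simple decay function. The half-life and re-authentication threshold below are arbitrary illustrative values; a real system would also subtract trust faster on risky signals such as a new device or unusual location.

```python
# Sketch: a trust score that decays over time since authentication,
# forcing re-verification once it drops below a threshold.
# Half-life and threshold are arbitrary illustrative values.
HALF_LIFE_MIN = 30.0      # trust halves every 30 minutes
REAUTH_THRESHOLD = 0.4

def trust_score(minutes_since_auth: float, initial: float = 1.0) -> float:
    """Exponential decay: full trust right after login, eroding with time."""
    return initial * 0.5 ** (minutes_since_auth / HALF_LIFE_MIN)

def needs_reauth(minutes_since_auth: float) -> bool:
    return trust_score(minutes_since_auth) < REAUTH_THRESHOLD

for t in (0, 15, 45, 90):
    print(f"{t:3d} min -> trust {trust_score(t):.2f}, "
          f"re-auth: {needs_reauth(t)}")
```

Running this shows trust falling from 1.00 at login to below the threshold after about 40 minutes, at which point the session would be challenged again.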

What should you anticipate?

ChatGPT’s ability to provide authentic-sounding responses to targeted inquiries will make social engineering, account takeovers, phishing attacks and data breaches more common and more successful, despite our best defenses. By updating and adopting the five approaches above, organizations can help reduce the risks to themselves and their employees from malicious use of chatbots and other AI tools.

Conclusion

ChatGPT is still new, and it is very likely to change significantly over time. While its responses to many questions can be jaw-dropping, users must be wary of treating ChatGPT as a source of truth. Much like the media, ChatGPT carries its own biases today. As such, it is a great tool for reference, but users must also consult authoritative sources, especially when leveraging ChatGPT for critical use cases and applications. For example, if you are a bank that must adhere to industry-specific regulations, you should cross-reference information from ChatGPT with reliable resources to ensure that your use of the chatbot does not negatively impact your compliance. And when it comes to cybersecurity, OpenAI strictly prohibits using ChatGPT to create ransomware, keyloggers or malware outside the scope of a penetration test or red-team engagement. But, as we’ve already mentioned, these restrictions haven’t stopped attackers so far.

For now, AI tools remain glitchy and prone to errors, sometimes described by researchers as “flat-out mistakes” that may hinder attackers’ efforts. Nevertheless, many have predicted that these technologies will continue to be misused in the long run. Developers will need to train and improve their AI engines to identify requests that can be used for malicious purposes, in order to make it harder for criminals to misuse these technologies.
