Webinar: Is It Time to Panic? Attackers Are Leveraging ChatGPT – But Security Can Too

With a carefully worded prompt, it’s possible to trick ChatGPT into writing malicious code or pinpointing vulnerabilities in code you provide. So is it time to panic? The short answer: no, it’s time for a steady hand. It’s a pivotal moment — time to get educated and take action before you get left behind or become a target. In a recent webinar, Tom Field, Sr. VP of Editorial at Information Security Media Group (ISMG), sat down with Transmit Security Chief Identity Officer David Mahdi to explore the full implications of ChatGPT.

As a former Gartner analyst, David has closely tracked the evolution of AI and machine learning in cybersecurity. More recently, we’ve witnessed great progress in fraud detection and behavioral biometrics, where machine learning and neural networks are emerging as true game changers. What’s notable now is that generative AI is at everyone’s fingertips, so we need to understand what this means for hackers and security teams.

You can watch the webinar now (27 min) hosted on Data Breach Today, one of ISMG’s 35 media sites. Or keep reading to get a few highlights of their conversation in this Q&A pulled straight from the recording:  

Tom Field, Sr. VP of Editorial at ISMG:
Alright, David, ChatGPT. Set the context. Why is this something that’s emerged as the topic in 2023? … I have lots of conversations with CISOs and CIOs and some round table discussions like I’m conducting today, and it doesn’t matter what the topic is…every conversation comes back to ChatGPT. And there are three things I typically hear. The first question I get is, “What can we use this for internally today?”

David Mahdi, Chief Identity Officer at Transmit Security: 
That’s a great question. So first…we’ve got a strong team of threat researchers in Tel Aviv, Israel. They’ve been monitoring the dark web and looking at these tools, including looking at them in a positive light. So we’ve asked ourselves: how can we use this?

It’s a very powerful tool that can be used for mining information. If your organization has lots of data, lots of text that you need to search through, you could use something like ChatGPT pointed at a repository of this information, and it could go through it. So if you’re looking for stats, if you’re looking for other types of information, or going through logs, it has the ability to do that. 

But ChatGPT, in and of itself, is not very smart. It’s just looking at a snapshot of the world wide web from about two years back. So internally, teams could point [ChatGPT] at knowledge bases if they want to traverse known information, or maybe proprietary information they hold internally: software tools, you name it. That is one use I’ve discussed with security leaders at organizations in Canada and around the world, and it can really help with efficiency from that perspective.
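
To make that concrete, here’s a minimal sketch of what “pointing ChatGPT at a repository” can look like, using the official OpenAI Python client. The model name, file layout and helper function are our own illustrative assumptions, not something prescribed in the webinar:

```python
# A minimal sketch, assuming the official OpenAI Python client (openai>=1.0)
# and an OPENAI_API_KEY in the environment. Model name and file layout are
# illustrative assumptions.
from pathlib import Path
from openai import OpenAI

client = OpenAI()

def ask_about_logs(log_dir: str, question: str) -> str:
    """Concatenate a directory of text logs and ask a model about them."""
    text = "\n".join(p.read_text() for p in Path(log_dir).glob("*.log"))
    # A real deployment would chunk and embed this text (retrieval-augmented
    # generation) rather than stuffing raw logs into a single prompt.
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model; any chat model works
        messages=[
            {"role": "system", "content": "Answer only from the provided logs."},
            {"role": "user", "content": f"Logs:\n{text[:12000]}\n\nQuestion: {question}"},
        ],
    )
    return response.choices[0].message.content

# e.g. ask_about_logs("./auth-logs", "How many failed logins occurred on May 1?")
```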

Tom:
So here’s the second question I get: will this affect our security posture?

David:
Yes, 100%. I’m confident saying it absolutely will, and there are a few reasons for that. If you’re using this stuff internally and you’ve pointed, say, ChatGPT or ChatGPT-like technology at information you may be sitting on, you have to think about privacy. You have to think about the data you’re looking at, because you don’t want any personally identifiable information (PII) or intellectual property to leak out.

You also have to be careful about what it’s actually referencing. It could be copyrighted material that isn’t yours. You might be licensed to read a Gartner research report, and you point ChatGPT at it. But if that content starts to leak out, you might be violating your terms and conditions with that particular vendor.
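
One practical takeaway from the privacy and licensing concerns David raises: scrub sensitive data before any text leaves your environment. Below is a deliberately simple sketch; the regex patterns are illustrative stand-ins for real PII-detection tooling, not a production approach:

```python
# A minimal sketch of pre-filtering text before sending it to an external
# LLM API. The two patterns below are illustrative; real PII detection
# needs dedicated tooling, not a pair of regexes.
import re

EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
SSN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

def redact_pii(text: str) -> str:
    """Replace emails and SSN-shaped strings with placeholders."""
    return SSN.sub("[SSN]", EMAIL.sub("[EMAIL]", text))

print(redact_pii("Contact jane.doe@example.com, SSN 123-45-6789."))
# -> Contact [EMAIL], SSN [SSN].
```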

The other angle, though, is what I was alluding to earlier about the dark web. There’s always been sharing of information there: different kinds of [hacking] tools, different malware kits, ransomware as a service. All of these things are there for bad actors to leverage and repurpose to attack however they want.

It’s not going to be any different with ChatGPT and any variants of it. As of three weeks ago, there’s AutoGPT, which I won’t get into here, but you can do some pretty insane stuff with it. And by manipulating ChatGPT a little bit, you can get it to give you snippets of code to write a keylogger in Python.

And if you were a script kiddie, someone who doesn’t have a lot of coding experience, it’s basically spoon-feeding you all that information, Tom. Now, some people have complained that the ChatGPT we see today might give you some buggy code. Okay, well, AutoGPT can actually find those issues and fix them automatically. That’s all in GitHub right now, and people are doing things like regression testing and all sorts of things.
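
To illustrate the loop David is describing (without reproducing anything malicious), here’s a conceptual sketch of an AutoGPT-style “run the tests, feed failures back to the model, apply the fix, repeat” cycle. The ask_model parameter is a hypothetical stand-in for any LLM call:

```python
# A conceptual sketch of an automated test-and-fix loop, in the spirit of
# what AutoGPT does. ask_model is a hypothetical stand-in for any function
# that takes a prompt and returns revised source code.
import subprocess
from pathlib import Path

def run_tests() -> tuple[bool, str]:
    """Run the test suite and capture its combined output."""
    result = subprocess.run(["pytest", "-q"], capture_output=True, text=True)
    return result.returncode == 0, result.stdout + result.stderr

def fix_until_green(source: Path, ask_model, max_rounds: int = 3) -> bool:
    """Iteratively ask a model to repair code until the tests pass."""
    for _ in range(max_rounds):
        passed, output = run_tests()
        if passed:
            return True
        prompt = (f"Fix this code so the tests pass:\n{source.read_text()}"
                  f"\n\nTest failures:\n{output}")
        source.write_text(ask_model(prompt))
    return run_tests()[0]
```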

But will it affect the threat landscape? Absolutely. Bad actors are going to start using this. I mean, think about it: they could share reconnaissance information on targets and have it easily searchable with these types of tools. Malware techniques, coding techniques – all for malicious purposes – all of these can be accessed.

Keep learning about ChatGPT

These are just a few highlights of the webinar, cut down to give you key insights. If you’re hungry for more, watch the full webinar on Data Breach Today. Hear how David responds to these and other questions: 

  • What are the implications for both the good and the bad actors?
  • Is this going to change the skill sets of employees that companies need?
  • Should we have a policy or even an outright ban on generative AI?
  • What should we be telling children about ChatGPT? 
  • Is this really an application of AI or is it machine learning?
  • Where does this end? What’s in the more distant future?

How Transmit Security is using AI and ML

Transmit Security has always been on the leading edge of AI and machine learning, building it into our cybersecurity and identity solutions. We will be doing the same with generative AI, and you can expect an announcement from us soon. In every way possible, we’ve built intelligence and automation into the core of our identity and anti-fraud services. And by studying the threat landscape intimately, the Transmit Security Research Lab continually feeds algorithms with threat intelligence to detect the latest threat patterns.  

AI and machine learning (ML) are integral to our customer identity and access management (CIAM) services, including:

  • AI and ML-based fraud detection: replaces manually managed, rule-based anti-fraud tools with next-generation, AI/ML-native tool sets. Device fingerprinting, advanced bot detection and fraud prevention — powered by AI and ML — detect more attacks, even AI/ML-driven threats, with higher accuracy.
  • AI-based behavioral biometrics: proven to reduce fraud by 98%, behavioral biometrics looks at the user’s behavior in an active session to assess whether it’s consistent with the user’s typical behaviors. Activity is also compared to up-to-date threat intelligence, and any signal that strays from the norm is weighed as part of a holistic risk analysis (see the simplified sketch after this list).
  • AI-ML phishing-resistant authentication: non-phishable credentials minimize the risk of ChatGPT-like (AI-ML) phishing and social engineering to protect customer accounts from the most common attack MOs.
  • AI-based document validation: inspects and identifies fake or stolen IDs, with support for 10,000 global document types and data verification.
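
To give a feel for how a holistic risk analysis can combine behavioral signals, here’s a highly simplified sketch. The signal names, weights and step-up threshold are invented for illustration only; this is not Transmit Security’s actual model:

```python
# A highly simplified sketch of weighted risk scoring. Signal names,
# weights and the step-up threshold are invented for illustration only.
SIGNAL_WEIGHTS = {
    "typing_cadence_deviation": 0.4,
    "mouse_path_deviation": 0.3,
    "known_bad_device": 0.2,
    "impossible_travel": 0.1,
}

def risk_score(signals: dict[str, float]) -> float:
    """Combine normalized 0-1 deviation signals into one weighted score."""
    return sum(SIGNAL_WEIGHTS[name] * value for name, value in signals.items())

score = risk_score({
    "typing_cadence_deviation": 0.9,  # typing rhythm far from this user's norm
    "mouse_path_deviation": 0.7,
    "known_bad_device": 0.0,
    "impossible_travel": 0.0,
})
# score == 0.57 -> e.g. step up to stronger authentication above 0.5
```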


As our developers work with AI, they’re continually solving problems to improve fraud and risk detection. In a recent blog, “Solving AI’s Black-Box Problem with Explainable AI and SHAP Values,” we explain how we’ve overcome a key challenge in fraud detection. Our solutions are designed to help teams quickly detect complex risk patterns as they emerge and return recommendations with full transparency.
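
For a taste of what SHAP-based explainability looks like in code, here’s a small, self-contained example using the open-source shap package with a toy classifier. It mirrors the idea discussed in that blog post, not our production pipeline:

```python
# An illustrative use of SHAP values for explainable fraud scoring,
# assuming scikit-learn and the shap package. A toy model, not the
# production pipeline described in the blog post.
import shap
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Toy stand-in for transaction features (amount, velocity, device age, ...)
X, y = make_classification(n_samples=500, n_features=6, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

# SHAP attributes each prediction to per-feature contributions, showing
# how much each feature pushed this transaction toward "fraud" or "legit".
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X[:1])
print(shap_values)
```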

Discover how Transmit Security Detection and Response Service leverages AI and ML and start envisioning what’s on the horizon with generative AI. Very soon, we’ll begin to show you what’s possible.

Author

  • Brooks Flanders, Marketing Content Manager

    In 2004, the same year the U.S. launched the National Cyber Alert System, Brooks launched her career with one of the largest cybersecurity companies in the world. With a voracious curiosity and a determination to shed light on a shadowy underworld, she's been researching and writing about enterprise security ever since. Her interest in helping companies mitigate deceptive threats and solve complex security challenges still runs deep.
