Episode Transcript
[00:00:14] Speaker A: Hello, everybody. My name is Zach Schuller. I am the founder and chairman of the board for Ninjio. And joining me today we have Matt Lindley.
[00:00:25] Speaker B: I am the Chief Information and Information Security Officer at Ninjio.
[00:00:31] Speaker A: Don't you have another title, though?
[00:00:32] Speaker B: Yeah, it's a lot of AI background there. But officially, unofficially, Chief AI Officer.
[00:00:39] Speaker A: I like that. I like that the best.
[00:00:41] Speaker B: Yeah, it works for me.
[00:00:42] Speaker A: Yeah, yeah.
[00:00:43] Speaker B: Living in it.
[00:00:43] Speaker A: Yeah, exactly. The audience knows, like, every episode that we do here at Ninjio is based on an actual breach that's really happened.
And today we're going to talk about a pretty major breach. And in fact, we'll get to how major it is.
I'm going to have Matt here take it away and tell the story about what happened and then we'll get into how it happened. So, Matt.
[00:01:08] Speaker B: Yeah, thank you. First of all, happy to be here and excited to talk about this because it's really interesting. One Jaguar Land Rover.
[00:01:15] Speaker A: Wait, isn't it Jaguar?
[00:01:17] Speaker B: Sorry, Jaguar.
[00:01:18] Speaker A: Jaguar.
[00:01:19] Speaker B: Jaguar. Yes, that's right. Apologies to all those out there.
Let's just go with acronyms. JLR had a major breach and what happened was they. And actually it hasn't been officially released as to how exactly the attack happened, but they had their manufacturing facilities and IT systems shut down. They were shut down for over a month, halting the production of cars, halting many other aspects of their IT corporate network. And initially they didn't believe that any data was exfiltrated or stolen in that attack, and then later found out that in fact HR and employee payroll data was stolen and other data related to internal employees and the impact was massive. Production shut down for a month.
[00:02:14] Speaker A: That's insane. So is there any kind of theory out there right now as to what the bad actor was trying to do? Were they just trying to flex and say, hey, we can shut you down for a month?
Or was it getting at the payroll data and stuff like that? Because it seems like shutting down the manufacturing facility just to get it. You know, the standard stuff that every hacker wants to get at, it's a little extreme. So are there theories out there on
[00:02:43] Speaker B: why, yeah, this was done? The bad actors actually claimed it publicly. A group by the name of Scattered Lapsus.
And they are notorious for these types of attacks that will a deploy ransomware for revenue coming from the ransomware payments or just causing mass havoc? Yeah. And in this case, although we don't know exactly how it happened, the typical method that is used is social engineering. And either through email Phone or other.
It's plausible that particularly new attack that we're seeing out there in the wild could have been the, the compromise, the vector of compromise for this.
[00:03:28] Speaker A: So I understand that's actually not only plausible, but it's a plausible theory applied to this.
Tell us what the name of that particular attack is.
[00:03:36] Speaker B: Yeah. This attack is called Invisible Prompt Injection Attack.
[00:03:41] Speaker A: It just seems like these attacks get longer and longer every year that goes by.
[00:03:45] Speaker B: We've got to get creative as they see new attacks coming out.
[00:03:48] Speaker A: Yeah.
So tell me how this particular type of attack could work in an environment not only Jaguar, but like at any organization across the world?
[00:04:03] Speaker B: Yeah. This particular attack interacts with AI systems and the way it interacts, we'll get to that in a minute. But we probably all at this point have used AI either a handful of times or maybe it's integrated into our day to day work. And we're seeing more commonly now organizations have AI built into everyday workflows such as your email or writing documents, creating spreadsheets. This particular attack goes after AI that is integrating an AI assistant that is integrated into your email.
And commonly what you'd see is a button in your email client, Outlook or Gmail that says summarize with AI. And that's an AI assistant looking at the email itself, processing it and summarizing, what's this email about? How do I respond? Are there any additional action items I need to take on for this particular attack? Prompt Invisible Prompt Injection Attack. What happens is a bad actor will hide malicious code or instructions in the email. Or it could be a document attached to the email and that is invisible to the human eye. And oftentimes it could be like a
[00:05:31] Speaker A: super small font or. Exactly right, A font that's white on a white background or something of that nature.
[00:05:38] Speaker B: That's exactly it. Yeah, yeah.
[00:05:40] Speaker A: Okay.
[00:05:41] Speaker B: And what happens is when the AI assistant processes the email or document, it's going to be able to read that, whereas the human won't be able to see that naked invisible eye. Wow. And in some cases where bad actors use this technique, they're prompting to as an action item or a follow up, click on this link to do X. And these links, then they are malicious
[00:06:08] Speaker A: and they go from being, when the email is received, maybe nearly invisible or completely invisible, but then when the AI agent summarizes it, it'll come back in plain old black text, just like the rest of the email formatting just goes out the window and it comes back.
And now it's telling you to click a link and somebody could click the link and then it says something else and they download it and boom, ransomware is all over the organization or whatever the case may be. Whatever happens.
[00:06:42] Speaker B: And there's a key thing with that, right?
[00:06:43] Speaker A: Yeah.
[00:06:45] Speaker B: The AI assistant will return the summary and with those instructions with confidence.
It's giving you this instruction with confidence.
[00:06:56] Speaker A: So let me ask you this because I have my own thoughts on this, but we talk about don't click the link, don't click the link. And even if it was a regular email that came through with a malicious link and there was no AI agent and it came to one of our people or any employee anywhere, they're trained not to click the link.
So why do you think it is that when the AI summary comes back and it's telling them to do something and there's a malicious link there that they are either very likely or at least more likely to click on it, when the education has been don't click the link.
What are your thoughts about that?
[00:07:48] Speaker B: And that's a really good question because I tend to agree with you. I think we read about this stuff in the news so much. We receive the training on how to protect yourself from malicious links and go through those processes. But the difference is when it's coming from your AI assistant, there's this inherent trust that I think we've created with AI because it's, let's be honest, it's super helpful, it's efficient, it helps us get our work done, it'll help us research, and we put a lot of
[00:08:23] Speaker A: faith into it that what's coming back out of it is accurate.
[00:08:26] Speaker B: That's right. That's right.
[00:08:27] Speaker A: And you would think, okay, the AI is going to be smart enough to like, maybe filter out a malicious link, but if it doesn't do that and it just delivers a malicious link, write to us. I completely agree with you. Like, I would trust it.
[00:08:43] Speaker B: Yeah. And trust is, I think, the key factor in this. Whereas when we receive an email directly and we're processing that information, is this link or these instructions safe, unsafe? We're probably going to go through those steps of verifying before we trust. However, when it's from AI, there's this trust connection that we sometimes forget. We still need to do that.
[00:09:10] Speaker A: Yeah.
[00:09:10] Speaker B: We still need to verify before we trust.
[00:09:12] Speaker A: Yeah. And then the ninja sort of state of terms, we would call that obedience.
Right. They're being obedient to what the AI tells them to do.
This is fascinating. Matt, thank you for educating us on, on this new type of attack vector. I got to say. It's pretty scary. It's pretty scary.
And the one thing that I did read about Jaguar Land Rover was that the impact to the British economy was $1.9 billion as a result of this.
[00:09:51] Speaker B: Yeah.
[00:09:53] Speaker A: And that would go down by far as the most expensive breach in UK history, probably in European history.
Right. I can't say that for sure, but $1.9 billion, that's a lot of money and that's pretty insane. Thanks for coming on and talking about this. We're going to have many more of these. And audience, thank you so much for listening. And stay safe out there. Take care.