For years, fraud prevention was built on a simple assumption: the attacker is on the outside, trying to get in. Nearly every control in the modern detection stack reflects that model, focusing on blocking unauthorized access before it occurs.
With the introduction of advanced AI models such as Anthropic Mythos, that assumption is starting to break down. The fraud battle is no longer happening at the perimeter. It is increasingly moving inside the user’s device, within sessions that appear fully legitimate. Most organizations have not yet internalized what that shift means for their fraud and detection stack.
A new class of agentic AI has changed the shape of the problem, not by improving existing attack methods but by removing the need for them entirely. The attacker is no longer attempting to breach the session. They are operating inside it, using the same signals your systems rely on to establish trust. Most detection stacks were not designed for that reality.
The shift most organizations have not internalized
The latest generation of AI models can autonomously discover vulnerabilities in consumer applications, establish a foothold on the device, and operate from within an authenticated session. This is not a scenario that depends on phishing or social engineering, and it does not require the user to make a mistake. From the perspective of your fraud stack, everything appears normal. The device is recognized, the session is valid, and authentication has already been completed. Behavior often remains within expected ranges, because the attacker is working within the same context as the legitimate user. The system is functioning exactly as it was designed to. It just no longer reflects the threat it is meant to detect.
Why the device is now the attack surface
Enterprise environments are heavily monitored and tightly controlled, with layers of visibility and protection built into the endpoint. Consumer devices operate very differently, and that difference is now being actively exploited.
In most cases, these devices run without endpoint detection, generate little to no centralized telemetry, and operate on unpatched software for extended periods. Applications are often granted broad permissions with minimal oversight, and users typically operate with elevated privileges by default. This makes the consumer device one of the least protected systems in your ecosystem. At the same time, it remains one of the most trusted signals in your fraud stack. That gap between actual security and assumed trust is where this new class of fraud operates.
This is not an evolution of phishing
It is tempting to frame this as a more advanced version of existing attacks. That instinct is understandable, but it leads to the wrong conclusions about how to respond. Traditional fraud relies on the attacker being outside the session and trying to get past the perimeter. It depends on the user making a mistake, whether that is clicking a link, entering credentials, or approving a request they should not. Agentic fraud removes that dependency entirely. Once an AI agent is operating on the device, the problem is no longer access. It is control, and control is exercised from within a session your systems already trust.
What actually happens inside the session
Once a device is compromised, the sequence unfolds in a way that is both simple and difficult to detect. An agent establishes a foothold on the device and begins observing activity, mapping authentication flows, session behavior, and transaction patterns over time.
As it builds that understanding, it gains access to the same trust signals the user has already established. Authentication events, saved credentials, and session state all become part of the environment the attacker can operate within. By the time a transaction is executed, it is happening inside a session that appears fully legitimate. The attack may not generate the kind of signal your controls were built to detect.
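The sequence above can be modeled as discrete stages, each with telemetry that could in principle surface it. The sketch below is illustrative only: the stage names and signal strings are assumptions for the sake of the example, not a product taxonomy, and the point is that a perimeter-focused telemetry set detects none of the stages.

```python
from enum import Enum, auto

class AttackStage(Enum):
    """Stages of an agent-driven, in-session attack as described above."""
    FOOTHOLD = auto()      # agent establishes persistence on the device
    OBSERVATION = auto()   # maps auth flows, session behavior, transactions
    EXECUTION = auto()     # transacts inside the already-trusted session

# Hypothetical mapping of each stage to device-level telemetry that *could*
# reveal it. Most consumer devices emit none of these signals today.
STAGE_TELEMETRY = {
    AttackStage.FOOTHOLD: ["new background process", "unexpected accessibility permission"],
    AttackStage.OBSERVATION: ["api enumeration from device", "screen-reading activity"],
    AttackStage.EXECUTION: ["input cadence without matching sensor motion"],
}

def observable_stages(available_signals: set) -> list:
    """Return the attack stages a given telemetry set could detect."""
    return [stage for stage, signals in STAGE_TELEMETRY.items()
            if any(s in available_signals for s in signals)]

# A perimeter-focused stack collects network and login signals only:
print(observable_stages({"ip reputation", "failed login count"}))  # → []
```

Every stage is invisible to the perimeter signal set, which is the structural gap the rest of this piece describes.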
Why traditional controls fail
Most detection systems rely on a combination of device trust, authentication strength, and behavioral signals. Each of these continues to function as designed, but the context in which they operate has changed. Device fingerprinting confirms that the device is legitimate, which in this case it is. MFA confirms that the user authenticated, which they did. Behavioral biometrics continue to read as human, because the agent is designed to mimic human interaction at a high level of fidelity.
Rules and models that depend on known patterns struggle the most, because this type of activity does not align with previously observed fraud signatures. The result is not just degraded performance, but a structural blind spot that is difficult to compensate for.
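To make the blind spot concrete, consider a toy version of the control stack just described: device trust, completed MFA, and a behavioral score. The field names and threshold below are illustrative assumptions, not any vendor's actual schema; the sketch simply shows that every check passes for an agent operating inside a legitimate session.

```python
def perimeter_checks(session: dict) -> bool:
    """Toy model of classic controls: each works as designed, all pass here."""
    return bool(
        session["device_recognized"]         # fingerprint matches: it IS the user's device
        and session["mfa_completed"]         # the user genuinely authenticated
        and session["behavior_score"] > 0.8  # agent mimics human interaction closely
    )

# An agent-driven session inside the user's authenticated context:
agent_session = {
    "device_recognized": True,   # same physical device, legitimate fingerprint
    "mfa_completed": True,       # completed earlier by the legitimate user
    "behavior_score": 0.93,      # high-fidelity human-like input
}

print(perimeter_checks(agent_session))  # → True: no control fires
```

Each signal is truthful on its own terms; the failure is in the assumption that their conjunction implies the user is in control.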
The shift that matters
This is not a tooling problem. It is a model problem rooted in how fraud detection has been designed over time. Most approaches are built around identifying risk after a signal appears. That works when the attacker is external and attempting to gain access. When the attack originates from within the session, those signals either appear too late or do not appear at all.
The objective has to change. Instead of focusing on detecting anomalies after they surface, organizations need to continuously evaluate whether the session itself can still be trusted as it evolves. That requires a different set of capabilities, including understanding who or what is in control of a session, detecting automation at the device level, and evaluating in real time whether session behavior remains consistent with legitimate user activity. Trust can no longer be assumed based on a completed authentication event. It has to be re-established continuously.
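One way to express "continuous trust" is a session score that starts at full trust after authentication and is re-evaluated on every event rather than fixed at login. The sketch below is a minimal illustration under assumed signal names (`automation_suspected`, `deviates_from_profile`); a real implementation would draw these from device-level automation detection and behavioral profiling, not hand-set flags.

```python
from dataclasses import dataclass, field

@dataclass
class SessionTrust:
    """Trust is established at authentication, then decays per risky event."""
    score: float = 1.0
    history: list = field(default_factory=list)

    def observe(self, event: dict) -> None:
        # Hypothetical in-session signals; names are illustrative assumptions.
        if event.get("automation_suspected"):
            self.score *= 0.5   # device-level automation is a strong signal
        if event.get("deviates_from_profile"):
            self.score *= 0.8   # drift from the user's established behavior
        self.history.append((event, self.score))

    def trusted(self, threshold: float = 0.6) -> bool:
        return self.score >= threshold

trust = SessionTrust()                        # valid auth starts trust at 1.0
trust.observe({"type": "view_balance"})       # benign: score unchanged
trust.observe({"type": "add_payee", "automation_suspected": True})
print(trust.trusted())  # → False: trust decays mid-session despite valid auth
```

The design choice that matters is not the decay constants, which are arbitrary here, but that trust is a function of the whole session history rather than a single authentication event.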
What happens if nothing changes
As this model scales, the impact becomes predictable. Missed fraud rates increase because attacks no longer generate detectable signals, while false positives rise as systems attempt to compensate for what they cannot see. More importantly, when detection fails silently, organizations lose the ability to explain what happened or why controls did not trigger. That makes response, auditability, and recovery significantly more difficult over time. The issue is not that existing controls are ineffective. It is that they were built for a different threat model, one that assumed the attacker was always on the outside.
Where this goes next
The organizations that adapt will be the ones that recognize this shift early and adjust their approach accordingly. That does not mean simply adding more controls, but rethinking what trust means within a session. Once fraud moves inside the device, the question is no longer whether access was legitimate. The more important question is whether that trust still holds as the session unfolds.
To understand how this shift impacts your organization, request an executive briefing with our team. We will walk through the threat model, where existing controls break down, and what a path to continuous trust validation looks like in practice.