Hello Fraud Fighters!
This week, one of fintech's biggest infrastructure players bet big on AI for financial crime — and chose Claude to power it. Meanwhile… imposter scams are skyrocketing, lenders are watching fraud eat directly into their credit losses, and the NCUA's inspector general confirmed what nobody wanted to say out loud: sometimes the fraud is coming from inside the building.
Let's get into it.
Big Story: The AML Machine Wakes Up
There are roughly 12,000 banks in the world, and FIS sits inside 12% of them, roughly 1,400 institutions. So when FIS announces it's building an agentic AI financial crime system powered by Anthropic's Claude, it's a big deal.
The Financial Crimes AI Agent, announced last week, is designed to compress AML alert investigations from days to minutes. The agent automatically assembles evidence from across a bank's core systems, evaluates transactions against known typologies, and surfaces only the highest-risk cases for human investigator review. BMO and Amalgamated Bank are the first institutions working with it in development; general availability is targeted for H2 2026.
The numbers explain why this is overdue. US financial institutions spend between $35 and $40 billion annually on AML compliance. Investigators waste most of their time manually pulling records from disconnected systems before any real analysis can begin. The agent solves the evidence-assembly problem, not by replacing human judgment but by making the raw material for that judgment available instantly and completely.
What makes this worth watching beyond the press release is the deployment model. Anthropic's Applied AI team and forward-deployed engineers are embedded at FIS, co-designing the agent and establishing the evaluation frameworks that will allow FIS to build additional agents independently over time. The roadmap explicitly includes credit decisioning, deposit retention, onboarding, and fraud prevention. FIS CEO Stephanie Ferris stated: "The future is about a trusted provider who manages the data, who governs the agents, and who stands between your customers and the AI making decisions about their money."
The accountability structure matters too. Jonathan Pelosi, head of financial services at Anthropic, noted that every conclusion the agent reaches links back to its source data, and every decision stays with the investigator. For compliance teams thinking about how to explain agentic AI to regulators, that auditability is the value proposition.
The “so what” for fraud operators: the AML investigation workflow is about to become the first genuinely agentic process at scale inside regulated financial institutions. If your institution is on FIS infrastructure, this is worth understanding now — not when the sales cycle shows up at your door. And for everyone else: the clock is ticking on institutions still running AML on batch-era tooling while fraudsters operate in real time.
Quick Hit #1: The FTC's Imposter Scam Numbers Are Grim Reading
The FTC published its annual imposter scam consumer alert this week, and the headline is that imposter scams have now been the most-reported fraud category for nine consecutive years. In 2025, Americans filed more than one million reports about imposter scams, with reported losses up nearly 20% to $3.5 billion. Government impersonation reports were up 40%, driven in large part by a surge in fake toll text messages spoofing real programs like E-ZPass and FasTrak. Romance scam losses rose 22%.

That $3.5 billion sits inside a bigger number: the FTC received 3 million fraud reports in 2025 with total reported losses of $15.9 billion — up from $12 billion the prior year. As a reminder, these are reported losses only; the actual figure is a multiple of that. For fraud teams, the toll-text vector is worth flagging specifically — government impersonation via SMS is now industrialized and the volume is driving real downstream payment fraud as victims are pushed toward crypto ATMs and wire transfers.
Quick Hit #2: 93% of Lenders Say Fraud Is Compounding Their Credit Losses
Celent, commissioned by Zest AI, just published a survey on lending fraud. Of 115 US financial institutions surveyed, 93% say fraud contributes to their credit losses — and 82% say those losses got worse in 2026 compared to the prior year. The top three fraud types: synthetic identity fraud (61%), bust-out fraud (56%), and application stacking (55%).
The structural problem, as Celent principal analyst Craig Focardi puts it, is that these are all cross-institutional attacks, engineered to be invisible within any single lender's portfolio. A fraudster stacking applications across five institutions at once is undetectable if each institution is only looking at its own book. Yet fewer than a third of lenders currently use AI/ML fraud models, alternative data signals, or consortium-based intelligence, the exact tools built to catch what traditional controls miss. Three-quarters are increasing fraud tech budgets, but spending more on tools that weren't designed for the threat doesn't solve the problem. The Celent sentence that struck me most: poorly managed fraud risk "could lead to an existential crisis in the lending business." That's the actual language from the report.
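The per-lender blindness Focardi describes is easy to see in a toy consortium model. Here's a minimal sketch (the lender names, thresholds, and data feed are all hypothetical, and real consortium feeds would use hashed or tokenized identifiers rather than raw identities): each lender's own book shows a single unremarkable application, but pooling events by a shared identifier makes the stack visible immediately.

```python
from collections import defaultdict
from datetime import datetime, timedelta

# Hypothetical pooled consortium feed: (applicant_id, lender, timestamp).
# Each individual lender would only ever see its own row.
applications = [
    ("id-001", "Lender A", datetime(2026, 1, 5, 9, 0)),
    ("id-001", "Lender B", datetime(2026, 1, 5, 11, 30)),
    ("id-001", "Lender C", datetime(2026, 1, 6, 14, 0)),
    ("id-001", "Lender D", datetime(2026, 1, 7, 10, 0)),
    ("id-001", "Lender E", datetime(2026, 1, 8, 16, 0)),
    ("id-002", "Lender A", datetime(2026, 1, 5, 10, 0)),
]

def flag_stacking(apps, window=timedelta(days=7), min_lenders=3):
    """Flag applicants who hit min_lenders distinct lenders inside the window."""
    by_applicant = defaultdict(list)
    for applicant, lender, ts in apps:
        by_applicant[applicant].append((ts, lender))

    flagged = {}
    for applicant, events in by_applicant.items():
        events.sort()  # chronological order
        for i, (start, _) in enumerate(events):
            # Distinct lenders reached within `window` of this application
            lenders = {l for ts, l in events[i:] if ts - start <= window}
            if len(lenders) >= min_lenders:
                flagged[applicant] = sorted(lenders)
                break
    return flagged

print(flag_stacking(applications))
```

In this toy data, id-001 hits five lenders in four days and gets flagged; id-002 does not. Run the same check against any single lender's slice of the feed and nothing fires, which is exactly the visibility gap consortium intelligence exists to close.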
Quick Hit #3: Internal Fraud Killed Two Credit Unions Last Year and Cost $18M
The NCUA's inspector general has published its annual report on 2025 credit union failures and it’s uncomfortable reading: two of the six credit unions that failed last year — Unilever Federal Credit Union in New Jersey ($49M in assets) and Aldersgate Federal Credit Union in Illinois ($3.5M) — were taken down by internal fraud. Combined losses to the NCUA's Share Insurance Fund: approximately $18 million. Two individuals were referred to the DOJ for criminal prosecution.

Internal fraud is the vector nobody wants to talk about because it implicates controls, culture, and board oversight simultaneously. It also tends to stay hidden longer than external attacks — Aldersgate reportedly showed zero loan delinquencies for years, a red flag that examiners apparently didn't press hard enough on. The IG report is a reminder that your fraud program needs to point inward as well as outward. Every institution with concentrated operational authority and weak segregation of duties is carrying this risk, quietly.
Quick Hit #4: UK Friendly Fraud Hit £3.5 Billion Last Year
Not all fraud arrives via deepfake or dark web marketplace. Emerchantpay's recent research estimates UK consumers filed approximately £3.5 billion in chargeback claims through friendly fraud over the past 12 months (disputes filed by actual cardholders who received their goods or services and then claimed they didn't).
Friendly fraud sits in an awkward regulatory space. Consumers have legitimate dispute rights, and merchants rarely have the evidence infrastructure to contest them at scale. But £3.5 billion in a single market over twelve months points to a systemic drain on merchant margins with a cost that ultimately flows back into pricing. For payments and lending operators: chargeback analytics and representment tooling are increasingly not optional. The volume has moved well past what manual review can absorb.
This Week in Fraud is published for fintech operators, fraud teams, and risk professionals. Tips, feedback, or story leads: reply to this email or reach Nick Holland at [email protected].