The Problem
The SEC fined two investment firms a combined $400,000 for lying about their AI. Not for building bad AI. For claiming they had AI they never actually built.
In March 2024, the SEC charged Delphia (USA) Inc. and Global Predictions Inc. with making false statements about their use of artificial intelligence. Delphia told clients it used a "predictive algorithmic model" that analyzed spending patterns and social media data to "predict which companies and trends are about to make it big." The SEC's examiners found that Delphia never actually integrated that data into its investment process. The firm kept making these claims from 2019 to 2023 — even after the SEC warned them in 2021 to stop.
Global Predictions called itself the "first regulated AI financial advisor." When regulators asked for technical documentation to back that up, the firm couldn't produce it. Delphia paid $225,000. Global Predictions paid $175,000.
These weren't complex fraud schemes. They were marketing claims that outran engineering reality. And they represent a new category of corporate risk that your legal and compliance teams need to understand right now. If your company says "powered by AI" anywhere — on your website, in investor materials, in client pitches — you are exposed to this exact same enforcement risk. The SEC used existing antifraud laws, not new AI-specific rules. That means the legal framework to punish your company already exists today.
Why This Matters to Your Business
The $400,000 in SEC penalties was just the opening salvo. Multiple federal agencies are now actively hunting for AI claims they can challenge.
The FTC launched "Operation AI Comply" to target deceptive AI claims across the consumer economy. One of its first targets was DoNotPay — a service that marketed itself as "the world's first robot lawyer" — which settled with the FTC in early 2025 after the agency alleged the company couldn't prove its AI could actually replace a human attorney. The FTC also went after Evolv Technologies for allegedly misrepresenting how well its AI sensors could detect weapons in schools.
The Department of Justice announced its "Justice AI" initiative. Federal prosecutors will now evaluate your company's ability to manage AI risks as part of corporate compliance reviews. They've been told to seek harsher penalties when AI is used to enable white-collar crime.
Here's what this means for your bottom line:
- Regulatory fines are the floor, not the ceiling. The $225,000 Delphia penalty is small compared to the reputational damage and lost business that follows a public enforcement action.
- Your AI vendors create your liability. The FTC has established that if an AI tool serves as an "instrumentality" for deception, the provider and the enterprise customer can both be held responsible.
- State attorneys general are joining in. They're using existing consumer protection statutes to pursue AI marketing claims at the state level.
- Board exposure is real. If your company claims AI-driven capabilities in SEC filings or investor presentations, your officers can face personal liability under existing antifraud rules.
This isn't a future problem. The enforcement infrastructure is built and active.
What's Actually Happening Under the Hood
The root cause of AI washing is a technical misunderstanding of how modern AI actually works. Most AI applications today — including the ones your vendors may be selling you — are built on systems that predict the most likely next word in a sequence. They don't verify facts. They generate text that sounds plausible.
Think of it like autocomplete on your phone, but scaled to the size of the internet. Your phone doesn't know if the next word is true. It just knows what word usually comes next. Large Language Models (LLMs) — the AI systems behind tools like ChatGPT — work the same way, just with more data and more math. The model has no internal concept of truth. It predicts the most likely next token (a word or word fragment) based on patterns in its training data.
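The mechanism can be seen in miniature with a toy bigram model — a deliberately simplified stand-in for an LLM, using an invented training corpus. It always emits the statistically most common continuation, and nothing in the mechanism checks whether the result is true:

```python
from collections import Counter, defaultdict

# Toy illustration (not a real LLM): count which word follows which
# in a tiny corpus, then always predict the most frequent follower.
corpus = (
    "the firm uses ai . the firm uses data . "
    "the model predicts returns . the model predicts growth ."
).split()

counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def most_likely_next(word):
    # Returns the highest-frequency continuation seen in training.
    # Frequency, not truth, is the only criterion.
    return counts[word].most_common(1)[0][0]

print(most_likely_next("firm"))   # -> "uses"
print(most_likely_next("model"))  # -> "predicts"
```

A real model does this over billions of parameters and a far richer context, but the criterion is the same: statistical likelihood, not factual accuracy.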
This is why AI systems "hallucinate" — they generate citations that don't exist, financial figures that are wrong, or legal precedents that were never decided. In a regulated environment, "mostly correct" is legally the same as "incorrect." A hallucinated legal citation creates active legal liability for your organization.
Making this worse, the majority of "AI solutions" being sold to enterprises today are what the industry calls "wrappers." These are applications that connect to a public AI service through an API (a standard programmatic connection) and add a user interface on top. They're designed to minimize costs, often at the expense of accuracy. They can't verify their own reasoning. They simply pass along whatever the underlying model generates, regardless of whether it's true. Your company then puts its name and reputation behind that output.
What Works (And What Doesn't)
Let's start with what doesn't protect you:
- Prompt engineering alone. Carefully wording your instructions to an AI doesn't prevent it from hallucinating. It reduces the frequency but doesn't eliminate the risk.
- Thin API wrappers. Connecting to a public AI model and adding your branding gives you zero control over accuracy, data privacy, or audit trails. These wrappers are also vulnerable to corrupted or malicious documents being pulled into the model's context — a failure mode known as "retrieval poisoning."
- Manual spot-checking. Having a human review a random sample of AI outputs doesn't catch the errors in the outputs you didn't review. It creates a false sense of safety.
What actually works is a fundamentally different architecture. Here's how it functions in three steps:
Grounded retrieval through knowledge graphs. Instead of letting the AI search loosely for related information, you build a structured map of your domain knowledge — a Knowledge Graph. In legal AI, for example, every court opinion links to other opinions through specific relationships like CITES, OVERRULES, or AFFIRMS. The system can only reference what exists in this verified structure. This moves retrieval from a statistical guess to a verified path.
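A minimal sketch of this idea, with hypothetical case names and the edge types named above, shows why grounded retrieval can only return what the graph actually contains:

```python
# Sketch of grounded retrieval over a legal knowledge graph.
# Case names are invented; edge types match the examples above.
EDGES = {
    ("Case_A", "CITES", "Case_B"),
    ("Case_C", "OVERRULES", "Case_B"),
    ("Case_D", "AFFIRMS", "Case_A"),
}

def verified_references(case, relation):
    # Only returns targets reachable through an explicit, verified
    # edge. There is no fuzzy matching, so the system cannot
    # "retrieve" a relationship that was never recorded.
    return sorted(t for (s, r, t) in EDGES if s == case and r == relation)

def is_good_law(case):
    # A case overruled by any other case is flagged, with the exact
    # edge returned as evidence.
    overruled_by = [s for (s, r, t) in EDGES
                    if t == case and r == "OVERRULES"]
    return (len(overruled_by) == 0, overruled_by)

print(verified_references("Case_A", "CITES"))  # -> ['Case_B']
print(is_good_law("Case_B"))                   # -> (False, ['Case_C'])
```

The payoff is that every answer carries its evidence: the edge that justified it, not a probability score.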
Multi-agent verification. Rather than asking one AI model to research, verify, and write, you split the work across specialized agents. A "Research Agent" retrieves the raw data. A "Verification Agent" cross-checks it against your knowledge graph. A "Writer Agent" produces the final output using only verified facts. These agents review each other's work in cycles before any human sees the result.
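The three-agent handoff can be sketched as follows. The agent functions here are stand-ins for separate model calls with distinct prompts and tools, and the fact IDs are invented for illustration:

```python
# Research -> Verification -> Writer pipeline sketch.
# The "knowledge graph" is reduced to a dict of verified facts.
KNOWLEDGE_GRAPH = {
    "fact-001": "Delphia settled with the SEC for $225,000.",
    "fact-002": "Global Predictions settled for $175,000.",
}

def research_agent(question):
    # Retrieves candidate fact IDs. One hallucinated ID is included
    # deliberately to show why the verification stage matters.
    return ["fact-001", "fact-002", "fact-999"]

def verification_agent(candidate_ids):
    # Cross-checks each candidate against the knowledge graph and
    # silently drops anything without a verified source.
    return [f for f in candidate_ids if f in KNOWLEDGE_GRAPH]

def writer_agent(verified_ids):
    # Drafts output from verified facts only, keeping the citation
    # ID attached to every statement.
    return [(f, KNOWLEDGE_GRAPH[f]) for f in verified_ids]

draft = writer_agent(verification_agent(research_agent("SEC AI fines")))
for fact_id, text in draft:
    print(fact_id, text)  # fact-999 never reaches the writer
```

In production each stage would loop — the verifier can send work back to the researcher — but the structural point holds: no single model is trusted to both generate and approve its own claims.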
Sovereign deployment on your infrastructure. Your AI runs inside your own firewall — either on-premises or in a private cloud isolated from the public internet — so your intellectual property, client data, and trade secrets never leave your environment. That containment is often the most direct way to meet data-handling obligations under HIPAA, GDPR, and CCPA.
The audit trail advantage is what makes this approach defensible. Every output traces back to specific source documents through verified links. When a regulator, auditor, or opposing counsel asks "how did your AI reach this conclusion," you can show the exact path — from source data, through verification, to final output. That's the difference between a system you can defend and a system that becomes your next liability.
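One simple way to make that path concrete is a provenance record attached to each output. The field names below are illustrative, not a standard; the idea is that source documents are fingerprinted and the verification steps are logged, so the chain can be replayed later:

```python
import hashlib
import json

def fingerprint(text):
    # Content hash so a source document can be proven unchanged
    # at audit time.
    return hashlib.sha256(text.encode()).hexdigest()[:12]

def audit_record(output, sources, checks):
    # Binds one output to its exact sources and verification steps.
    return {
        "output_hash": fingerprint(output),
        "sources": [{"doc": name, "hash": fingerprint(body)}
                    for name, body in sources],
        "verification_steps": checks,
    }

record = audit_record(
    output="Case_B is no longer good law.",
    sources=[("opinion_case_c.pdf", "...Case_C overrules Case_B...")],
    checks=["edge OVERRULES(Case_C, Case_B) found in knowledge graph"],
)
print(json.dumps(record, indent=2))
```

When the question "how did your AI reach this conclusion" arrives, the answer is a record like this rather than a shrug.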
Your governance framework should pair the NIST AI Risk Management Framework, which supplies tactical risk controls, with ISO/IEC 42001, currently the only certifiable international standard for AI management systems. A third-party auditor can verify your conformance — giving you the documentation you need for procurement reviews, board reports, and regulatory submissions.
Finally, build an AI Bill of Materials (AIBOM) — a complete, machine-readable record of every component in your AI system. This includes training datasets, base models, third-party libraries, and infrastructure details. Your AIBOM becomes the single source of truth that proves your AI claims match your technical reality. That's exactly the documentation Delphia and Global Predictions couldn't produce when the SEC came calling.
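A minimal AIBOM sketch might look like the following. The field names and component names are hypothetical, not from any fixed schema; the point is a machine-readable record plus a check that marketing claims map to documented components:

```python
import json

# Hypothetical AIBOM entry for an invented internal system.
aibom = {
    "system": "contract-review-assistant",
    "version": "2.1.0",
    "base_models": [
        {"name": "example-llm-70b", "provider": "internal",
         "license": "proprietary"},
    ],
    "training_datasets": [
        {"name": "internal-contracts-2019-2024", "pii_reviewed": True},
    ],
    "third_party_libraries": [
        {"name": "faiss", "version": "1.8.0"},
    ],
    "infrastructure": {"deployment": "on-premises", "data_egress": "none"},
}

def substantiated_claims(aibom, marketing_claims):
    # A marketing claim survives only if the AIBOM documents a model
    # backing it - the substantiation Delphia could not produce.
    documented = {m["name"] for m in aibom["base_models"]}
    return [c for c in marketing_claims if c in documented]

print(json.dumps(aibom["infrastructure"]))
print(substantiated_claims(aibom, ["example-llm-70b", "magic-predictor"]))
```

Kept under version control alongside the system itself, a record like this is checkable in minutes when a regulator, auditor, or procurement team asks.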
Key Takeaways
- The SEC fined two firms a combined $400,000 for claiming AI capabilities they never built — using existing antifraud laws, not new AI regulations.
- The FTC, DOJ, and state attorneys general are all actively enforcing against exaggerated AI claims, creating multi-agency liability exposure.
- Most enterprise AI tools are thin wrappers around public models that cannot verify their own outputs or protect your data.
- Grounded architectures using knowledge graphs, multi-agent verification, and sovereign deployment create audit trails that regulators can verify.
- Pairing NIST AI RMF with ISO 42001 certification and maintaining an AI Bill of Materials gives your organization defensible documentation.
The Bottom Line
Federal regulators are actively fining companies for AI claims they can't prove. Your defense isn't better marketing — it's an architecture that produces verifiable outputs with complete audit trails. Ask your AI vendor: can you produce a complete AI Bill of Materials and show the verified source path for every output your system generates?