AI Compliance & Verification

Your AI Claims Are Now Evidence.
Can You Prove Them?

The SEC, FTC, and state attorneys general are enforcing AI marketing claims with the same tools they use for securities fraud. Three agencies, 53 class actions, and exposure that now extends to criminal charges. The question is no longer whether your AI works. It's whether you can prove it does what your filings say it does.

$42M+

Raised on fabricated AI claims (Nate Inc)

SEC/DOJ parallel charges, April 2025

53

AI-related securities class actions filed

Stanford Law, through H1 2025

$11.5M

Median settlement in AI securities suits

D&O Diary analysis, 2025

Veriprajna builds the verification architecture and substantiation documentation that makes AI claims defensible. Not governance dashboards. The actual evidence chain.

The Enforcement Landscape: Three Agencies, One Message

AI washing enforcement is bipartisan, multi-agency, and accelerating. The SEC created a dedicated unit for it. The FTC is running enforcement sweeps. State AGs have new statutory tools. Understanding who enforces what, and how, is the first step toward defensible compliance.

SEC (CETU)
  Legal framework: Advisers Act §206(2), Marketing Rule, Securities Act §17(a)
  Key precedent: Delphia ($225K), Presto (cease-and-desist), Nate ($42M fraud + DOJ criminal)
  What they ask for: Technical documentation proving AI capabilities match disclosures. Operational evidence of AI influence on decisions.
  Max exposure: Criminal charges (up to 20 years), civil penalties, disgorgement

FTC
  Legal framework: FTC Act Section 5 (unfair/deceptive practices)
  Key precedent: DoNotPay ("robot lawyer"), Workado (claimed 98% accuracy, tested at 53%)
  What they ask for: Evidence that AI performs as advertised. Accuracy metrics with real-world test methodology.
  Max exposure: Consent decrees, product bans, per-violation penalties

State AGs
  Legal framework: UDAP statutes, Colorado AI Act, Texas RAIGA, NY AI laws
  Key precedent: Colorado SB 205 (effective June 2026): impact assessments, consumer notification, $20K/violation
  What they ask for: Risk management programs, impact assessments, consumer disclosure records, human review processes.
  Max exposure: $15K-$20K per violation per day (NY/CO), civil investigative demands (TX)

DOJ
  Legal framework: Justice AI Initiative, wire fraud, securities fraud
  Key precedent: Nate Inc (parallel SEC/DOJ, criminal fraud charges against founder)
  What they ask for: Corporate compliance assessments. AI risk management evaluated as part of overall compliance.
  Max exposure: Federal criminal prosecution, enhanced sentencing for AI-facilitated fraud

EU (AI Office)
  Legal framework: EU AI Act Article 50, GPAI provisions
  Key precedent: Code of Practice on AI content labeling (final June 2026), Article 50 enforcement August 2026
  What they ask for: Machine-readable content marking, transparency documentation for GPAI models, C2PA-compatible provenance.
  Max exposure: Fines up to 3% of global annual turnover

The Enforcement Pattern

Every enforcement action follows the same logic: the agency compares what you said about your AI against what your AI actually does. Delphia claimed ML-powered investment decisions but never integrated the data. Presto claimed AI eliminated human order-taking when 70%+ of orders required humans. Nate claimed 90%+ automation when the rate was essentially zero.

The common failure isn't bad AI. It's the gap between marketing and technical reality, and the absence of documentation that could close it. The SEC's 2026 examination priorities explicitly state they will "review for accuracy registrant representations regarding their AI capabilities." If you can't produce a substantiation package on demand, you are exposed.

The Substantiation Problem: What Examiners Actually Ask For

Most enterprises have AI governance policies. Very few have substantiation. Governance tells you that you should document your AI systems. Substantiation is the documentation itself, tested and ready to produce under examination.

What a Substantiation Package Contains

  1. Claim-to-System Map: Every public AI claim (10-K, website, press releases, pitch decks) linked to the specific system component that delivers it. If your filing says "AI-driven risk analysis," the map shows which model, which data pipeline, and which decision point.
  2. Technical Evidence Binder: Model architecture documentation, training methodology, performance benchmarks against the specific metrics you've claimed. Tested, not theoretical.
  3. Operational Validation: Evidence that the AI actually influences the decisions you claim it influences. This is where Presto failed. The system existed, but it wasn't doing what the marketing said.
  4. AIBOM: Machine-readable inventory of every component. Training data lineage, model versions, third-party dependencies, infrastructure specs. SPDX 3.0 or CycloneDX 1.6 format.
  5. Continuous Monitoring Evidence: Logs showing ongoing validation. Drift detection results. Automated test outputs. Not a one-time snapshot, but a living record.
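The claim-to-system map at the heart of this package can be sketched as a small data structure. The class, field names, and sample claims below are illustrative stand-ins, not a formal schema:

```python
from dataclasses import dataclass, field

@dataclass
class Claim:
    """One public AI claim, linked to the system evidence behind it.
    All fields here are hypothetical examples, not a standard format."""
    text: str                 # verbatim claim, e.g. from a 10-K
    source: str               # where it was published
    system_component: str     # model / pipeline that delivers it
    evidence: list = field(default_factory=list)  # benchmark results, logs

    def is_substantiated(self) -> bool:
        # A claim with no linked component or evidence is an enforcement gap.
        return bool(self.system_component) and len(self.evidence) > 0

claims = [
    Claim("AI-driven risk analysis", "10-K Item 1", "risk-model-v3",
          evidence=["benchmarks/risk_v3_2025Q3.json"]),
    Claim("90%+ automation", "press release", "", evidence=[]),
]
gaps = [c for c in claims if not c.is_substantiated()]
```

In practice the map lives in a database with links to filing paragraphs and benchmark artifacts; the point is that every claim either resolves to evidence or surfaces as a gap.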

Where Most Firms Fall Short

  • No claim inventory. Marketing, investor relations, and engineering operate in silos. Nobody maintains a master list of what the company has publicly claimed about its AI.
  • Vendor claims treated as own claims. You use a third-party AI API and repeat the vendor's accuracy metrics in your 10-K. The SEC considers those your claims. Do you have independent validation?
  • Stale documentation. The model was documented at launch. Three versions and two retraining cycles later, the documentation describes a system that no longer exists.
  • No operational proof. The AI exists in production, but there's no evidence it actually influences the decisions described in disclosures. It may run alongside human decisions without meaningful impact.
  • Content verification gaps. AI-generated content (reports, analyses, marketing materials) lacks provenance tracking. If content is later found to contain hallucinations, there's no audit trail to the source.

A Concrete Example: The Content Verification Problem

An enterprise uses an LLM to generate financial analysis reports distributed to clients. The LLM cites a statistic: "Q3 revenue grew 12.4% year-over-year." The statistic is plausible but fabricated. The LLM generated it because the pattern of financial reports typically includes YoY revenue figures, and 12.4% is a statistically likely number for the sector.

In a standard RAG pipeline, the system retrieved a document that mentioned the company's revenue but did not contain the specific YoY figure. The LLM filled the gap. No verification layer caught it because the retrieval scored the document as "relevant" and the LLM's output was fluent and formatted correctly.

With a verification architecture: the system queries a structured knowledge graph for the specific metric. If the graph doesn't contain a verified Q3 YoY figure for that company, the output is blocked or flagged for human review. The audit trail shows exactly which claims were graph-verified and which were blocked. That audit trail is what an examiner can review.
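A minimal sketch of that gate, with a flat lookup table standing in for the knowledge graph (the company name and figures are invented):

```python
# Graph-verified facts only. A real system would query a knowledge
# graph; this dict is a hypothetical stand-in for illustration.
verified_metrics = {
    ("AcmeCorp", "Q3", "revenue_yoy"): "8.1%",
}

def check_claim(company, period, metric, stated_value):
    """Return (status, verified_value). Claims the graph cannot
    verify are blocked; mismatches are flagged with the true figure."""
    key = (company, period, metric)
    if key not in verified_metrics:
        return ("BLOCKED", None)          # no verified figure: human review
    if verified_metrics[key] != stated_value:
        return ("MISMATCH", verified_metrics[key])
    return ("VERIFIED", stated_value)

# A fabricated 12.4% would be caught before distribution:
status, true_value = check_claim("AcmeCorp", "Q3", "revenue_yoy", "12.4%")
```

Either outcome produces an audit-trail entry: the claim was checked, the graph disagreed or was silent, and the output never reached a client.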

The Vendor Landscape: Governance Platforms vs. Verification Architecture

The AI governance market is maturing quickly. Knowing what each category of vendor does well, and where the gaps are, helps you build a compliance stack that actually holds up under examination.

AI GRC Platforms
  Examples: Credo AI (Forrester Leader), OneTrust AI, WrangleAI
  What they do well: AI inventory management, policy packs, risk scoring, audit-ready compliance reports, regulatory mapping
  What they don't do: Don't build verification architecture. Don't produce claim-specific substantiation evidence. Don't construct AIBOMs at the technical level.

AI Lifecycle Governance
  Examples: IBM watsonx.governance (IDC Leader), Fiddler AI
  What they do well: Full ML lifecycle monitoring, drift detection, explainability, bias monitoring across IBM + third-party stacks
  What they don't do: Require IBM ecosystem buy-in for deepest features. Monitoring, not architecture. Can't build custom verification layers.

AI Auditing Specialists
  Examples: Holistic AI, Credo AI (audit module)
  What they do well: Algorithmic bias testing, fairness assessments, LLM hallucination/toxicity monitoring, shadow AI detection
  What they don't do: Assessment-focused, not remediation. Identify issues but don't build the systems that fix them.

AI Supply Chain / AIBOM
  Examples: Legit Security, OWASP AIBOM Generator, cdxgen
  What they do well: AIBOM generation, software supply chain security for AI, CI/CD integration
  What they don't do: Security-focused, not compliance-focused. Don't map AIBOMs to regulatory requirements or produce substantiation packages.

Content Authenticity
  Examples: C2PA/Content Credentials, Copyleaks, Reality Defender, Sensity AI
  What they do well: AI content detection, deepfake identification, provenance tracking, C2PA metadata embedding
  What they don't do: Detection, not prevention. Don't build the verification architecture that stops hallucinations before they reach production.

Big 4 / Large SIs
  Examples: Deloitte, KPMG, PwC, Accenture
  What they do well: Board-level AI strategy, ISO 42001 certification support, regulatory advisory, large-scale program management
  What they don't do: Advise on frameworks but typically don't build custom verification systems. Engagements run $500K-$5M+. Recommend platforms rather than building bespoke architecture.

Custom Verification (Veriprajna)
  Examples: Veriprajna
  What they do well: Claim substantiation audits, AIBOM engineering, knowledge graph verification layers, content verification pipelines, regulatory mapping across jurisdictions
  What they don't do: Not a platform. Each engagement is custom. Not suited for organizations that just need a governance dashboard.

Most enterprises need a combination: a governance platform for portfolio management and policy, and a specialized consultancy for the architecture and substantiation work underneath. The platform tracks that your AI system needs a fairness assessment. The architecture work builds the system that passes it.

What We Build

Each capability addresses a specific enforcement risk. We build these as custom systems integrated into your existing stack, not as off-the-shelf modules.

AI Claim Substantiation Audits

We inventory every public AI claim your organization has made: 10-K disclosures, website copy, press releases, investor presentations, marketing materials. Then we map each claim to the specific system component that delivers it and test whether the claim is accurate.

The output is an audit-ready evidence binder organized by claim, with technical documentation, operational validation results, and gap analysis. Your legal team can hand this to an SEC examiner without a scramble.

Approach: We use the same claim-to-reality comparison methodology the SEC applies in examinations. If Presto's auditors had done this before the 10-K filing, they would have caught the 70%+ human intervention rate before the SEC did.

AIBOM Engineering

We build machine-readable AI Bills of Materials integrated directly into your CI/CD pipeline. When your model version changes, a dependency updates, or training data is refreshed, the AIBOM updates automatically. No spreadsheets. No annual manual inventories that are stale by the time they're completed.

We work with both SPDX 3.0 (AI profile, released October 2024) and CycloneDX 1.6 (ML-BOM support). The choice depends on your existing SBOM tooling and regulatory requirements.

Approach: We use OWASP's AIBOM framework as the structural foundation and extend it with regulatory metadata fields mapped to Colorado AI Act impact assessment requirements and EU AI Act GPAI transparency obligations.
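A CI hook of this kind might look like the following sketch, which builds a CycloneDX-1.6-shaped ML-BOM fragment. The component names, dataset IDs, and property layout are placeholders, not a complete valid document:

```python
import json

# Illustrative CycloneDX-1.6-style ML-BOM fragment. "machine-learning-model"
# is a real CycloneDX component type; the values are placeholders.
aibom = {
    "bomFormat": "CycloneDX",
    "specVersion": "1.6",
    "components": [{
        "type": "machine-learning-model",
        "name": "risk-model",
        "version": "3.2.0",   # bumped automatically on retrain
        "properties": [
            {"name": "training-data", "value": "ds-2025q3-v7"},
            {"name": "base-model", "value": "vendor-llm-api-v2"},
        ],
    }],
}

def on_model_release(bom, new_version, dataset_id):
    """Hypothetical CI hook: keep the AIBOM synchronized with the
    deployed model instead of relying on manual inventories."""
    component = bom["components"][0]
    component["version"] = new_version
    component["properties"][0]["value"] = dataset_id
    return json.dumps(bom, indent=2)
```

Wired into the release pipeline, the hook fires on every model version bump or dataset refresh, so the inventory never drifts from production.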

Content Verification Architecture

For enterprises producing AI-generated content (financial analyses, compliance reports, client communications, marketing materials), we build the verification layer that prevents hallucinations from reaching production. This is knowledge graph grounding with citation enforcement: the AI cannot output a claim unless it can trace it to a verified source in the graph.

The architecture uses graph-constrained decoding rather than post-hoc fact-checking. Post-hoc checking catches errors after generation. Graph-constrained generation prevents them structurally.

Approach: We build domain-specific knowledge graphs with edge types that capture relationships standard vector retrieval misses. In financial content: SUPERSEDES, RESTATES, CORRECTS. In legal content: OVERRULES, AFFIRMS, DISTINGUISHES. The graph structure prevents the AI from citing overruled precedent as current law.
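As a sketch of how typed edges change retrieval, with a toy edge list standing in for a real graph store (the document IDs are invented):

```python
# Typed edges following the relation names above. A document pointed
# at by an invalidating edge must not be cited as current.
edges = [
    ("filing-2025-10K", "RESTATES", "filing-2024-10K"),
    ("case-B", "OVERRULES", "case-A"),
]
INVALIDATING = {"SUPERSEDES", "RESTATES", "CORRECTS", "OVERRULES"}

def citable(doc_id):
    """A document is citable only if no invalidating edge targets it."""
    return not any(rel in INVALIDATING and dst == doc_id
                   for _src, rel, dst in edges)
```

Vector similarity alone would happily retrieve case-A; the edge type is what encodes that case-B replaced it.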

Multi-Jurisdiction Compliance Mapping

Your AI systems face enforcement from the SEC, FTC, DOJ, at least six states with new AI laws (Colorado, Texas, California, New York, Illinois, Utah), and the EU AI Act if you serve European customers. Each has overlapping but non-identical requirements.

We build a unified compliance architecture: one documentation framework, one assessment methodology, one monitoring infrastructure that satisfies all applicable requirements. Not six separate compliance programs.

Approach: We start with NIST AI RMF as the structural backbone (it provides the affirmative defense under Colorado SB 205), layer ISO 42001 control requirements for organizations pursuing certification, and map jurisdiction-specific obligations into the framework as regulatory overlays.
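The overlay idea can be sketched as a union over per-jurisdiction requirement sets; the requirement names below are shorthand for the obligations described above, not statutory text:

```python
# Hypothetical regulatory overlays: each jurisdiction contributes a
# set of obligations; the framework satisfies the union once.
overlays = {
    "colorado_sb205": {"impact_assessment", "consumer_notice", "risk_program"},
    "eu_ai_act_art50": {"content_marking", "transparency_docs"},
    "sec_marketing_rule": {"claim_substantiation"},
}

def unified_requirements(applicable):
    """One merged checklist instead of one compliance program per
    regulator; overlapping obligations are satisfied exactly once."""
    requirements = set()
    for jurisdiction in applicable:
        requirements |= overlays[jurisdiction]
    return sorted(requirements)
```

Adding a jurisdiction means adding an overlay, not standing up a parallel program.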

AI Technical Due Diligence

For M&A transactions, VC reviews, board reporting, or pre-IPO readiness: independent technical assessment of whether AI systems perform as represented. We conduct both black-box testing (does the system meet its stated requirements from a user perspective?) and, where access permits, white-box analysis (model architecture, training methodology, dependency review).

The deliverable is an independent assessment report that addresses the specific questions investors, acquirers, or board members are asking. Not a framework overview. A verdict on whether the AI claims are substantiated, with evidence.

Approach: We evaluate against the four criteria the SEC uses: (1) are representations fair and accurate, (2) do operations match disclosures, (3) do AI outputs align with stated strategies, (4) are controls adequate. The same standard an examiner applies, but conducted proactively.

How We Work

Every engagement starts with understanding your specific exposure. The scope depends on whether you need a pre-examination substantiation package, a content verification system, or a comprehensive compliance architecture.

Phase 1

AI Claim Inventory

We catalog every public AI claim across all channels: SEC filings, website, press releases, pitch decks, marketing materials. Each claim is tagged by regulatory surface (SEC, FTC, state, EU).

Typical: 2-3 weeks

Phase 2

Gap Analysis

We test each claim against technical reality. Where documentation exists, we validate it. Where it doesn't, we flag the gap. The output is a prioritized risk map: which claims carry the highest enforcement exposure with the weakest substantiation.

Typical: 3-4 weeks

Phase 3

Build & Remediate

We build what's missing: substantiation packages, AIBOM pipelines, verification architecture, compliance documentation. For content systems, this includes the knowledge graph and validation layers. For claims, this means revising language or building evidence to support them.

Typical: 6-12 weeks (varies with scope)

Phase 4

Continuous Validation

We deploy automated monitoring that flags when system behavior drifts from documented claims. Weekly test suites compare actual AI performance against substantiation package assertions. AIBOM stays synchronized with production. Compliance mapping updates as regulations evolve.

Ongoing, with quarterly reviews
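A single check in such a suite might look like this sketch; the claimed figure, measured figure, and tolerance are illustrative:

```python
# Weekly claim-vs-reality check: compare live performance against the
# figure asserted in the substantiation package. Threshold is a
# hypothetical policy choice, not a regulatory standard.
def validate_claim(claimed: float, measured: float, tolerance: float = 0.02):
    """Flag drift when production performance falls materially below
    the documented claim."""
    drift = claimed - measured
    return {"drift": round(drift, 4), "flag": drift > tolerance}

# Claimed 95% accuracy in disclosures; this week's measurement is 91%.
result = validate_claim(claimed=0.95, measured=0.91)
```

A flagged result triggers the same decision as the audit: tighten the system or revise the claim before an examiner finds the gap.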

Honest Caveats

  • We can't make false claims true. If your AI genuinely doesn't do what your marketing says, the remediation is either building the capability or revising the claims. We'll tell you which path is faster and cheaper.
  • ISO 42001 certification takes time. For large enterprises starting from scratch, expect 6-12 months and $90K-$200K+ in year one. We can accelerate with existing ISO 27001 overlap (40-50% time reduction), but there are no shortcuts to a legitimate certification.
  • Content verification architecture requires domain investment. Building a knowledge graph for financial, legal, or medical content is labor-intensive. Typical timeline is 3-6 months to production-ready for a single domain. This is the hardest and most valuable piece of the architecture.
  • The regulatory landscape is shifting. The Trump administration's December 2025 executive order proposes federal preemption of state AI laws. Until courts rule, state laws remain enforceable. We design for the most conservative interpretation and adjust as clarity emerges.

AI Claim Risk Assessment

Evaluate your organization's exposure to AI washing enforcement. Answer these questions about your AI claims and documentation to get a preliminary risk profile. This assessment is based on the enforcement patterns from SEC, FTC, and state AG actions.

1. Do you maintain an inventory of every public AI claim your organization has made (10-K, website, press releases, pitch decks)?

2. For each AI claim, can you produce technical documentation proving the system does what you say it does?

3. Do you use third-party AI APIs and repeat the vendor's capability claims in your own materials?

4. Do you have an AI Bill of Materials (AIBOM) tracking training data, model versions, and third-party dependencies?

5. Does your AI generate content distributed to clients, investors, or the public?

6. Are you subject to Colorado AI Act, Texas RAIGA, or similar state AI laws taking effect in 2026?

Questions GCs and CCOs Are Asking

How do we substantiate AI claims for SEC compliance?

SEC examiners under the 2026 priorities are checking whether your operations match your disclosures. Substantiation requires three layers of evidence. First, a technical documentation package that maps every public AI claim to the specific system component that delivers it. If your 10-K says you use machine learning for portfolio optimization, the package must show the model architecture, training methodology, input data sources, and performance metrics that prove the claim.

Second, operational evidence showing the AI actually influences decisions. Presto Automation's failure was claiming AI eliminated human order-taking when 70%+ of orders required human intervention. The SEC doesn't just ask "do you have AI?" They ask "does the AI do what you said it does, and can you prove it?"

Third, an ongoing monitoring framework. A substantiation package that was accurate at filing time but becomes stale is still a liability. We build continuous validation pipelines that flag when system behavior drifts from documented claims. This includes automated testing suites that run against your AI systems weekly, comparing actual performance metrics against the specific claims in your disclosures. The output is an audit-ready evidence binder that your legal team can hand to an examiner without scrambling.

What is an AI Bill of Materials and do we need one?

An AI Bill of Materials (AIBOM) is a machine-readable inventory of every component in your AI system: training datasets with lineage documentation, base models with version history, third-party libraries and their licenses, infrastructure specifications, and governance metadata. Think of it as a nutrition label for AI.

The standards landscape is converging around two formats: SPDX 3.0 (which added an AI profile in October 2024) and CycloneDX 1.6 (which added ML-BOM support). OWASP launched a formal AIBOM project with tooling in late 2025.

You likely need one if you operate in any of these scenarios: your AI systems touch regulated decisions (lending, hiring, healthcare), you make public claims about AI capabilities that regulators could challenge, you're subject to EU AI Act GPAI transparency obligations (effective August 2025 for general provisions), or you're preparing for Colorado AI Act compliance (effective June 2026), which requires impact assessments that an AIBOM directly supports.

Most enterprises today track AI components in spreadsheets or not at all. We build AIBOMs integrated into your CI/CD pipeline so they stay synchronized with production. When your model version changes or a dependency updates, the AIBOM updates automatically. The practical value isn't just regulatory defense. It's knowing exactly what's in your AI stack when an incident happens, when an auditor asks, or when you need to trace a hallucination back to its source.

How does the SEC CETU unit investigate AI washing?

The Cybersecurity and Emerging Technologies Unit (CETU) was created in February 2025 specifically to handle AI-related enforcement. Based on the Delphia, Global Predictions, Presto, and Nate cases, the investigation pattern is consistent. CETU starts with your public representations: website copy, SEC filings, investor presentations, press releases, and social media. They compare these claims against technical reality through document requests and examinations.

The specific areas they probe include whether the AI technology described in marketing materials actually exists and is deployed in production, whether the AI influences the decisions or outcomes you claim it influences (Presto said AI eliminated human intervention when it hadn't), whether performance metrics you cite are based on actual system measurements or projections, and whether third-party AI components are properly disclosed rather than presented as proprietary capability.

The Nate case is particularly instructive. The founder claimed AI automation rates above 90% when the actual rate was essentially zero, with hundreds of manual contractors in the Philippines processing transactions. The SEC and DOJ filed parallel actions, and the criminal charges carry up to 20 years. CETU doesn't require new AI-specific legislation to pursue these cases. They use existing antifraud statutes: Section 206(2) of the Advisers Act, the Marketing Rule, and Section 17(a) of the Securities Act. The legal theory is straightforward. If you said it and it's not true, that's fraud.

What's the difference between AI governance platforms and what Veriprajna does?

Platforms like Credo AI, IBM watsonx.governance, and OneTrust AI Governance are monitoring and policy management tools. They help you inventory AI systems, assign risk levels, track policy compliance, and generate reports. They're valuable for ongoing governance operations.

What they don't do is build the verification architecture underneath. A governance platform can tell you that your content generation system is flagged as high-risk and needs a fairness assessment. It cannot build the knowledge graph grounding layer that prevents that system from hallucinating in the first place. It cannot produce the technical substantiation package that proves your 10-K claims are accurate. It cannot construct the AIBOM pipeline that keeps your component inventory synchronized with production.

Think of it this way: a governance platform is the dashboard. We build the engine it monitors. In practice, most enterprises need both. The platform manages the portfolio view, policies, and reporting workflows. The custom verification architecture under each AI system is what makes the claims defensible. We work alongside your existing governance tooling, not instead of it. We also handle the bespoke work that platforms cannot automate: claim-by-claim substantiation audits, custom verification pipelines for specific AI systems, and the integration work that connects your AI architecture to your compliance documentation chain.

How do we prepare for the Colorado AI Act and other state AI laws taking effect in 2026?

Colorado SB 205 takes effect June 30, 2026, and it's the most prescriptive state AI law to date. If you deploy high-risk AI systems that make or substantially influence consequential decisions (employment, lending, insurance, housing, education, healthcare, legal services), you need a risk management policy and program, an impact assessment for each high-risk system before deployment and annually thereafter, consumer notification when AI makes consequential decisions, a mechanism for consumers to correct data and appeal decisions with human review, and documentation sufficient to demonstrate reasonable care.

The penalty is up to $20,000 per violation, enforced by the Colorado AG. There's an affirmative defense if you follow NIST AI RMF or equivalent framework and discover/cure violations proactively. Texas is different but parallel. The Responsible AI Governance Act (effective January 2026) gives the AG broad civil investigative demand power from a single complaint. New York's AI laws authorize AG enforcement at $15,000 per day per violation for certain AI applications.

The practical challenge is that these laws have overlapping but non-identical requirements. We build a unified compliance architecture that satisfies all applicable state requirements through a single documentation and assessment framework, rather than maintaining separate compliance programs for each jurisdiction. This starts with an AI system inventory, maps each system to applicable state requirements, identifies gaps, and builds the assessment and monitoring infrastructure to maintain compliance as both your AI systems and the regulatory landscape evolve.

Can we handle AI verification internally or do we need outside help?

It depends on what you mean by verification. If you have a mature compliance team, in-house ML engineers who understand your AI systems deeply, and legal counsel experienced with SEC and FTC AI enforcement precedent, you can build much of the framework internally. The NIST AI RMF is free and provides a solid foundation. OWASP's AIBOM generator is open source. ISO 42001 has detailed control requirements you can implement without a consultant.

Where internal teams typically hit limits: first, the substantiation gap. Your engineering team built the AI system. They may not be the right people to objectively document whether it matches marketing claims, because they're often the ones who briefed marketing in the first place. An independent assessment carries more weight with examiners. Second, cross-domain expertise. AI verification sits at the intersection of ML engineering, securities law, compliance operations, and regulatory affairs. Few internal teams have depth in all four. Third, the architecture problem. Governance platforms manage policies. But building a citation-enforced retrieval system, a knowledge graph verification layer, or a continuous claim validation pipeline requires specialized AI architecture work that's different from your core product engineering.

Fourth, speed. If enforcement risk is imminent, like a 10-K filing deadline, a shareholder demand letter, or an SEC examination notice, internal teams rarely have the capacity to build a substantiation package from scratch while maintaining normal operations. The honest answer: start internal. Inventory your AI claims. Map them to systems. Identify where documentation is missing. That exercise alone reveals whether the gaps are manageable internally or require specialized build work.


The Median AI Securities Settlement Is $11.5 Million

A substantiation audit costs a fraction of that. Start with a claim inventory.

The SEC's CETU unit, the FTC's Operation AI Comply, and state AGs with new enforcement tools are all asking the same question: can you prove your AI does what you say it does? We build the evidence that answers yes.

AI Claim Substantiation Audit

  • ▸ Complete AI claim inventory across all public channels
  • ▸ Claim-to-system mapping with technical validation
  • ▸ Gap analysis prioritized by enforcement exposure
  • ▸ Audit-ready evidence binder for SEC/FTC examination

Verification Architecture Build

  • ▸ AIBOM engineering with CI/CD pipeline integration
  • ▸ Content verification with knowledge graph grounding
  • ▸ Multi-jurisdiction compliance mapping (SEC/FTC/state/EU)
  • ▸ Continuous validation and drift monitoring deployment