
64 Million People Applied for a Job. A Password of "123456" Gave Away Their Secrets.
I was on a call with a prospective client — a mid-size logistics company — when the McHire story broke. My co-founder pinged me a link mid-sentence. I glanced at it, read the first two lines, and went completely silent. The client asked if I was still there.
"Sorry," I said. "I just read that McDonald's AI hiring platform — the one that screens millions of applicants — was protected by the password '123456.' And someone just walked in."
There was a long pause. Then the client said: "That's basically our setup."
He was half-joking. But only half.
The McHire breach of June 2025 exposed the personal data of approximately 64 million job seekers — names, emails, phone numbers, IP addresses, chat transcripts with an AI recruiter named "Olivia," and, most disturbingly, their personality test results. The vector wasn't a sophisticated nation-state attack. It wasn't a zero-day exploit that required a team of elite hackers. It was a default admin password that had sat unchanged since 2019, on an account with no multi-factor authentication, guarding an API that let anyone iterate through applicant IDs in a browser's address bar.
When I tell people what we do at Veriprajna — building AI systems with security and governance baked into the architecture — I sometimes get the polite nod that means "sure, but isn't that overkill?" The McHire breach is my answer. It's not overkill. It's the bare minimum. And most companies aren't even doing that.
What Actually Happened Inside the McHire Platform?

The breach wasn't discovered by a threat intelligence team or a government agency. It started with two security researchers — Ian Carroll and Sam Curry — who noticed something mundane: users were complaining that the "Olivia" chatbot was buggy. The front-end experience was clunky and unreliable.
That detail matters. In my experience, a broken front end is almost always a signal. If a company hasn't invested in the part users see, imagine what's happening in the parts they don't.
Carroll and Curry started poking around and found a management portal intended for Paradox.ai employees — the vendor that built and operated McHire on McDonald's behalf. They tried a test account. The username? "123456." The password? "123456." It worked.
I remember reading that and feeling a specific kind of anger that anyone who's ever built production systems will recognize. It's not surprise — it's the fury of knowing that this was entirely preventable. This wasn't a subtle misconfiguration in a Kubernetes cluster. It was the digital equivalent of leaving the vault door open with a Post-it note that says "key under mat."
But the password was only stage one. Once inside, the researchers discovered an Insecure Direct Object Reference vulnerability — an IDOR, in security parlance. This meant the API didn't verify whether a logged-in user was actually authorized to see a specific applicant's data. By changing the applicant ID number in the URL — literally just incrementing a number — they could pull up the complete records of any applicant in the system.
Sixty-four million of them.
Why Personality Test Data Is the Worst Kind of Data to Leak
Here's where most coverage of this breach gets it wrong. The headlines focused on the password — "123456," ha ha, how dumb — and moved on. But the real catastrophe isn't the credential. It's what was behind it.
Credit card numbers can be canceled. Passwords can be changed. But personality assessment results? Behavioral screening scores? The transcripts of a conversation where an AI probed your temperament, your emotional responses, your conflict style?
That data is you. It doesn't expire.
When a personality profile leaks, you can't rotate it like a password. Your psychometric fingerprint follows you forever.
I spent a late night after the breach reading research on the psychological impact of data exposure. The numbers are staggering: nearly 70% of breach victims report a persistent inability to trust others. Two-thirds experience profound helplessness. Studies have linked personal data exposure to anxiety, depression, and PTSD. And the severity scales with the intimacy of the data — a leaked email address stings; a leaked personality assessment that says you're "emotionally unstable" or "low conscientiousness" can feel like a public dissection.
For job seekers — many of them young, many applying for their first job at a fast-food chain — this is especially cruel. They submitted to a personality test because an AI told them to. They had no meaningful way to understand what data was being collected, how it was stored, or who could access it. And now that data is out there, potentially forever, in a world where future employers, insurers, or bad actors could use inferred traits against them.
My team had an argument about this. One of our engineers said, "Look, the data was exposed but probably not actually exfiltrated at scale — the researchers reported it responsibly." And technically, that's true. Paradox patched the vulnerability within hours of notification. But I pushed back hard. The point isn't whether this specific dataset ended up on a dark web forum. The point is that the architecture allowed it. The system was designed in a way where a default password and a browser were sufficient to access the psychometric profiles of 64 million people. That's not a near-miss. That's a design philosophy failure.
The Developer in Vietnam and the Password That Unlocked Everything
There's a subplot to this story that didn't get enough attention. Investigations revealed that a Paradox.ai developer based in Vietnam had been compromised by a malware strain called Nexus Stealer — a credential-theft tool sold on cybercrime forums. The infection exfiltrated hundreds of passwords from the developer's device. Many of them were poor and recycled, using the same base seven-digit password across multiple services.
That single compromised developer exposed credentials associated with Paradox.ai accounts for clients including Pepsi, Lockheed Martin, Lowe's, and Aramark.
I want you to sit with that for a moment. One person. One infected laptop. One reused password. And suddenly the hiring data for some of the largest employers in America is at risk.
This is what I call the "human node" problem, and it's the thing that keeps me up at night far more than any exotic AI attack vector. You can build the most sophisticated model in the world, fine-tune it on pristine data, wrap it in guardrails — and then a single developer's password hygiene collapses the entire house of cards. The average cost of a data breach in 2025 hit $4.44 million. But organizations keep treating identity management as an afterthought, something the IT team handles with an annual training video nobody watches.
The security of your AI system is never stronger than the weakest human credential in the chain.
At Veriprajna, we've built our architecture around the assumption that human access is a high-risk vector requiring continuous verification — what the industry calls Zero Trust. Not because we don't trust our team, but because I've seen what happens when you trust any single point of authentication to hold the line.
What Does "Deep AI" Actually Mean — and Why Should You Care?
I need to introduce a distinction that I think is the most important idea in enterprise AI right now, and one that the McHire breach illustrates perfectly: the difference between an AI Wrapper and what we call Deep AI.
An AI Wrapper is what most companies are actually building when they say they're "doing AI." It's a thin application layer — often a chatbot or a form — that sends user inputs to a foundational model like GPT-4 or Claude via an API, gets a response, and displays it. The AI is a service you're renting. Your application is the storefront. The security, the data management, the governance — that's all bolted on afterward, using the same web development practices you'd use for any CRUD app.
Paradox.ai's "Olivia" was, architecturally, a wrapper. A sophisticated one, sure. But the security posture was anchored to traditional web infrastructure — and that infrastructure failed at the most basic level imaginable.
Deep AI is fundamentally different. It treats the AI model as an architectural primitive — like a database or a message queue — with its own security boundaries, its own access controls, its own audit trails. The model isn't a black box you call; it's a component you govern. You build prompt routers, memory layers, feedback evaluators. You implement layered defenses that assume every input is hostile and every output is untrusted.
I wrote about this architectural philosophy in depth in the interactive version of our research, but the core insight is simple: if your AI security strategy is "we'll add auth and a WAF," you're building a wrapper, and you're one default password away from catastrophe.
The 5-Layer Defense Nobody Wants to Build

After the McHire news, I pulled my engineering team into a room and said: "Walk me through exactly how our stack would have prevented this." Not because I doubted them — but because I wanted to pressure-test every assumption.
We spent three hours on it. At one point, our lead security engineer drew a diagram on the whiteboard that looked like a medieval castle's cross-section — concentric rings of defense, each operating independently. If one falls, the next holds. Here's what that looks like in practice:
The outermost ring is input sanitization — every prompt, every API call gets stripped of anything that could be misinterpreted as an injection command. The second ring is heuristic threat detection, actively scanning for known adversarial patterns. The third is meta-prompt wrapping, where the user's request gets enclosed in a secure envelope of instructions the model can't override.
The fourth ring is where it gets interesting: canary and adjudicator models. A smaller model analyzes the request first. If it flags something suspicious, a second model makes the final call. It's a buddy system for AI — no single model gets to act unilaterally.
The fifth and innermost ring is output validation. Every response the AI generates is treated as untrusted until proven otherwise. PII redaction layers scan for sensitive data. Toxicity classifiers check for harmful content. Nothing gets through without being inspected.
Here's the thing that frustrated me during that whiteboard session: none of this is exotic. None of it requires a research breakthrough. It's engineering discipline applied to a new domain. The reason most companies don't do it is because it's expensive, it's slow, and it doesn't demo well. A wrapper chatbot can be built in a weekend and shown to a board on Monday. A properly governed AI system takes months. Guess which one gets funded.
The AI industry has a demo problem: the thing that impresses investors in a pitch is architecturally opposite to the thing that protects users in production.
Why Does the Law Treat Personality Data Like It's Radioactive?
A question I get from every CTO I talk to: "How bad is the legal exposure here, really?"
The answer: potentially existential.
Under the CCPA, a business can be sued if unencrypted personal information is stolen due to a failure to maintain "reasonable security procedures." Statutory damages are $750 per consumer per incident. Multiply that by 64 million records and you're looking at a theoretical liability of $48 billion. No court would award that full amount, but even a fraction is company-ending.
Under GDPR, the penalties cap at €20 million or 4% of global annual turnover — whichever is higher. And the EU AI Act, which classifies recruitment AI as "high-risk," introduces fines of up to €35 million or 7% of global turnover for non-compliance with mandatory risk assessments and human oversight requirements.
But here's what most legal analyses miss: the reputational damage is worse than the fines. I talked to a CHRO at a Fortune 500 company a few weeks after the breach. She told me her team had been evaluating AI hiring tools and had shortlisted three vendors. After the McHire story, the CEO killed the entire initiative. "We'll do it manually for another year," he said. "I'm not going to be the next headline."
That's the real cost. Not just to Paradox.ai, but to every legitimate AI company trying to build trust with enterprise buyers. One catastrophic breach poisons the well for everyone.
How Do You Actually Govern AI That Makes Decisions About People?
This is where I have to be honest about something uncomfortable: governance frameworks sound boring. ISO 42001, NIST AI RMF, OWASP Top 10 for LLMs — these are not the things that get founders excited at dinner parties. But they are the things that separate companies that survive regulatory scrutiny from companies that don't.
ISO 42001 is the world's first international standard for AI management systems. It requires organizations to identify AI-specific risks, establish clear objectives for transparency and safety, conduct impact assessments for each AI system, and maintain continuous monitoring through internal audits. It's not a checkbox exercise — it's a management system that forces you to think about AI governance the way you think about financial controls.
The NIST AI Risk Management Framework provides the policy anchor, organized around four functions: GOVERN, MAP, MEASURE, MANAGE. In the Paradox breach, the GOVERN function failed most conspicuously — there was no organizational accountability for decommissioning the stale admin account that had been sitting there since 2019.
And the OWASP framework — particularly its 2025 update for agentic AI — gives developers a ranked taxonomy of the most critical vulnerabilities. Agent Goal Hijack, where malicious content alters an agent's core behavior. Tool Misuse, where an agent is tricked into using a legitimate capability for a harmful purpose. Memory Poisoning, where bad data gets injected into a persistent agent's long-term memory.
For the full technical breakdown of how these frameworks intersect, including implementation specifics and a 90-day CXO roadmap, I've published a detailed companion paper. But the executive summary is this: by 2026, AI governance won't be optional. It will be a prerequisite for doing business with any enterprise that has a legal team.
"Can't We Just Add Security Later?"
People ask me this constantly. The answer is always the same, and it's always uncomfortable: no. You can't.
Security that's bolted on after the fact is security theater. It's a lock on a door that's already been removed from its hinges. The McHire breach proves this — Paradox.ai had authentication. They had an admin portal. They presumably had some security review process. But because security wasn't embedded in the architecture from day one, the entire system was only as strong as a password that a toddler could guess.
Another objection I hear: "But we use a major cloud provider. Isn't their security good enough?" The Paradox developer in Vietnam was compromised by commodity malware — not a cloud infrastructure exploit. Your cloud provider can have perfect security and your system can still be breached because a developer reused a password across services. The perimeter isn't where you think it is.
And then there's the one that makes me genuinely angry: "Our AI vendor handles security." This is exactly what McDonald's thought. They outsourced their AI hiring to Paradox.ai and, in doing so, outsourced their security posture to a vendor whose admin portal was protected by "123456." The supply chain is the security perimeter now. If you don't govern your vendors' AI infrastructure with the same rigor you apply to your own, you're not delegating risk — you're ignoring it.
The Thought I Can't Shake
Here's what I keep coming back to, weeks after the McHire story first broke.
Sixty-four million people — many of them teenagers, many applying for their first job — sat in front of a screen and answered questions from an AI chatbot. They shared information about themselves because the system told them to. They had no leverage, no negotiating power, no ability to say "actually, I'd rather not take a personality test to flip burgers." The power asymmetry was total.
And the system that held their data — their names, their behavioral profiles, the AI's assessment of their personality — was protected by the same password my daughter uses for her Roblox account.
We built AI systems that can assess human personality at scale. We just forgot to protect the humans.
This isn't a technology problem. It's a values problem. It's what happens when the industry treats AI as a product to ship rather than a system to govern. When "move fast and break things" meets "we're making automated decisions about people's livelihoods."
The era of the wrapper is over. The companies that survive the next wave of regulation, the next breach, the next public reckoning — they'll be the ones that built security into the foundation, not the ones that bolted it on after the headline. At Veriprajna, that's the only kind of AI we're willing to build. Not because it's easier. Because the alternative is indefensible.
The password "123456" should be a relic. The architecture that allowed it to matter should be extinct. And the 64 million people whose data was exposed deserve better than the industry's current definition of "good enough."


