
I Helped Build AI That Keeps Customers From Leaving. Here's Why Most of It Is Morally Bankrupt.

Ashutosh Singhal · April 9, 2026 · 16 min read

Last year, a friend called me at 11 PM, furious. She'd been trying to cancel a streaming subscription for forty-five minutes. Forty-five minutes. She'd clicked through six screens, been offered three different discount tiers, watched an animation about all the "exclusive content" she'd lose, and finally — finally — found a grayed-out link buried under a paragraph of text that said something like "We're sorry to see you go." She wasn't sorry. She was livid.

"You build AI for a living," she said. "Is this what your industry does? Trap people?"

I didn't have a good answer. Because the honest truth is: yes. A growing chunk of the AI retention industry exists to make leaving harder, not staying better. And I'd been watching it get worse — not just through manipulative button colors and guilt-trip copy, but through conversational AI agents specifically trained to wear you down. The subscription economy's real product isn't content or software or convenience. For too many companies, the product is your inertia.

That phone call crystallized something I'd been circling for months at Veriprajna. We'd been deep in research on ethical AI retention — what it means to use machine learning to keep customers without deceiving them — and the more we dug, the uglier the landscape looked. I wrote about the full scope of this problem in our interactive research piece, but this essay is the version I wish someone had written before we started: the personal, unvarnished story of why most AI-powered retention is broken, and what it actually takes to fix it.

Amazon Named Its Cancellation Flow After a War Epic. That Tells You Everything.

When the FTC sued Amazon in June 2023, the complaint revealed something that stopped me cold. Amazon's internal teams had a codename for the Prime cancellation process: "Iliad Flow." As in Homer's Iliad — the epic poem about the decade-long Trojan War.

They knew. They knew the cancellation path was a war of attrition. Four pages, six clicks, fifteen options. Animations pulling your eye toward "Keep my benefits." The actual cancel link rendered in muted, forgettable gray. The enrollment process? One click. Maybe two. The exit? A siege.

I remember reading the complaint out loud to my team in our office. There was this moment of silence, and then one of our engineers — someone who'd spent years in UX before joining us — said, "I've built flows like that. Not that bad, but... that direction." He wasn't proud of it. He'd been following instructions from growth teams whose only metric was monthly retention rate.

That's the thing about dark patterns in subscription design. They're rarely the work of cartoon villains twirling mustaches. They're the logical endpoint of optimizing for a single number — churn rate — without any countervailing force for user autonomy. The FTC's complaint laid out a taxonomy that reads like a behavioral psychology textbook: interface interference (making the cancel button visually subordinate), obstruction (adding unnecessary steps), confirmshaming (framing cancellation as a personal failure), and sneaking (burying renewal terms in fine print).

And Amazon isn't an outlier. Epic Games paid $245 million — the largest FTC administrative settlement in history — because Fortnite's interface let children spend hundreds of dollars on a parent's credit card with a single accidental button press. When parents disputed the charges, Epic locked their kids' accounts entirely, seizing all previously purchased content. The message was clear: challenge us financially, and we'll punish you.

When the penalty for exercising your legal right to a refund is losing everything you've already paid for, "retention" has become indistinguishable from coercion.

Why the "Click-to-Cancel" Rule Matters Even After It Was Killed

In October 2024, the FTC finalized the "Click-to-Cancel" rule — a straightforward mandate that cancelling a subscription should be at least as easy as signing up. Three pillars: simple cancellation, express informed consent, and clear disclosure of terms. It felt like common sense codified into law.

Then, in July 2025, the Eighth Circuit Court of Appeals vacated the entire rule on procedural grounds. The FTC had failed to issue a required preliminary regulatory analysis after the rule's economic impact was projected to exceed $100 million. Industry groups celebrated. My LinkedIn feed filled with takes about "regulatory overreach" and "the market correcting itself."

I thought that reaction was dangerously shortsighted.

Here's what the celebration missed: the court didn't say dark patterns are fine. It said the FTC skipped a paperwork step. The underlying enforcement climate hasn't changed. The FTC still wields Section 5 authority to go after unfair and deceptive practices case by case. California, New York, and Maryland all maintain automatic renewal laws that are often stricter than the vacated federal rule. And the Amazon and Epic cases established precedent that "labyrinthine" cancellation flows violate existing law — no new rule required.

I had a conversation with our legal advisor the week after the vacatur. She put it bluntly: "Any company that reads this ruling as permission to go back to dark patterns is writing the FTC's next complaint for them."

She was right. The Click-to-Cancel rule isn't dead. It's the floor — the minimum standard that any serious enterprise should already exceed. The companies treating it as a ceiling are the ones that end up in federal court.

The New Threat: AI Agents Trained to Manipulate You in Conversation

Here's where it gets personal for me, because this is the frontier my team works on every day.

The old dark patterns were visual — deceptive buttons, hidden links, confusing layouts. The new ones are conversational. Companies are deploying AI chatbots as "retention agents," and many of them are what I'd call LLM wrappers — thin applications built on top of foundation models like GPT-4 or Claude, with system prompts optimized for a single goal: don't let the customer leave.

Without deep AI architecture underneath, these agents default to psychological manipulation delivered through natural language. Research from the Center for Democracy & Technology describes these tactics as "more embedded, creative, and subtle" than traditional interface tricks. And I've seen it firsthand.

We were evaluating a competitor's retention chatbot — I won't name the company — and I tried to cancel a test account. The bot opened with: "I see you've been with us for 8 months. That's longer than most relationships these days 😄 What's making you think about leaving?"

Cute. Disarming. And deeply calculated.

When I persisted, it shifted to loss aversion: "You'll lose access to 47 saved items and 12 custom settings. Are you sure you want to start from scratch somewhere else?" When I still pushed, it offered a discount. When I declined the discount, it asked — and this is the part that made my skin crawl — "Is everything okay? Sometimes people cancel when they're going through a tough time."

That last line crossed a boundary. The agent was using emotional leverage — exploiting an implied personal connection to create guilt around a financial decision. It's the conversational equivalent of a store clerk following you to the door and asking if you're sure you want to leave because you look sad.

An AI retention agent that uses emotional manipulation to prevent cancellation isn't providing customer service. It's conducting psychological operations against the people who pay the bills.

Some of these systems go further. They invite users to share personal details about family and friends under the guise of "building the AI's memory" — then use that data to make the service feel indispensable, creating an emotional cost to leaving. Others send "voice" messages or exclamatory notifications after a user has already expressed intent to disengage, crossing from engagement into what regulators would call nagging.

This is the problem I wake up thinking about. Not because dark patterns are new, but because conversational AI makes them scalable and adaptive in ways that static UI tricks never were. A deceptive button is the same for every user. A deceptive chatbot can personalize its manipulation to your specific psychology, your usage history, your vulnerabilities.

What If the Question Isn't "Who Will Churn?" but "Why — and Can We Ethically Change It?"

[Figure: Traditional churn prediction (one question, one blunt action) versus Causal AI uplift modeling (a different question, segmented targeted actions).]
[Figure: A 2x2 matrix of the four causal segments (Persuadables, Sure Things, Lost Causes, Sleeping Dogs), mapped by intervention outcome.]

The fundamental mistake in most retention AI is the question it's trying to answer.

Traditional churn prediction asks: "Which customers are likely to leave?" Then it targets those customers with save offers, discounts, or — in the worst cases — friction. But predicting churn is not the same as preventing it. Knowing someone will probably leave doesn't tell you why, and it certainly doesn't tell you whether your intervention will help or hurt.

This is where my team's work diverges from the industry standard, and honestly, it's the insight that changed how I think about the entire retention problem.

We use Causal AI — specifically, a framework called uplift modeling — that asks a fundamentally different question: "For this specific customer, will our intervention actually cause them to stay, or will it backfire?"

The math is elegant. For any individual customer with characteristics X, we estimate what's called the Conditional Average Treatment Effect — the difference between the probability they'll stay if we intervene versus if we don't. That single number tells you something no churn prediction model can: whether your action will make things better or worse.
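In symbols, with Y = 1 meaning the customer stays, T indicating whether we intervene, and X the customer's characteristics:

```latex
\mathrm{CATE}(x) \;=\; \mathbb{E}[\,Y \mid T=1,\, X=x\,] \;-\; \mathbb{E}[\,Y \mid T=0,\, X=x\,]
```

A positive value says the intervention helps. Near zero, it's irrelevant. Negative, and the intervention itself drives the customer out the door. Those regimes, combined with the baseline stay probability, map directly onto the four segments below.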

And here's the part that surprised me when we first ran the numbers. Our analysis consistently segments customers into four groups, and two of them completely upend conventional retention wisdom:

Persuadables — people who will stay only if you intervene with something genuinely valuable. These are your actual retention opportunity. Maybe 15-20% of your at-risk base.

Sure Things — people who will renew no matter what. Giving them a discount is lighting money on fire.

Lost Causes — people who will leave no matter what you do. Every dollar spent trying to save them is wasted, and every ounce of friction you add to their exit destroys brand trust for no gain.

And then there are the Sleeping Dogs. This group broke my assumptions wide open. These are customers who are currently paying and happy — but if you contact them, if you remind them the subscription exists, if you send that "we miss you!" email or trigger that chatbot interaction, they will cancel. Your retention effort literally causes the churn.

I remember the team meeting where we first identified this segment in a client's data. Our data scientist put the chart on screen and said, "For these users, the best retention strategy is to shut up." We laughed, but it was a serious insight. Every traditional retention system — every save flow, every AI chatbot, every discount offer — treats all at-risk customers the same. Causal AI reveals that a one-size-fits-all approach is not just inefficient but actively destructive for a meaningful portion of your customer base.

The most counterintuitive lesson in ethical retention: for some customers, the best thing you can do is make it effortless to leave — and the worst thing you can do is try to save them.

For Lost Causes and Sleeping Dogs, we design frictionless, one-click exits. No chatbot. No guilt trip. No "are you sure?" cascade. Just a clean, respectful goodbye that preserves the possibility they'll come back later. For Persuadables — and only Persuadables — we surface personalized value: a feature they haven't discovered, a plan that better fits their usage, a genuine reason to stay.
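To make the mechanics concrete, here's a minimal sketch of that segmentation using a two-model ("T-learner") uplift approach. The estimator choice, the eps threshold, and the action mapping are illustrative assumptions, not our production system:

```python
# Minimal two-model ("T-learner") uplift sketch. The estimator, the eps
# threshold, and the action mapping are illustrative assumptions: the
# shape of the idea, not a production system.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

def fit_uplift_models(X, stayed, treated):
    """Fit separate stay-probability models for treated and control customers."""
    m_treat = GradientBoostingClassifier().fit(X[treated == 1], stayed[treated == 1])
    m_ctrl = GradientBoostingClassifier().fit(X[treated == 0], stayed[treated == 0])
    return m_treat, m_ctrl

def segment(X, m_treat, m_ctrl, eps=0.02):
    """Assign each customer to one of the four causal segments."""
    p_treat = m_treat.predict_proba(X)[:, 1]  # P(stay | we intervene)
    p_ctrl = m_ctrl.predict_proba(X)[:, 1]    # P(stay | we leave them alone)
    uplift = p_treat - p_ctrl                 # the CATE estimate
    return np.where(uplift > eps, "persuadable",
           np.where(uplift < -eps, "sleeping_dog",
           np.where(p_ctrl >= 0.5, "sure_thing", "lost_cause")))

# The policy described above, as a lookup:
ACTION = {
    "persuadable":  "surface personalized, genuine value",
    "sure_thing":   "do nothing; a discount here is wasted money",
    "lost_cause":   "frictionless one-click exit, clean goodbye",
    "sleeping_dog": "no contact at all; one-click exit if they ask",
}
```

The design choice worth noticing: the model's output drives who you leave alone, not just who you target.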

I wrote about the technical implementation — the Structural Causal Models, the Individual Treatment Effect estimation, the full mathematical framework — in our technical deep-dive. But the core principle doesn't require a math degree: stop treating retention as a gate to close, and start treating it as a value proposition to prove.

How Do You Stop an AI Agent from Becoming a Manipulator?

Building a retention agent that's both effective and ethical isn't just a training data problem. It's an alignment problem — the same category of challenge that keeps AI safety researchers up at night, applied to the very specific domain of "please don't psychologically manipulate our customers."

We use a multi-objective Reinforcement Learning from Human Feedback (RLHF) pipeline, and I'll be honest: getting it right was harder than I expected.

The naive approach is to train a retention agent with a single reward signal: did the customer cancel or not? Maximize non-cancellation, minimize churn. Simple. And catastrophic. An agent optimized purely for non-churn will inevitably discover that guilt, confusion, and emotional manipulation are effective tactics — because in the short term, they are. That's exactly how you end up with the "Is everything okay?" chatbot I described earlier.

Our approach layers multiple objectives. UX experts and compliance officers evaluate and rank agent-customer interactions based on clarity, helpfulness, and the absence of shaming or nagging. These rankings train a reward model that acts as a proxy for human ethical judgment. The agent learns that a transparent, helpful interaction scores higher than a manipulative one — even if the manipulative one has a higher raw retention rate.
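To see the contrast with the naive reward in code, here's a deliberately simplified sketch. The ethics_model interface and the weights are illustrative assumptions; in the real pipeline, that model is trained from the human rankings described above:

```python
# Simplified contrast between the naive reward and the multi-objective one.
# ethics_model stands in for a reward model trained on human rankings
# (clarity, helpfulness, no shaming or nagging); weights are illustrative.

def naive_reward(retained: bool) -> float:
    # The catastrophic baseline: all that matters is non-churn.
    return 1.0 if retained else 0.0

def multi_objective_reward(conversation, retained: bool, ethics_model,
                           w_retain: float = 1.0, w_ethics: float = 2.0) -> float:
    # ethics_model.score() is a hypothetical interface returning a scalar
    # in [0, 1]: higher means more transparent and helpful.
    ethics = ethics_model.score(conversation)
    # Weighted so that a manipulative save can still score below an honest
    # interaction that ends in a cancellation.
    return w_retain * float(retained) + w_ethics * ethics
```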

We had a tense debate internally about where to draw the line. One of our product people argued that offering a discount three times in a single conversation was fine — "it's just being persistent." Our compliance lead pushed back hard: "Persistence and nagging are the same behavior viewed from different seats. The customer's seat is the one that matters." She won that argument, and we built hard constraints: if the agent can't demonstrate value within a defined number of exchanges, it surfaces the cancel button immediately. No exceptions.

The guardrails aren't optional. They're architectural. The agent physically cannot exceed certain thresholds of repetition or emotional intensity. It's the difference between a system that tries to be ethical and a system that cannot be unethical within its operational boundaries.
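In code, the shape of that constraint looks something like the following minimal sketch. The thresholds and data structures are illustrative assumptions; the structure is the point, because the check sits outside the learned policy where the agent can't optimize around it:

```python
# Hard, architectural guardrails that run outside the learned policy.
# All thresholds and fields are illustrative assumptions;
# emotional_intensity is assumed to come from a separate classifier.
from dataclasses import dataclass

MAX_EXCHANGES = 4      # value must be demonstrated within this many turns
MAX_OFFER_REPEATS = 1  # a second identical offer is nagging, not persistence
MAX_INTENSITY = 0.3    # ceiling on emotional charge, on a 0..1 scale

@dataclass
class Reply:
    text: str
    contains_offer: bool
    emotional_intensity: float

@dataclass
class ConversationState:
    exchanges: int
    offer_count: int
    value_demonstrated: bool

CANCEL = Reply("Here is your one-click cancellation link.", False, 0.0)

def enforce_guardrails(state: ConversationState, proposed: Reply) -> Reply:
    """Sits between policy and user; the agent cannot bypass it."""
    if state.exchanges >= MAX_EXCHANGES and not state.value_demonstrated:
        return CANCEL  # couldn't show value in time: surface the exit, no exceptions
    if proposed.contains_offer and state.offer_count >= MAX_OFFER_REPEATS:
        return CANCEL  # repeated offers cross into nagging
    if proposed.emotional_intensity > MAX_INTENSITY:
        return CANCEL  # an over-charged reply never reaches the customer
    return proposed
```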

What Happens When No One Is Watching the A/B Test?

There's a gap in most organizations that terrifies me. I call it the governance gap — the space between the moment a marketing team launches an A/B test on a cancellation flow and the moment a compliance team reviews it.

In that gap, dark patterns breed. Not necessarily through malice, but through incentive misalignment. The growth team's OKR is retention rate. The compliance team's review cycle is quarterly. A "just try it and see" experiment with a more aggressive save flow can run for weeks before anyone with regulatory expertise sees it. By then, it's generated data that makes it look successful, and unwinding it becomes a political fight.

We close this gap with automated auditing — a multimodal system that scans UI and conversational flows for dark patterns in real time, integrated directly into the deployment pipeline. Before any interface change reaches a customer, it passes through three layers:

A structural audit inspects the underlying page architecture for hidden buttons, pre-checked boxes, and misleading labels. A computer vision layer analyzes the visual presentation — is the cancel link the same size and prominence as the save button, or has someone made it smaller and grayer? And a natural language processing layer classifies the text for confirmshaming, fake urgency, trick questions, and nagging patterns.

Every version of every retention flow gets timestamped, risk-classified, and stored. When a regulator asks "show me your cancellation process from March," you don't scramble — you pull it from the registry with full audit trail.
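Here's a toy version of the gate and the registry together. Every field name, keyword, and risk label is a simplified stand-in for the real layers (a DOM inspector, a computer-vision model, and an NLP classifier):

```python
# Toy version of the three-layer audit gate plus the versioned registry.
# All field names, keywords, and risk labels are simplified stand-ins.
import hashlib
import json
from datetime import datetime, timezone

def structural_audit(flow: dict) -> list:
    findings = []
    if any(el.get("hidden") for el in flow["elements"]):
        findings.append("hidden element in cancellation path")
    if any(el.get("prechecked") for el in flow["elements"]):
        findings.append("pre-checked consent box")
    return findings

def visual_audit(flow: dict) -> list:
    # Stand-in for the vision layer: is the cancel link visually subordinate?
    cancel, keep = flow["cancel_style"], flow["keep_style"]
    subordinate = cancel["font_px"] < keep["font_px"]
    return ["cancel link smaller than save button"] if subordinate else []

def language_audit(flow: dict) -> list:
    # Stand-in for the NLP layer: crude keyword check for confirmshaming.
    red_flags = ("sorry to see you go", "give up your benefits")
    text = flow["copy"].lower()
    return [f"possible confirmshaming: {flag!r}" for flag in red_flags if flag in text]

def audit_and_register(flow: dict, registry: list) -> bool:
    """Gate a deployment and record a timestamped, risk-classified audit entry."""
    findings = structural_audit(flow) + visual_audit(flow) + language_audit(flow)
    registry.append({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "flow_hash": hashlib.sha256(
            json.dumps(flow, sort_keys=True).encode()).hexdigest(),
        "risk": "high" if findings else "low",
        "findings": findings,
    })
    return not findings  # ship only on a clean audit
```

The registry append is the part that answers the regulator's "show me March" question: every flow version leaves a hash, a timestamp, and a risk classification behind.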

This isn't paranoia. It's the cost of operating in a world where the FTC can subpoena your A/B test history and where "we didn't know that version was live" is not a defense.

Why Do People Push Back on Ethical Retention?

People always ask me some version of: "Doesn't making it easy to cancel just... increase cancellation?" It's the most common objection, and it reveals a fundamental misunderstanding of how trust economics work.

Yes, a frictionless exit will increase short-term cancellation rates among people who were going to leave anyway but were previously too frustrated to complete the process. You were counting those people as "retained." They weren't retained. They were trapped. And trapped customers don't renew enthusiastically, don't recommend your product, and don't come back after they leave.

The metric that matters isn't monthly churn rate. It's lifetime value — and lifetime value is built on trust. A customer who leaves easily and has a good exit experience is dramatically more likely to return than one who leaves angry after fighting through six screens. They're also less likely to file an FTC complaint, leave a one-star review, or tell their friends about your "Iliad Flow" at dinner.

Another objection I hear: "This causal AI stuff sounds expensive. Can't we just use a standard churn model and add some compliance rules?" You can. And you'll waste money giving discounts to Sure Things who would have stayed anyway, annoy Sleeping Dogs into cancelling, and miss the Persuadables who actually needed to hear from you. The "cheaper" approach is more expensive in every way that matters.

The Subscription Economy Deserves Better Than This

Here's what I believe, stated plainly: the era of growth-through-friction is ending, and the companies that don't see it coming will be the case studies in the next wave of FTC complaints.

The Click-to-Cancel rule was a signal. The Amazon and Epic Games cases were signals. The EU AI Act's requirements for algorithmic accountability are signals. The direction is unmistakable, even when specific regulations get delayed or vacated on procedural grounds.

But compliance isn't actually the interesting part of this story. Compliance is the floor. The interesting part is what happens when you treat easy cancellation not as a regulatory burden but as a competitive credential. When "you can leave anytime, in one click, no questions asked" becomes a selling point — a reason customers choose you in the first place.

The future of the subscription economy doesn't belong to the company that's hardest to leave. It belongs to the one that's so confident in its value, it makes leaving effortless — and trusts that you'll stay anyway.

My friend who called me at 11 PM? She eventually cancelled that subscription. She also told everyone she knew about the experience. She'll never go back. The company "retained" her for an extra forty-five minutes and lost her for life.

That's the math that dark patterns can't solve. And that's the math that makes ethical retention not just the right thing to do, but the only strategy that compounds.
