
We Built a Fall Detection System That Can't See You Naked
My mother called me on a Tuesday night, and she wasn't calling about herself. She was calling about her neighbor — an 81-year-old woman who'd fallen in her bathroom, alone, and lay on the tile floor for nearly seven hours before anyone found her. The woman survived, but the hip fracture ended her independence. She moved into assisted living within the month.
"They offered her one of those camera systems," my mother told me. "She said she'd rather risk dying on the floor than have someone watching her in the bathroom."
That sentence broke something open in my head. Not because it was irrational — it was the most rational thing I'd heard in months. Here was a woman choosing the risk of death over the certainty of surveillance. And the entire elder care technology industry had nothing better to offer her.
This is the problem I set out to solve at Veriprajna. Not "how do we detect falls" — that's been solved a dozen times over with cameras and wearables. The real problem is harder: how do you keep someone safe in the most private moments of their life without destroying the privacy that makes life worth living?
The answer, it turns out, isn't a better camera. It's not a camera at all.
The Panopticon of Care
Let me give you the numbers that frame this crisis. Falls are the leading cause of injury-related death among adults over 65. In the United States alone, the annual healthcare cost of non-fatal falls reaches approximately $50 billion. A single fall with injury costs a care facility between $30,000 and $60,000 in medical expenses, liability, and increased care requirements.
But the statistic that haunts me isn't financial. It's behavioral. The fear of falling — not the fall itself — causes elderly people to restrict their own movement, withdraw socially, and decline physically at an accelerated rate. The monitoring is supposed to prevent that spiral. Instead, the monitoring often causes a different version of it.
I spent weeks visiting assisted living facilities early in our research. In one, I watched a resident cover her room's camera with a towel every time she changed clothes. The staff would come in and remove the towel. She'd put it back. This silent war over a piece of terrycloth was the entire privacy-safety dilemma in miniature.
The elder care industry built a Panopticon and called it compassion. Safety purchased at the cost of dignity isn't safety — it's a different kind of harm.
Cameras fail in other ways too. They need light, so they either don't work in the dark or require infrared illumination that disrupts sleep. They can't see through shower curtains or blankets — precisely the situations where falls are most dangerous. And wearable pendants? The compliance gap is devastating. Cognitive decline, forgetfulness, or just the discomfort of sleeping with a device on your wrist means the pendant is on the nightstand when the fall happens at 3 AM.
We needed something fundamentally different. Not a better version of surveillance, but a technology that was physically incapable of surveillance.
Why I Bet the Company on Invisible Waves
The first time someone on my team suggested millimeter-wave radar for fall detection, I thought it was overkill. Radar is what fighter jets use. It's what self-driving cars use to track vehicles at 200 meters. Using it to monitor an elderly person in a 12-by-14-foot bedroom felt like using a sledgehammer on a thumbtack.
Then I understood the physics, and I realized it was the opposite — it was the only tool precise enough for the job.
mmWave radar, specifically at 60 GHz, transmits electromagnetic waves and analyzes their reflections. It doesn't capture images. It can't reconstruct a face, a body shape, or anything visually recognizable. What it can do is detect motion with extraordinary precision — down to sub-millimeter displacements. That means it can detect the rise and fall of a chest wall from breathing. It can track the trajectory of a body moving through space. It can distinguish a person standing from a person lying on the floor.
And it does all of this through walls, in complete darkness, through shower curtains, through blankets.
There's an elegant physical property that sealed my conviction. The 60 GHz band sits within the oxygen absorption spectrum, which means the signals attenuate rapidly over distance and don't penetrate thick concrete walls effectively. The monitoring data is physically contained within the room. You couldn't leak it to the hallway even if you tried. Privacy enforced by the laws of physics, not the terms of a software agreement.
I wrote about the full technical architecture — the FMCW chirp mechanics, the 4D sensing paradigm, the signal processing chain — in our interactive whitepaper. But the core insight is simple: at 60 GHz with 4 GHz of bandwidth, you get roughly 3.75 cm of range resolution. That's enough to distinguish a person's limbs from their torso. Enough to tell the difference between a fall and a crouch. Enough to save a life. Not enough to identify a face.
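If you want to check that figure, it's a one-line calculation: in FMCW radar, range resolution depends only on the swept bandwidth, not the carrier frequency.

```python
# Range resolution of an FMCW radar: delta_R = c / (2 * B)
C = 3.0e8           # speed of light, m/s
BANDWIDTH = 4.0e9   # swept chirp bandwidth, Hz (4 GHz)

range_resolution_m = C / (2 * BANDWIDTH)
print(f"{range_resolution_m * 100:.2f} cm")   # -> 3.75 cm
```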
Privacy by physics, not by policy. That became our design principle.
What Happens When You Try to Teach Radar to See a Fall?
Here's where I need to be honest about how hard this actually was.
The naive version of radar fall detection is straightforward: detect a sudden downward velocity followed by no movement at ground level. In a lab, this works beautifully. We had a prototype running within weeks that could detect a controlled fall onto a crash mat with near-perfect accuracy.
Then we put it in a real room.
The first deployment was in a test apartment we'd set up to simulate an assisted living unit. Within the first hour, the system flagged 14 falls. None of them were real. Three were the ceiling fan. Two were curtains moving near the air conditioning vent. One, memorably, was my colleague's golden retriever jumping off the couch.
I remember sitting in that apartment at midnight, staring at the spectrogram on my laptop, watching the ceiling fan create a perfect, repeating Doppler signature that our model had never been trained to ignore. My co-engineer looked at me and said, "Lab accuracy means nothing."
She was right. The gap between controlled experiments and real-world deployment — what I've started calling the "long tail of false alarms" — is where most AgeTech radar products die. A false alarm in a hospital isn't just annoying. It creates alarm fatigue. Nurses stop responding. And then the real fall happens, and nobody comes.
How Do You Teach AI the Difference Between a Fall and a Dog?
We attacked the false alarm problem on multiple fronts simultaneously.
For the ceiling fan, we built what we call microwave noise adaptive processing. The system learns the room. If high Doppler velocity is consistently detected at a fixed coordinate — say, the ceiling — that location gets masked from the fall detection logic. The AI learns that "fast movement at the ceiling is normal."
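A rough sketch of the idea, with made-up array shapes and thresholds: keep a slow running average of where "fast" motion shows up, and drop detections from cells that are effectively always moving.

```python
import numpy as np

# Hypothetical sketch of per-cell clutter learning. Cells (range x azimuth x elevation)
# that show high Doppler velocity in almost every frame (a ceiling fan, an HVAC vent)
# get masked out of the fall logic. Shapes, names, and thresholds are illustrative.

VEL_THRESHOLD = 0.5   # m/s: what counts as "fast" movement
PERSISTENCE = 0.95    # running-average level at which a cell is declared permanent clutter

def update_clutter(hit_rate, velocity_grid, alpha=0.01):
    """hit_rate, velocity_grid: arrays over (range, azimuth, elevation) cells."""
    hits = (np.abs(velocity_grid) > VEL_THRESHOLD).astype(float)
    hit_rate = (1 - alpha) * hit_rate + alpha * hits   # slow exponential average of hits
    clutter_mask = hit_rate > PERSISTENCE              # "this cell is always moving"
    return hit_rate, clutter_mask

def keep_plausible_targets(detections, clutter_mask):
    """detections: list of (range_bin, az_bin, el_bin, velocity) tuples."""
    return [d for d in detections if not clutter_mask[d[0], d[1], d[2]]]
```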
The pet problem was trickier and more interesting. A large dog jumping off furniture generates a Doppler signature uncomfortably similar to a falling human. Our solution combines radar cross-section analysis (humans reflect more electromagnetic energy than dogs) with geometric classification. A human point cloud is typically a vertical column. A dog is a horizontal blob. We added an explicit "Animal" class to our classifier, which felt absurd until it eliminated about 30% of our false positives.
A fall detection system that can't tell the difference between your grandmother and your Labrador isn't a fall detection system. It's an expensive noise machine.
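For illustration, here is a toy version of the geometric half of that check. Our real classifier is learned rather than hand-coded, and every threshold below is invented.

```python
import numpy as np

# A standing or falling human tends to form a tall, narrow point cloud with relatively
# high total reflected power; a dog forms a low, wide one with less energy returned.

def looks_like_animal(points_xyz, snr_db):
    """points_xyz: (N, 3) array of x, y, z in meters; snr_db: (N,) per-point SNR."""
    extent = points_xyz.max(axis=0) - points_xyz.min(axis=0)
    height = extent[2]
    footprint = max(extent[0], extent[1])
    total_power = np.sum(10 ** (snr_db / 10))   # crude radar-cross-section proxy
    is_horizontal_blob = height < 0.6 and footprint > height
    is_weak_reflector = total_power < 50.0       # humans reflect more energy
    return is_horizontal_blob and is_weak_reflector
```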
For curtains and drafts, we implemented zone masking during installation and trained the deep learning classifier to recognize the low-frequency sinusoidal oscillation of fabric — which looks nothing like human motion once you know what to look for.
The AI Architecture Nobody Talks About

Most articles about AI in healthcare focus on the model. The transformer, the CNN, the latest architecture with a catchy name. But the model is maybe 20% of the problem. The other 80% is the signal processing pipeline that feeds the model — and the engineering required to run it all on a chip with 512 kilobytes of RAM.
Let me walk through what actually happens when our sensor detects a fall.
Raw electromagnetic reflections come in as analog signals. We digitize them and construct what's called a Radar Data Cube through a series of Fast Fourier Transforms — one along the samples within each chirp to resolve range, one across successive chirps to resolve velocity, and one across the antenna array to resolve angle. This gives us a 4D dataset: range, velocity, horizontal angle, and vertical angle. Every point in this space has an associated power intensity.
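In skeleton form, with NumPy standing in for the DSP hardware and the antenna array collapsed to a single angle axis, that chain looks roughly like this:

```python
import numpy as np

# Sketch of the classic FMCW processing chain, assuming raw ADC samples arrive as a
# (num_chirps, num_rx_antennas, samples_per_chirp) array. Windowing, calibration, and
# the 2D virtual array that separates azimuth from elevation are omitted for brevity.

def build_radar_cube(adc_frame):
    # 1. Range FFT: along the samples within each chirp ("fast time")
    range_fft = np.fft.fft(adc_frame, axis=2)
    # 2. Doppler FFT: across successive chirps ("slow time") -> radial velocity
    doppler_fft = np.fft.fftshift(np.fft.fft(range_fft, axis=0), axes=0)
    # 3. Angle FFT: across the antenna array -> spatial angle (one axis shown here)
    angle_fft = np.fft.fftshift(np.fft.fft(doppler_fft, axis=1, n=64), axes=1)
    # Power intensity at every (velocity, angle, range) cell of the cube
    return np.abs(angle_fft) ** 2
```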
From this cube, we extract two parallel data streams. The first is a micro-Doppler spectrogram — essentially a velocity fingerprint over time. A person walking creates a distinctive pattern: steady torso movement with oscillating limb signatures. A fall creates a sudden broadband energy burst followed by silence. The second stream is a 3D point cloud — a set of spatial coordinates with velocity and signal strength for each detected target.
Here's where our approach diverges from most competitors. We don't pick one stream. We fuse them.
We built what we call a Dual-Stream Network. Stream A (the spectrogram) analyzes how fast things are moving. Stream B (the point cloud) analyzes where things are in space. A fusion layer combines both.
This solved our hardest classification problem: the "hard sit." When someone drops heavily onto a sofa, the velocity spike looks almost identical to a fall on the spectrogram. But the point cloud tells a different story — the final position of the body's centroid is at sofa height (roughly half a meter), not floor level. CNN-based approaches on spectrograms alone consistently outperform classical machine learning by 7-10% in accuracy, but adding the spatial stream pushed us past the threshold where the system became trustworthy enough for clinical deployment.
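A minimal sketch of the dual-stream idea in Keras, assuming illustrative input sizes and a hand-picked set of point-cloud summary features (centroid height among them); our production network is more involved:

```python
import tensorflow as tf
from tensorflow.keras import layers

# Stream A: micro-Doppler spectrogram (time x velocity), treated like a grayscale image.
# Stream B: per-frame point-cloud summary features (e.g. centroid height, extent).
spectrogram = tf.keras.Input(shape=(128, 64, 1), name="micro_doppler")
cloud_feats = tf.keras.Input(shape=(32, 8), name="point_cloud_summary")

# Stream A: small CNN over the spectrogram
a = layers.Conv2D(16, 3, activation="relu")(spectrogram)
a = layers.MaxPooling2D()(a)
a = layers.Conv2D(32, 3, activation="relu")(a)
a = layers.GlobalAveragePooling2D()(a)

# Stream B: temporal model over the point-cloud statistics
b = layers.Conv1D(32, 3, activation="relu")(cloud_feats)
b = layers.GlobalAveragePooling1D()(b)

# Fusion layer: concatenate both views, then classify
fused = layers.Concatenate()([a, b])
fused = layers.Dense(64, activation="relu")(fused)
output = layers.Dense(4, activation="softmax", name="activity")(fused)  # e.g. fall / hard sit / walk / animal

model = tf.keras.Model([spectrogram, cloud_feats], output)
```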
For the full technical breakdown of our architecture comparisons — CNNs, PointNet, LSTMs, and the newer RadMamba state-space models — see our research paper.
Why We Refused to Use the Cloud
Early in development, an advisor — someone I respect enormously — told me we were making a mistake by insisting on edge processing. "Just send the radar data to AWS," he said. "You can run whatever model you want. Inference will be faster, more accurate, and you won't have to deal with the nightmare of optimizing for microcontrollers."
He wasn't wrong about the engineering difficulty. Running a deep neural network on a Texas Instruments IWRL6432 — a low-power system-on-chip built around an Arm Cortex-M4F and a dedicated radar hardware accelerator — is an exercise in extreme constraint. Standard neural networks use 32-bit floating-point math. We had to quantize everything down to 8-bit integers, which reduces model size by 4x. We pruned redundant connections. We used Arm's hand-optimized CMSIS-NN kernels to squeeze every clock cycle out of the hardware.
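For readers who want the flavor of that workflow, here is the standard full-integer post-training quantization path in Python, assuming a trained Keras `model` and a set of `calibration_frames` captured in real rooms. It's a sketch of the general technique, not our build scripts.

```python
import tensorflow as tf

# Full-integer post-training quantization: the usual route before handing a model
# to TFLite Micro with CMSIS-NN kernels on a Cortex-M class core.

def representative_dataset():
    # A few hundred real frames so the converter can calibrate activation ranges.
    # (A multi-input model yields one array per input here.)
    for frame in calibration_frames[:200]:
        yield [frame.astype("float32")]

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_dataset
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8
converter.inference_output_type = tf.int8

tflite_model = converter.convert()   # weights and activations are now 8-bit integers
open("fall_detector_int8.tflite", "wb").write(tflite_model)
```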
It was months of work that a cloud deployment would have eliminated.
But he was wrong about the product.
The moment radar data leaves the room — even "anonymous" radar data — you've created a privacy liability. Behavioral patterns like bathroom frequency constitute protected health information under HIPAA. A data breach doesn't expose a photograph, but it exposes intimate details of someone's daily life. And from a practical standpoint, cloud processing introduces latency. When someone falls, every second of delay in alerting a caregiver matters. Network outages matter. Bandwidth costs for streaming high-frequency radar data from hundreds of rooms matter.
We process everything on the sensor itself. The neural network inference happens on the same chip that runs the radar. No images are ever created. No data leaves the device unless it's a structured alert: "Room 302: Fall Detected (High Confidence)." That alert goes to the nurse call system. Nothing else goes anywhere.
If your privacy architecture depends on a policy document instead of the laws of physics and the constraints of the hardware, you don't have a privacy architecture. You have a promise.
We also implemented a hierarchical wake-up system to manage power. A low-power presence detection chirp runs continuously. Only when coarse motion is detected does the full deep learning model activate. This cascade approach can extend battery life from days to months — critical for facilities where running new power lines to every room isn't feasible.
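As a sketch (the driver calls below are placeholders, not a real sensor API), the cascade is a small loop:

```python
import time

MOTION_ENERGY_THRESHOLD = 1.0e3   # tuned per room; value here is arbitrary
ACTIVE_WINDOW_S = 10              # how long to stay in high-power mode after motion

def monitoring_loop(sensor, classifier, alert_fn):
    active_until = 0.0
    while True:
        # Cheap presence-detection chirp runs continuously
        frame = sensor.read_low_power_chirp()
        if frame.motion_energy > MOTION_ENERGY_THRESHOLD:
            active_until = time.time() + ACTIVE_WINDOW_S
        if time.time() < active_until:
            # Coarse motion seen: wake the full pipeline and deep learning model
            cube = sensor.read_full_frame()
            label, confidence = classifier(cube)
            if label == "fall" and confidence > 0.9:
                alert_fn(room="302", event="Fall Detected", confidence="High")
        else:
            time.sleep(0.5)   # stay at the low-power cadence
```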
How Does a Radar Sensor Talk to a 1990s Nurse Call System?
This is the question that almost nobody in the AI world thinks about, and it's the question that determines whether your technology actually gets deployed.
The central nervous system of every care facility is the Nurse Call System, governed by UL 1069 — the standard for hospital signaling equipment. Most of these systems were installed decades ago. They speak in dry contacts and relay closures, not REST APIs.
I learned this the hard way. We had a beautiful MQTT-based integration working in our lab. Clean JSON payloads, real-time dashboards, the works. Then we walked into a 200-bed facility in the Midwest and saw their Rauland nurse call panel from the early 2000s. It had a row of auxiliary inputs that expected one thing: a circuit to close.
So we added an opto-isolated solid-state relay to our sensor. When a fall is detected, the relay closes. The nurse call light turns on. The pager goes off. It's brutally simple, and it's compatible with roughly 90% of existing infrastructure. No IT department involvement. No network configuration. Just two wires.
For newer facilities with IP-based nurse call platforms, we push structured data via MQTT or REST. The nurse doesn't just see "Room 302 Alarm" — she sees "Room 302: Fall Detected" or "Room 302: Resident has not moved for 4 hours." That second alert — the inactivity alert — turned out to be something facilities wanted even more than fall detection. It replaces the intrusive practice of nurses opening doors every few hours just to check if someone is still breathing.
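On the IP-based path the alert itself is tiny. Something like this, where the broker address, topic layout, and payload fields are all illustrative choices:

```python
import json
import paho.mqtt.publish as publish

def send_alert(room, event, confidence, broker="nursecall.example.local"):
    payload = json.dumps({
        "room": room,                # e.g. "302"
        "event": event,              # "Fall Detected" or "No movement for 4 hours"
        "confidence": confidence,    # "High" / "Medium"
    })
    publish.single(
        topic=f"facility/alerts/room/{room}",
        payload=payload,
        qos=1,                       # at-least-once delivery to the alert router
        hostname=broker,
    )

send_alert("302", "Fall Detected", "High")
```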
What About the ROI Argument?
People always push back on the cost of deploying new sensor infrastructure. "Cameras are cheaper," they say. Or: "We already have pendant systems."
Here's the math I walk through with facility administrators. A single hospitalization-level fall costs $30,000 to $60,000. Evidence-based fall prevention programs have demonstrated ROI exceeding 500% — five dollars saved for every dollar invested. Our system pays for itself if it prevents one serious fall every five years per room.
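The back-of-envelope version, using only the numbers above:

```python
# Implied break-even: if one prevented serious fall per room every five years covers
# the system, the per-room budget is the fall cost amortized over those five years.
FALL_COST_LOW, FALL_COST_HIGH = 30_000, 60_000   # USD per hospitalization-level fall
YEARS_BETWEEN_PREVENTED_FALLS = 5

low = FALL_COST_LOW / YEARS_BETWEEN_PREVENTED_FALLS
high = FALL_COST_HIGH / YEARS_BETWEEN_PREVENTED_FALLS
print(f"Break-even spend per room per year: ${low:,.0f}-${high:,.0f}")   # $6,000-$12,000
```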
But the ROI that matters most isn't on the balance sheet. It's in what the system enables beyond emergency detection. By tracking gait speed and activity levels over weeks, the radar can detect the subtle decline that precedes a fall. "Mrs. Jones is walking 20% slower this week" is a leading indicator that allows intervention before the accident. That's not fall detection. That's fall prevention. And the economic difference between the two is enormous.
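A toy illustration of that longitudinal check, with invented data: compare this week's median gait speed against a rolling baseline and flag a sustained slowdown.

```python
import numpy as np

def gait_decline_alert(daily_speeds_m_s, baseline_weeks=4, threshold=0.20):
    """daily_speeds_m_s: median walking speed per day, oldest first."""
    speeds = np.array(daily_speeds_m_s)
    baseline = np.median(speeds[-(baseline_weeks + 1) * 7:-7])   # the prior weeks
    this_week = np.median(speeds[-7:])
    decline = (baseline - this_week) / baseline
    return decline > threshold, decline

# Four weeks at 0.82 m/s, then a week at 0.63 m/s: about 23% slower than baseline
alert, decline = gait_decline_alert([0.82] * 28 + [0.63] * 7)
print(alert, f"{decline:.0%}")   # True, 23%
```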
The Shift That Changes Everything

I've been asked — more than once, usually by investors — whether cameras will just "get better at privacy." Blur the face. Mask the body. Process locally and delete.
Maybe. But you're still starting from a technology that captures identity by default and then trying to subtract it. You're asking the resident to trust that the subtraction works, that the software won't glitch, that the data won't be stored, that nobody will ever see the raw feed.
mmWave radar starts from the opposite position. It is physically incapable of capturing a face. There is no raw feed to leak. There is no "privacy mode" to accidentally disable. The resident doesn't need to trust our software. They can trust the electromagnetic spectrum.
That woman — my mother's neighbor, the one who chose the risk of the bathroom floor over the certainty of a camera — she represents millions of people who will face the same choice in the coming decade. The global population over 65 is growing faster than any other age group. The demand for monitoring will only intensify.
The question isn't whether we'll monitor the elderly. It's whether we'll do it in a way that lets them remain human while we keep them safe.
We built a system that detects a fall in the dark, through a shower curtain, without ever knowing what the person looks like. It runs on a chip smaller than a postage stamp. It talks to nurse call systems from the 1990s and cloud dashboards from 2025. It knows when someone is breathing and when they've stopped moving, and it does all of this without creating a single pixel of imagery.
I don't think the future of elder care is surveillance with better PR. I think it's sensing — invisible, ambient, dignified. The physics already supports it. The AI already works. The only question left is whether the industry has the imagination to stop reaching for the camera.
We did. And I haven't looked back.


