
Your Wi-Fi Router Can Detect a Fall. Here's Why That Matters More Than Any Smartwatch.
My mother calls me every Sunday. A few months back, she mentioned — almost as an aside — that my grandmother had stopped wearing her medical alert pendant. "It makes her feel old," my mother said, with the particular exhaustion of someone who has had this argument many times.
My grandmother is 83. She lives alone. The pendant was supposed to be her safety net — press a button, get help. But it sits in a drawer now, next to a charging cable she can't quite manage and a quick-start guide nobody read. The most advanced personal emergency device on the market, and it's functionally a paperweight.
That conversation crystallized something I'd been circling for a while at Veriprajna. We'd been deep in research on Channel State Information — the complex data layer hidden inside every Wi-Fi signal — and I kept returning to the same uncomfortable question: what if the entire wearable health monitoring industry is solving the wrong problem?
Not a sensor problem. Not a battery problem. A human problem.
The device my grandmother refused to wear has excellent fall detection algorithms. It has a 48-hour battery. It's IP68 waterproof. And none of that matters, because it requires an 83-year-old woman with arthritis to actively cooperate with a piece of technology every single day. The research backs up what my grandmother demonstrated through sheer stubbornness: approximately 30% of users abandon their health trackers within six months. Among users of personal emergency response pendants specifically, only 14% achieve true 24-hour adherence.
The most effective health monitor isn't the one with the best sensors. It's the one that requires no interaction whatsoever. And it might already be sitting in your living room, blinking quietly next to the modem.
The Shower Paradox
Here's a statistic that should bother anyone in healthcare technology: the bathroom is the most dangerous room in the house for elderly people, and it's the room where wearables are most likely to be removed.
I started calling this "The Shower Paradox" when we were mapping failure modes for active monitoring systems. Despite modern smartwatches carrying IP67 or IP68 water resistance ratings, older adults routinely take them off before bathing. A lifetime of experience with electronics that couldn't survive a splash. Fear of damaging something expensive. The discomfort of a wet strap against fragile skin. The reasons are mundane and completely rational.
So the user is unmonitored during the exact window when a fall is most probable. Slippery tile, hard porcelain edges, steam reducing visibility — and the device is sitting on the vanity, perfectly charged, perfectly useless.
When I presented this problem to an investor early on, he shrugged and said, "So make a waterproof pendant they can't take off." I remember sitting in that meeting thinking: you want to handcuff an 83-year-old to a sensor. That's not a solution. That's a restraint.
The question isn't how to make people wear monitors. It's how to make monitoring invisible.
What If Your Walls Could Listen to Your Breathing?
Not with microphones. With radio waves.
Every Wi-Fi router in your home is constantly transmitting radio frequency signals that bounce off walls, furniture, and — crucially — people. These signals carry something called Channel State Information, or CSI. Unlike the crude signal strength indicator your phone shows (those familiar bars), CSI describes how the wireless signal propagates across dozens or hundreds of individual frequency subcarriers. It captures amplitude and phase for each one. It's essentially a high-resolution electromagnetic fingerprint of the physical environment.
When a person moves through that environment, they disturb the fingerprint. Walk across a room, and the Doppler shift in the reflected signal creates a distinct velocity pattern. Swing your arms while walking, and the CSI captures the complex interplay of limbs moving toward and away from the receiver.
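To make this concrete: a CSI capture is essentially a complex-valued matrix, time samples by subcarriers. Here's a minimal NumPy sketch with illustrative dimensions (real subcarrier counts vary by chipset and channel width), showing how amplitude and phase fall out of the raw numbers:

```python
import numpy as np

# A hypothetical CSI capture: 64 subcarriers sampled 1,000 times.
# Real captures come from the NIC driver; random values stand in here.
rng = np.random.default_rng(0)
csi = rng.standard_normal((1000, 64)) + 1j * rng.standard_normal((1000, 64))

# Each complex entry encodes how one subcarrier arrived at one instant.
amplitude = np.abs(csi)    # attenuation per subcarrier, per sample
phase = np.angle(csi)      # path-length and propagation-delay shifts

print(amplitude.shape, phase.shape)  # (1000, 64) (1000, 64)
```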
But here's what genuinely stunned me the first time I saw the data: you don't have to walk. You just have to breathe.
At 5 GHz, a Wi-Fi signal has a wavelength of about 6 centimeters. The human chest wall displaces roughly 4 to 12 millimeters during normal respiration. That's a small fraction of the wavelength — but it's enough. As the chest expands and contracts, it shifts the reflected signal between constructive and destructive interference, creating a rhythmic oscillation in the CSI phase data. We can reconstruct the breathing waveform from this oscillation with accuracy comparable to medical-grade respiratory belts — experimental evaluations show respiratory rate estimation errors below 3.2 breaths per minute, and deep learning models achieve correlation coefficients exceeding 0.92 with reference chest straps.
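At its core, that extraction is spectral analysis. Here's a deliberately simplified sketch on a synthetic phase signal; the sampling rate, band limits, and noise model are assumptions for illustration, and a real pipeline needs the sanitization steps described later in this piece:

```python
import numpy as np

FS = 20.0                      # assumed CSI sampling rate in Hz
t = np.arange(0, 60, 1 / FS)   # one minute of data

# Synthetic phase oscillation: 0.25 Hz breathing (15 breaths/min) plus noise.
phase = 0.1 * np.sin(2 * np.pi * 0.25 * t) + 0.02 * np.random.randn(len(t))

# Locate the dominant frequency in the respiratory band (0.1-0.5 Hz).
spectrum = np.abs(np.fft.rfft(phase - phase.mean()))
freqs = np.fft.rfftfreq(len(phase), 1 / FS)
band = (freqs >= 0.1) & (freqs <= 0.5)
breaths_per_min = 60 * freqs[band][np.argmax(spectrum[band])]

print(f"Estimated respiration: {breaths_per_min:.1f} breaths/min")  # ~15.0
```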
I remember the night our team first extracted a clean breathing signal from a commodity Wi-Fi router. It was late — well past midnight — and one of my engineers had been lying on a couch in our test space for twenty minutes while we tuned the preprocessing pipeline. When the waveform appeared on screen, smooth and rhythmic, perfectly tracking his respiration, the room went quiet. Then someone said, "He's actually asleep." And we could see it. Not with a camera. Not with a chest strap. Through a wall, via radio waves, from a $30 router.
That was the moment I knew we weren't working on an incremental improvement. We were working on a different paradigm entirely.
Why Can't You Just Use GPT for This?
I get this question constantly. Usually from people who've spent the last two years watching large language models do increasingly impressive things and have reasonably concluded that "AI" means "throw it at a transformer trained on internet text."
CSI data is not text. It's not even close to text. It's continuous, complex-valued, high-dimensional, and governed by Maxwell's equations, not grammar. An LLM cannot "read" a 5 GHz waveform any more than it can taste a lemon. The architectures are fundamentally mismatched.
This is why I get frustrated when I see companies marketing "AI-powered" health monitoring that amounts to an API wrapper around a general-purpose model. At Veriprajna, we build bespoke deep neural networks designed specifically for temporal signal processing. The distinction matters — it's the difference between a system that works in a demo and one that works at 3 AM when someone's grandmother falls in the bathroom.
Our architecture uses three types of neural networks in concert, each handling a different aspect of the signal:
Convolutional Neural Networks treat the CSI data matrix — subcarriers plotted against time — as a kind of image. The CNN learns spatial correlations across frequencies, identifying the spectral "shape" of a fall versus the shape of a spinning ceiling fan.

Long Short-Term Memory networks add temporal context. A fall isn't a single moment; it's a sequence — standing, losing balance, accelerating downward, impact, stillness. The LSTM remembers what came before, which is how we distinguish someone falling from someone flopping onto a couch.

And Dual-Branch Transformers process amplitude and phase data simultaneously through separate pathways, fusing them with an attention mechanism that dynamically prioritizes whichever stream is most informative. During sleep, the model leans on phase data, where the breathing signal lives. During activity, it shifts to amplitude.
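To give a feel for how these pieces connect, here's a skeletal PyTorch sketch of the dual-branch idea. Every layer size, the four-class output, and the gating scheme are illustrative stand-ins, not our production architecture:

```python
import torch
import torch.nn as nn

class DualBranchFusion(nn.Module):
    """Sketch: separate amplitude/phase pathways fused by attention."""
    def __init__(self, subcarriers=64, hidden=128):
        super().__init__()
        # CNN stage treats the (subcarrier x time) matrix like an image.
        def branch():
            return nn.Sequential(
                nn.Conv1d(subcarriers, hidden, kernel_size=5, padding=2),
                nn.ReLU(),
            )
        self.amp_cnn, self.phase_cnn = branch(), branch()
        # LSTM stage adds temporal context across the sequence.
        self.amp_lstm = nn.LSTM(hidden, hidden, batch_first=True)
        self.phase_lstm = nn.LSTM(hidden, hidden, batch_first=True)
        # Attention gate decides which stream dominates right now.
        self.gate = nn.Linear(2 * hidden, 2)
        self.head = nn.Linear(hidden, 4)  # fall / walk / sit / idle

    def forward(self, amp, phase):        # each: (batch, subcarriers, time)
        a = self.amp_cnn(amp).transpose(1, 2)        # (batch, time, hidden)
        p = self.phase_cnn(phase).transpose(1, 2)
        a, _ = self.amp_lstm(a)
        p, _ = self.phase_lstm(p)
        a, p = a[:, -1], p[:, -1]                    # last-timestep summary
        w = torch.softmax(self.gate(torch.cat([a, p], dim=-1)), dim=-1)
        fused = w[:, :1] * a + w[:, 1:] * p          # weighted stream fusion
        return self.head(fused)

logits = DualBranchFusion()(torch.randn(2, 64, 100), torch.randn(2, 64, 100))
print(logits.shape)  # torch.Size([2, 4])
```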
I wrote about the full technical architecture — the preprocessing pipeline, the domain adaptation approach, the physics of Fresnel zones — in our detailed research paper. The short version: this is not a problem you can solve with a pre-trained model and a weekend hackathon. The signal processing alone requires phase unwrapping, Hampel filtering, and principal component analysis before a neural network ever sees the data.
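For the curious, here's a compressed sketch of what that sanitization involves. The window size, outlier threshold, and component count are illustrative defaults, not tuned values:

```python
import numpy as np

def hampel(x, window=11, n_sigmas=3):
    """Replace impulsive outliers with the local median (Hampel filter)."""
    y, k = x.copy(), window // 2
    for i in range(k, len(x) - k):
        win = x[i - k:i + k + 1]
        med = np.median(win)
        mad = 1.4826 * np.median(np.abs(win - med))  # robust sigma estimate
        if np.abs(x[i] - med) > n_sigmas * mad:
            y[i] = med
    return y

def preprocess(csi):                           # csi: (time, subcarriers)
    phase = np.unwrap(np.angle(csi), axis=0)   # undo 2*pi phase jumps
    amp = np.apply_along_axis(hampel, 0, np.abs(csi))  # strip spikes
    # PCA via SVD: keep the components that carry the motion signal.
    centered = amp - amp.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return centered @ vt[:3].T, phase          # top 3 principal components

components, phase = preprocess(
    np.random.randn(500, 64) + 1j * np.random.randn(500, 64))
print(components.shape)  # (500, 3)
```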
An LLM cannot "read" a 5 GHz waveform. The most dangerous thing in health AI isn't bad algorithms — it's good marketing on shallow technology.
How Does Wi-Fi Sensing Actually Detect a Fall?
A fall has a kinematic signature that's surprisingly distinctive in the radio frequency domain. Different activities produce different Doppler patterns — the frequency shift that occurs when a signal bounces off a moving object.
Walking generates a complex, oscillating pattern as arms and legs swing toward and away from the receiver. Sitting down produces a brief, controlled downward velocity. But a fall? A fall shows a specific sequence: irregular motion (loss of balance), rapid acceleration toward the floor (gravity doing its work), a sharp energy spike (impact), and then — critically — near-total stillness.
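Here's a toy version of that sequence detection, run on a synthetic motion-energy stream. The thresholds and window sizes are placeholders, and the real classifier is the neural network described above, not a heuristic:

```python
import numpy as np

FS = 100.0
t = np.arange(0, 10, 1 / FS)
# Toy motion-energy stream: steady walking, a sharp impact at t=6s,
# then near-total silence -- the kinematic sequence described above.
x = np.where(t < 6, np.sin(2 * np.pi * 2 * t), 0.0)
x[int(6 * FS):int(6 * FS) + 10] += 6.0          # impact transient

win = int(FS)                                    # 1-second analysis windows
energy = np.array([np.square(x[i:i + win]).mean()
                   for i in range(0, len(x) - win, win)])

spike = energy.argmax()
baseline = energy[:spike].mean()
fall_like = (energy[spike] > 5 * baseline        # sharp energy spike...
             and np.all(energy[spike + 1:] < 0.1 * baseline))  # ...then stillness
print(fall_like)  # True
```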
That stillness is what matters most. We call it the "Long Lie," and it's often more dangerous than the fall itself. An elderly person lying on the floor for hours, unable to get up, faces rhabdomyolysis, dehydration, pressure injuries. The fall breaks the hip; the Long Lie can kill.
Our system doesn't just detect the fall event (with a sensitivity exceeding 97%); it continues monitoring afterward. If the CSI shows an absence of gross motor movement but the continued presence of micro-motion (breathing) at floor level, the system confirms a "fall with inability to recover" and escalates. This post-fall context is something wearable accelerometers fundamentally cannot provide. A wearable can tell you it experienced a sudden deceleration. It can't tell you that the person has been lying on the bathroom floor, breathing but not moving, for the past forty minutes.
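In pseudocode terms, the escalation logic looks something like the sketch below. The window length, motion floor, and state names are hypothetical illustrations of the behavior just described:

```python
from dataclasses import dataclass

@dataclass
class WindowFeatures:
    gross_motion: float        # broadband CSI energy (arbitrary units)
    breathing_detected: bool   # phase oscillation in the 0.1-0.5 Hz band

def classify_post_fall(windows: list[WindowFeatures],
                       motion_floor: float = 0.05,
                       confirm_windows: int = 4) -> str:
    """Each window summarizes ~30s of post-impact CSI."""
    recent = windows[-confirm_windows:]
    if len(recent) < confirm_windows:
        return "monitoring"
    if all(w.gross_motion < motion_floor for w in recent):
        if all(w.breathing_detected for w in recent):
            return "fall_no_recovery"   # escalate: person down, breathing
        return "fall_no_vitals"         # escalate at highest priority
    return "recovered"                  # person got up and moved

post_impact = [WindowFeatures(0.01, True)] * 4
print(classify_post_fall(post_impact))  # fall_no_recovery
```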
There's another layer that excites me even more: pre-fall detection. By continuously monitoring gait — walking speed, stride consistency — over weeks, the system can identify the subtle mobility deterioration that typically precedes a fall. A gradual slowing of walking speed is a clinically validated predictor of fall risk. This means we can flag someone for preventative physical therapy before the accident, not just respond after it.
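The analytics here can be almost embarrassingly simple. A sketch, with invented numbers and an assumed alerting threshold:

```python
import numpy as np

# Hypothetical weekly median walking speeds (m/s) extracted from CSI
# Doppler profiles over three months. Values are purely illustrative.
weeks = np.arange(12)
speed = np.array([1.02, 1.01, 1.00, 0.99, 0.97, 0.96,
                  0.94, 0.92, 0.91, 0.89, 0.88, 0.86])

slope, _ = np.polyfit(weeks, speed, 1)   # m/s change per week
annualized = slope * 52

# A sustained decline is a recognized fall-risk signal; the
# -0.05 m/s/year cutoff here is an assumed alerting parameter.
if annualized < -0.05:
    print(f"Flag for review: gait speed trending {annualized:.2f} m/s/year")
```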
The Room That Sees Without Eyes
I had an argument with a colleague about privacy that lasted, on and off, for about three weeks.
His position: any system that monitors people in their homes is surveillance, full stop. My position: it depends entirely on what the system can see.
A camera in a bedroom records a person's body, their face, their intimate moments. If the feed is hacked, the damage is catastrophic and irreversible. CSI data — the raw material of Wi-Fi sensing — consists of complex numbers representing signal propagation characteristics. If you intercepted the data stream, you'd see matrices of amplitude and phase values. You would not see a face. You would not see a body. You couldn't reconstruct an image even if you tried. The system is visually blind by design.
Wi-Fi sensing doesn't watch people. It feels the disturbance they create in the electromagnetic field. The distinction isn't semantic — it's the difference between surveillance and awareness.
This matters enormously for the bathroom problem. Cameras are — rightly — prohibited in bathrooms and bedrooms in most care facilities. But Wi-Fi signals penetrate walls, doors, and shower curtains. They work through steam. They work in complete darkness. The most dangerous room in the house becomes monitorable without a single lens pointed at anyone.
For enterprise clients — nursing homes, assisted living facilities, hospital-at-home programs — the regulatory implications are significant. Under GDPR, CSI is classified as biometric data because it can theoretically identify individuals by gait pattern. Under HIPAA, health data derived from monitoring is Protected Health Information. We handle this through strict edge processing: raw CSI data is processed locally on the router or gateway, the high-bandwidth biometric signal never leaves the device, and only abstracted events are transmitted to the cloud. A JSON packet reading {"event": "Fall", "location": "Bathroom", "confidence": 0.98} contains no biometric data and can't be reverse-engineered to identify anyone's physiology.
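The boundary is easy to express in code. A minimal sketch of the edge contract, using the same hypothetical payload fields as the example above:

```python
import json

def emit_event(event: str, location: str, confidence: float) -> bytes:
    """Only an abstracted event ever crosses the network boundary."""
    payload = {"event": event, "location": location,
               "confidence": round(confidence, 2)}
    return json.dumps(payload).encode()  # no waveform, no biometrics

# raw_csi stays on the gateway; it is never serialized off-device.
print(emit_event("Fall", "Bathroom", 0.981))
```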
I explore the full privacy architecture and compliance framework in the interactive version of our whitepaper.
What About Different Rooms and Different Homes?
This is the objection I take most seriously, because for years it was a legitimate killer of Wi-Fi sensing research.
A model trained on CSI data collected in Lab A would fail spectacularly when deployed in Apartment B. Different room dimensions, different furniture, different wall materials — the multipath environment changes everything. The model wasn't learning "what a fall looks like." It was learning "what a fall looks like in this specific room with this specific couch in this specific corner." Overfit to the reflections of one space.
My team spent a genuinely painful period discovering this firsthand. We had beautiful accuracy numbers from our test environment — north of 98% on fall detection — and then we moved the setup to a different floor of the same building and watched the numbers crater. I remember staring at the confusion matrix, thinking we'd wired something wrong. We hadn't. The model had simply memorized the room.
The solution came from an adversarial training approach called Domain Adversarial Neural Networks. The idea is elegant in principle and maddening to implement: you train the network with two competing objectives simultaneously. One head tries to correctly classify the activity — fall versus walk versus sitting. The other head tries to identify which environment the data came from. Then you force the feature extractor to confuse the environment classifier. The network is compelled to learn features that are invariant to the room — the "platonic ideal" of a fall signature that looks the same whether it happens in a studio apartment or a nursing home corridor.
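For readers who want the mechanics: the gradient reversal trick is only a few lines. This PyTorch sketch uses illustrative feature dimensions and class counts; the structure, not the sizes, is the point:

```python
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity on the forward pass; negated gradient on the backward pass."""
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad):
        return -ctx.lam * grad, None

class DANN(nn.Module):
    """Sketch: 128-dim CSI features, 4 activities, 3 training environments."""
    def __init__(self, feat_dim=128, n_classes=4, n_domains=3):
        super().__init__()
        self.features = nn.Sequential(nn.Linear(feat_dim, 64), nn.ReLU())
        self.activity = nn.Linear(64, n_classes)  # fall / walk / sit / idle
        self.domain = nn.Linear(64, n_domains)    # which room produced this?

    def forward(self, x, lam=1.0):
        z = self.features(x)
        # The reversed gradient pushes `features` to FOOL the domain head,
        # forcing representations that look the same in every room.
        return self.activity(z), self.domain(GradReverse.apply(z, lam))

model = DANN()
act_logits, dom_logits = model(torch.randn(8, 128))
loss = (nn.functional.cross_entropy(act_logits, torch.randint(0, 4, (8,)))
        + nn.functional.cross_entropy(dom_logits, torch.randint(0, 3, (8,))))
loss.backward()  # domain gradient arrives reversed at the feature extractor
```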
When we finally got this working — after weeks of hyperparameter tuning and more than one late-night debate about gradient reversal layers — the cross-environment accuracy stabilized. Not perfect. But deployable. "Train once, deploy everywhere" went from aspiration to engineering reality.
The Zero-Hardware Retrofit
For the operators I talk to — the people running assisted living facilities, the insurance actuaries modeling fall risk, the hospital-at-home program directors — the pitch isn't really about the AI. It's about the economics.
These facilities already have enterprise Wi-Fi networks. They already have routers in hallways and access points in common areas. The sensing capability lives in the signals those devices are already transmitting. With the right chipset — Qualcomm's Networking Pro series with its onboard Hexagon NPU, Broadcom's Wi-Fi 7 and Wi-Fi 8 platforms with the BroadStream telemetry engine, or even $5 ESP32 microcontrollers deployed as dedicated sensing nodes — the upgrade is primarily software.
No wearables to purchase, lose, charge, or replace. No cameras to install, maintain, or defend in a privacy lawsuit. A firmware update enables fall detection across 100 rooms simultaneously.
The IEEE is formalizing this with 802.11bf, the WLAN Sensing standard expected to be ratified in late 2024 or 2025. Once it lands, every new Wi-Fi router will natively support CSI extraction and sensing requests. The router becomes a standardized radar. The infrastructure is already there. We just haven't been using it.
People sometimes ask me whether passive Wi-Fi sensing will fully replace wearables. I don't think so — not for active, mobile populations who benefit from heart rate monitoring during exercise or GPS tracking during outdoor activities. Wearables serve a real purpose for the young-old, the 65-to-75 demographic that's digitally literate and physically active. But for the 85-year-old with dementia who can't remember to charge a pendant? For the post-surgical patient recovering at home who needs continuous respiratory monitoring? For the facility operator trying to provide 24/7 safety coverage without a camera in every room? The answer isn't a better wearable. It's no wearable at all.
Others ask about pets — will a dog trigger false alarms? The Doppler signatures of a 15-pound terrier and an 80-year-old human are dramatically different in both velocity profile and body cross-section. The neural network learns this distinction quickly. Cats are trickier, but the temporal context from the LSTM — the sequence of motion, not just a single frame — handles most edge cases.
The Air Is Already Full of Information
I think about my grandmother often when I'm working on this technology. She's not a use case or a persona in a pitch deck. She's a person who wants to live in her own home, with her own routines, without a plastic medallion around her neck broadcasting her frailty to every visitor.
The air in her apartment is already saturated with Wi-Fi signals. They pass through her walls, reflect off her furniture, ripple with every breath she takes. Right now, all of that information dissipates unused — electromagnetic noise, invisible and ignored.
We have the physics to read it. We have the AI to interpret it. We have the hardware already installed in millions of homes. The only thing standing between where we are and where we need to be is the willingness to stop thinking about health monitoring as something you strap onto a person and start thinking about it as something you weave into the space around them.
The future of health monitoring isn't about better gadgets. It's about making the building itself aware — and making that awareness invisible.
The era of asking vulnerable people to manage their own surveillance technology is ending. Not because the technology failed, but because the assumption behind it — that compliance is a user problem rather than a design flaw — was always wrong. The answer was never a better button to press. It was eliminating the need to press anything at all.