AI Biomechanics & Exercise Verification

Your PT platform can see movement.
It cannot tell if the movement is right.

Pose estimation is effectively free: MediaPipe's BlazePose and Google's MoveNet are open-source and run on any phone. The hard problem is the layer above: exercise-specific biomechanical intelligence that knows a 70-year-old post-knee-replacement patient has different squat depth targets than a 30-year-old corporate athlete. We build that layer. Custom exercise verification engines for PT platforms and corporate wellness programs, from camera input to RTM-compliant compliance data.

  • 35% of PT patients fully adhere to home exercises (Physiopedia / Sprypt, 2025)
  • $3,591 annual MSK burden per employee (UHC $486 direct + BioFunctional $3,105 productivity)
  • 96% of employers will offer virtual MSK care by 2027 (Business Group on Health, 2025)

Whether you are building a PT platform that needs exercise verification for RTM billing, or a corporate wellness program that needs fraud-resistant exercise tracking, the gap is the same: raw pose data in, clinically meaningful decisions out.

The intelligence gap between sensing and understanding

Every fitness AI company runs pose estimation. The question is what happens after the keypoints are extracted.

A specific example: knee valgus during a PT squat

A 62-year-old patient, 8 weeks post-ACL reconstruction, performs prescribed bodyweight squats at home. Their phone camera captures the movement. BlazePose extracts 33 keypoints per frame at 30 FPS. Here is what the raw data shows:

  • Left knee X-coordinate shifts medially by 4.2cm relative to the hip-ankle line during the descent phase (frames 45-72).
  • The knee flexion angle reaches 78 degrees at maximum depth (prescribed target: 90 degrees).
  • The descent takes 1.1 seconds. The ascent takes 2.3 seconds.

A pose estimation library returns those numbers. It does not know that:

  • The 4.2cm medial shift indicates knee valgus, which is a re-injury risk factor for post-ACL patients specifically.
  • 78 degrees falls short of the 90-degree target, but for week 8 post-ACL, this may be within acceptable progression if the patient was at 60 degrees two weeks ago.
  • The 2:1 ascent-to-descent ratio suggests compensatory movement. A controlled squat should be closer to 1:1. This patient may be using momentum on the way down and struggling on the way up.
  • Combined, this pattern (valgus + insufficient depth + compensatory tempo) should trigger a clinician alert, not just a "good rep" count.

This interpretation layer is what we build. Pose estimation is the sensor. Exercise intelligence is the brain. The sensor is commoditized. The brain is not.
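The interpretation layer can be illustrated in a few lines over the extracted keypoints. A minimal sketch (NumPy; the coordinates and threshold values are hypothetical examples, not our production rules):

```python
import numpy as np

def knee_medial_shift(hip, knee, ankle):
    """Signed perpendicular distance (m) of the knee from the hip-ankle
    line in the frontal plane. Sign convention is illustrative."""
    line = ankle - hip
    line = line / np.linalg.norm(line)
    to_knee = knee - hip
    return float(line[0] * to_knee[1] - line[1] * to_knee[0])  # 2D cross product

# Hypothetical single-frame keypoints (x, y) in metres
hip, knee, ankle = np.array([0.30, 1.00]), np.array([0.342, 0.55]), np.array([0.30, 0.10])

flags = []
if abs(knee_medial_shift(hip, knee, ankle)) > 0.03:  # clinician-set valgus threshold
    flags.append("knee_valgus")
if 2.3 / 1.1 > 1.5:                                  # ascent/descent tempo ratio
    flags.append("compensatory_tempo")
# flags -> ["knee_valgus", "compensatory_tempo"]
```

The point of the sketch: the geometry is trivial once keypoints exist. What is not trivial is knowing that 3cm of medial shift matters for this patient at this recovery stage.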

For PT platform operators

65% of patients abandon home exercise programs within the first month. Self-reported compliance is unreliable. Clinicians want to bill RTM codes (98975-98981) but need verified exercise data with timestamps, quality metrics, and protocol mapping to satisfy CMS documentation requirements.

The 2026 CMS Final Rule added CPT codes 98979 and 98985, lowering the RTM billing threshold from 16 days to as few as 2 days of monitoring and from 20 minutes to 10 minutes of management time. More patients are now billable. But the documentation still requires device-gathered data tied to treatment decisions.

For corporate wellness directors

Only 25% of employees actually use available wellness programs. Over 50% express reluctance to share health data. And after multiple Fitbit-shaking scandals, employers are demanding exercise verification that does not feel like surveillance.

The corporate wellness market is hitting $100B in 2026, yet engagement and trust lag far behind the spend. Meanwhile, 36% of MSK surgeries are unnecessary, costing the workforce $90B (Employee Benefit News). Verified exercise data creates a different value proposition: early detection of declining movement quality that triggers clinical review before expensive interventions become necessary.

Who builds what in exercise verification

Pull this comparison up in your next vendor evaluation. Every entry reflects shipped capabilities as of Q1 2026, not roadmap promises.

Hinge Health
  • Ships: Full-stack MSK platform. TrueMotion computer vision, Robin AI triage assistant. $732M projected 2026 revenue.
  • Verification method: Computer vision (Movement Analysis) + IMU wearable sensor.
  • Where it falls short: Closed platform. Cannot embed in your product. Priced for enterprise employers, not PT clinic networks. Their verification tech is locked inside their care model.

Sword Health + Kaia
  • Ships: Acquired Kaia ($285M, Jan 2026). Combines M-band wearable + Kaia's Motion Coach computer vision. Planning $500M round.
  • Verification method: Wearable sensor biofeedback + markerless computer vision (combined post-acquisition).
  • Where it falls short: Same lock-in as Hinge. Replacing Kaia's US MSK solution with Sword's platform, so Kaia customers are in transition. Hardware dependency (M-band) adds logistics friction for scaling.

Peloton IQ
  • Ships: Form-tracking cameras on Cross Training series (launched Oct 2025). Rep counting, form corrections, wearable integration.
  • Verification method: Built-in AI camera on hardware.
  • Where it falls short: Consumer fitness, not clinical. No RTM capability. Hardware-locked (only works on Peloton equipment). Not available as a platform or SDK.

Kemtai
  • Ships: B2B computer vision platform. 44 body landmarks, skeleton overlay, real-time corrective guidance. Browser-based (WebGPU).
  • Verification method: Browser-based pose estimation with rule-based form correction.
  • Where it falls short: General fitness focus, not clinically validated for PT. Browser-based means no NPU acceleration (higher latency). Rule engine is general-purpose, not configurable per patient per exercise.

QuickPose
  • Ships: B2B iOS SDK for fitness apps. AI counters, timers, form check. Quick integration.
  • Verification method: iOS SDK with pose estimation + basic angle thresholds.
  • Where it falls short: iOS only. Basic form feedback, not deep biomechanical analysis. No temporal modeling (rep quality, fatigue detection, trend analysis). No RTM documentation output.

Limber Health
  • Ships: RTM billing specialist. Patent-pending risk stratification. 3.3x HEP session completion. 30%+ better outcomes (Athletico data).
  • Verification method: Self-reported exercise tracking + RTM billing workflow.
  • Where it falls short: Strong on RTM billing workflow, but exercise compliance is self-reported, not verified by computer vision. The billing infrastructure is excellent; the exercise verification is the gap.

MedBridge
  • Ships: 3,500+ healthcare organizations. Exercise prescription, patient-facing therapy videos, RTM capabilities.
  • Verification method: Exercise video library + patient self-reporting + RTM.
  • Where it falls short: Excellent content and clinical workflow, but exercise completion is video-based (patient watches and reports). No form verification, no quality scoring, no biomechanical analysis.

Big 4 / Large SIs
  • Ships: Accenture, Deloitte, and similar firms advise on digital health strategy and platform selection.
  • Verification method: Strategy advisory, not technology build.
  • Where it falls short: They recommend and integrate platforms; they do not build exercise intelligence engines. Engagements run $500K-$2M+ and produce recommendations, not deployed systems. For a PT platform that needs an SDK, not a strategy deck, they are the wrong tool.

Veriprajna
  • Ships: Custom exercise intelligence layer. Edge SDK, RTM documentation pipeline, clinician-configurable thresholds.
  • Verification method: On-device pose estimation + TCN temporal analysis + biomechanical rule engine.
  • Where it falls short: Not a care platform. Does not provide PTs, clinical workflows, or patient management. We build the verification engine; you build (or already have) the product around it. Monocular camera accuracy has real limits (see FAQ).

What we build

Five capabilities, each designed to solve a specific problem in the exercise verification pipeline. We build these as standalone modules or as an integrated system, depending on what your platform needs.

Clinical Exercise Intelligence Engine

The hard part. Biomechanical rule sets for 30+ PT exercises, each defining: target joint angles per exercise phase, acceptable ROM ranges, minimum amplitude for valid rep counting, smoothness criteria (Log Dimensionless Jerk), and bilateral symmetry baselines.

We calibrate thresholds with kinesiologists, not just ML engineers. A knee extension threshold for a post-surgical patient at week 4 is fundamentally different from week 12. The rule engine handles this as clinician-configurable parameters, not hardcoded values. For 30 core PT exercises, we target 85%+ agreement with expert PT assessment on quality scoring.
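As a sketch, clinician-configurable parameters might look like the following. The field names and default numbers are illustrative, not our shipped schema:

```python
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class ExerciseThresholds:
    """One exercise's rule parameters; every value is clinician-editable."""
    target_angle_deg: float    # e.g. knee flexion at max depth
    valid_range_deg: tuple     # (min, max) for a countable rep
    min_amplitude_deg: float   # below this, no rep is counted
    smoothness_floor: float    # Log Dimensionless Jerk limit (illustrative value)
    max_asymmetry: float       # bilateral symmetry limit, 0-1

# Population default for a bodyweight squat (illustrative numbers)
SQUAT_DEFAULT = ExerciseThresholds(90.0, (80.0, 110.0), 45.0, -8.0, 0.15)

# Week-4 post-surgical override: same engine, different parameters
week4 = replace(SQUAT_DEFAULT, target_angle_deg=70.0, valid_range_deg=(60.0, 85.0))
```

The design choice this illustrates: thresholds live in data, not in code, so a therapist can adjust them per patient without an engineering release.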

RTM-Compliant Verification Pipeline

From camera input to structured data that satisfies CMS documentation requirements for CPT codes 98975-98981 (plus new 2026 codes 98979 and 98985). The pipeline outputs timestamped session reports: verified rep counts, per-rep quality scores, ROM measurements mapped to the prescribed exercise protocol, and trend data across sessions.

Output format is FHIR-compatible JSON, designed for integration with EHR systems. The report ties directly to the patient's prescribed exercise plan, so the clinician sees "Patient completed 12/15 prescribed knee extensions, average quality score 7.2/10, ROM trend: 78 to 84 degrees over 2 weeks" rather than raw coordinate data.
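A sketch of what such a session report might contain. The field names below are illustrative, not a published FHIR profile:

```python
import json

# Illustrative session report; keys and reference strings are assumptions.
report = {
    "patient_ref": "Patient/example-123",      # FHIR-style resource reference
    "care_plan_ref": "CarePlan/knee-ext-hep",
    "session_start": "2026-02-10T09:14:00Z",
    "exercise": "knee_extension",
    "prescribed_reps": 15,
    "verified_reps": 12,
    "avg_quality_score": 7.2,
    "rom_deg": {"min": 78, "max": 84},
    "rom_trend": "improving",
    "flags": [{"rep": 9, "type": "knee_valgus"}],
}
print(json.dumps(report, indent=2))
```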

Edge-First Motion Analysis SDK

Cross-platform SDK (iOS + Android) that runs entirely on-device. Pose estimation via BlazePose (33 keypoints, 3D) or MoveNet Lightning (17 keypoints, speed-optimized), with NPU acceleration through CoreML and NNAPI delegates. Inference at 15ms on NPU, total glass-to-glass latency under 50ms.

Video frames are discarded immediately after keypoint extraction. No pixel data leaves the device. This is not just a privacy feature; it is an architectural decision that eliminates BIPA/GDPR biometric data exposure, removes cloud inference cost (zero marginal cost per session), and enables offline operation for patients with unreliable connectivity.

Population-Adaptive Assessment

Exercise scoring that adapts to the user's clinical profile. A 70-year-old post-knee-replacement patient has different squat depth requirements than a 30-year-old corporate athlete in a wellness program. The system supports clinician-settable thresholds per patient per exercise, with sensible defaults based on age group, condition type, and recovery phase.

This includes camera setup intelligence. Different exercises require different camera angles: side view for squat depth assessment, front view for knee valgus detection. The SDK includes a setup wizard that gives real-time positioning feedback ("Move your phone 2 feet to the left") and confidence gating that pauses analysis when keypoint visibility drops below threshold rather than guessing angles from occluded joints.

Agentic Exercise Monitoring

The industry is moving from passive tracking to autonomous health agents. ARPA-H's ADVOCATE program is building clinical AI agents that autonomously adjust care plans. We build exercise monitoring agents that go beyond single-session scoring. The agent tracks patterns across sessions: declining ROM trends that suggest the patient is regressing, increasing asymmetry that indicates compensation patterns, fatigue-driven form degradation that correlates with time-of-day or days-since-last-session.

For PT platforms, this means proactive clinician alerts ("Patient X's knee flexion ROM has declined 8 degrees over the last 5 sessions, suggesting possible setback") instead of waiting for the next in-person visit. For corporate wellness, it means program-level trend analysis that identifies which exercise interventions are actually improving MSK outcomes and which are producing participation without progress.

From camera frame to clinical insight: the pipeline

A patient opens your PT app and starts a prescribed set of 15 bodyweight squats. Here is what happens in the 46 milliseconds between each camera frame and the feedback on screen.

1. Frame capture and keypoint extraction (~30ms)

The device camera captures a frame. BlazePose (running on the NPU via CoreML or NNAPI delegate) extracts 33 skeletal keypoints with 3D coordinates (x, y, z) and per-keypoint confidence scores. Total inference: 10-15ms on NPU. The video frame is discarded. Only coordinates proceed.

2. Jitter smoothing via 1-Euro filter (<1ms)

Raw keypoints jitter frame-to-frame due to pixel quantization noise. A moving average would smooth the jitter but add 300ms+ of latency. We use the 1-Euro filter, which adapts its cutoff frequency based on velocity: aggressive smoothing when the patient holds a pose (eliminates visual jitter), minimal smoothing during fast movement (preserves responsiveness). The result: stable coordinates with near-zero added latency.
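The filter itself is compact. A minimal single-channel implementation (the min_cutoff and beta values are illustrative starting points, not our tuned parameters):

```python
import math

class OneEuroFilter:
    """Velocity-adaptive low-pass filter (Casiez et al., CHI 2012)."""
    def __init__(self, freq, min_cutoff=1.0, beta=0.05, d_cutoff=1.0):
        self.freq, self.min_cutoff = freq, min_cutoff
        self.beta, self.d_cutoff = beta, d_cutoff
        self.x_prev = self.dx_prev = None

    @staticmethod
    def _alpha(cutoff, freq):
        tau = 1.0 / (2 * math.pi * cutoff)
        return 1.0 / (1.0 + tau * freq)

    def __call__(self, x):
        if self.x_prev is None:
            self.x_prev, self.dx_prev = x, 0.0
            return x
        dx = (x - self.x_prev) * self.freq              # estimated velocity
        a_d = self._alpha(self.d_cutoff, self.freq)
        dx_hat = a_d * dx + (1 - a_d) * self.dx_prev
        # Cutoff rises with speed: smooth when still, responsive when moving
        cutoff = self.min_cutoff + self.beta * abs(dx_hat)
        a = self._alpha(cutoff, self.freq)
        x_hat = a * x + (1 - a) * self.x_prev
        self.x_prev, self.dx_prev = x_hat, dx_hat
        return x_hat
```

In practice one filter instance runs per keypoint per axis; the per-frame cost is a handful of multiplies, which is where the "<1ms" budget comes from.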

3. Confidence gating (<1ms)

If the hip keypoint confidence drops below 0.5 (arm occluding the hip, poor lighting, phone angle issue), analysis pauses and the patient sees "Adjust camera angle, hip not visible." We never guess joint angles from low-confidence keypoints. A false "Your knee is caving in" alert during a correct rep destroys trust immediately. A missed alert during actual valgus creates liability. The threshold is strict by design.
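The gate is deliberately simple. A sketch, where the keypoint dictionary shape and prompt wording are illustrative:

```python
MIN_CONFIDENCE = 0.5  # strict by design: pause rather than guess

def confidence_gate(keypoints, required=("left_hip", "left_knee", "left_ankle")):
    """Return a 'paused' prompt if any required joint is below confidence."""
    missing = [name for name in required if keypoints[name] < MIN_CONFIDENCE]
    if missing:
        return {"status": "paused",
                "prompt": f"Adjust camera angle, {missing[0].replace('_', ' ')} not visible"}
    return {"status": "ok"}

# keypoints maps joint name -> per-keypoint confidence from the pose model
confidence_gate({"left_hip": 0.31, "left_knee": 0.92, "left_ankle": 0.88})
```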

4. Temporal analysis via TCN (~2ms)

The smoothed keypoint stream feeds into a Temporal Convolutional Network with causal dilated convolutions. Unlike LSTMs (which process frames sequentially and struggle with long sequences), TCNs use parallel convolutions with exponentially growing receptive fields. Layer 1 sees adjacent frames. Layer 10 sees 512 frames of history. This lets the model simultaneously analyze instantaneous form (is the knee valgus happening right now?) and long-term patterns (is rep quality degrading as the set progresses?). Recent research (MSA-TCN, IEEE 2025) achieves 98.7% HAR accuracy at 0.08MB model size and 1.8ms inference on mid-range smartphones.
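The receptive-field arithmetic behind that claim is easy to verify. A sketch of a causal dilated convolution plus the closed-form receptive field (layer-counting conventions vary; with kernel size 2 and dilations doubling from 1, nine dilated layers reach exactly 512 frames):

```python
import numpy as np

def causal_dilated_conv(x, w, dilation):
    """1D causal convolution: output at t sees only t, t-d, t-2d, ..."""
    k, pad = len(w), (len(w) - 1) * dilation
    xp = np.concatenate([np.zeros(pad), x])       # left-pad so no future leaks in
    return np.array([sum(w[j] * xp[t + pad - j * dilation] for j in range(k))
                     for t in range(len(x))])

def receptive_field(kernel_size, num_dilated_layers):
    # Dilations 1, 2, 4, ... double at each layer
    return 1 + (kernel_size - 1) * (2**num_dilated_layers - 1)

receptive_field(2, 9)   # 512 frames (~17 s of movement history at 30 FPS)
```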

5. Exercise-specific biomechanical analysis (<1ms)

The biomechanical rule engine applies exercise-specific logic. For this squat: Amplitude (did hip displacement cross the clinician-set depth threshold?), Smoothness (Log Dimensionless Jerk score, where high jerk indicates tremor or momentum cheating), Symmetry (asymmetry index comparing left/right leg signal energy), and Tempo (descent-to-ascent ratio as a compensatory movement indicator). Each metric maps to a per-rep quality score.
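Two of those metrics are concrete enough to sketch directly. The LDLJ here follows the common formulation from Balasubramanian et al.; the asymmetry index is one energy-based variant, not necessarily the exact form we deploy:

```python
import numpy as np

def ldlj(velocity, fs):
    """Log Dimensionless Jerk of a velocity profile (Balasubramanian et al.).
    More negative = less smooth (more tremor or momentum cheating)."""
    dt = 1.0 / fs
    jerk = np.gradient(np.gradient(velocity, dt), dt)   # d^2 v / dt^2
    duration = len(velocity) * dt
    dlj = (duration**5 / np.max(np.abs(velocity))**2) * np.sum(jerk**2) * dt
    return -np.log(dlj)

def asymmetry_index(left, right):
    """Relative signal-energy difference between limbs: 0 = symmetric."""
    e_l, e_r = float(np.sum(np.square(left))), float(np.sum(np.square(right)))
    return abs(e_l - e_r) / (e_l + e_r)

t = np.linspace(0, 1, 100)
smooth = np.sin(np.pi * t)                              # bell-shaped velocity
jerky = smooth + 0.05 * np.sign(np.sin(40 * np.pi * t)) # tremor-like ripple
```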

6. Real-time feedback + session report (<1ms)

The patient receives concurrent audio/haptic feedback ("Go deeper" or "Good rep"). At session end, the SDK produces a structured JSON report: 12/15 prescribed reps completed, average quality 7.4/10, knee flexion ROM 78-84 degrees (improving from last session's 72-80), one valgus flag on rep 9. This report maps directly to the prescribed protocol and feeds your RTM documentation pipeline.

Total glass-to-glass latency: ~46ms. For context, human visual reaction time is 150-250ms. The system detects and responds to form errors faster than the patient can perceive them, enabling true concurrent feedback rather than the "latent feedback" that cloud-based systems deliver 2-5 seconds after the movement has already happened.

How we work

A typical engagement runs 5-8 months from assessment to production deployment. The timeline depends on how many exercises you need verified and whether your platform already has pose estimation integrated.

Weeks 1-3

Platform Assessment

  • Audit your current tech stack: existing pose estimation, mobile frameworks, backend infrastructure, EHR integrations
  • Map your exercise library to biomechanical complexity tiers (periodic/simple, multi-phase, isometric)
  • Identify your RTM billing workflow requirements or corporate wellness reporting needs
  • Define the 10-15 highest-priority exercises for Phase 1

Deliverable: Technical requirements document + exercise priority matrix + architecture recommendation

Weeks 4-10

Intelligence Build

  • Build exercise rule engine with kinesiologist-calibrated thresholds for priority exercises
  • Train and optimize TCN model for your exercise set, quantize to INT8 for edge deployment
  • Integrate SDK with your mobile app (CoreML/NNAPI delegates, camera pipeline, UI hooks)
  • Build RTM documentation output format or wellness reporting dashboard integration

Deliverable: Working SDK integrated in your app + exercise rule library + documentation pipeline

Weeks 11-16

Clinical Validation

  • Test against 10+ licensed PTs or certified trainers assessing 50+ subjects across body types
  • Target: 85%+ agreement between system quality scoring and expert assessment
  • Iterate on thresholds for exercises that fall below target (this always happens with 2-3 exercises)
  • Document accuracy limitations honestly. Some exercises will have caveats noted in the system.

Deliverable: Validation report with per-exercise accuracy metrics + threshold adjustments + limitation documentation

Weeks 17-20+

Pilot and Scale

  • Deploy to a controlled pilot group (50-200 patients or employees) with monitoring dashboards
  • Collect real-world accuracy data across device types, lighting conditions, and user populations
  • Refine confidence thresholds and camera setup guidance based on pilot feedback
  • Scale to production with ongoing threshold refinement and exercise library expansion

Deliverable: Production deployment + pilot performance report + expansion roadmap for additional exercises

Honest caveat: Adding a new exercise to the library takes 1-2 weeks each. Exercises with clear periodic patterns (squats, calf raises, bicep curls) calibrate faster. Complex multi-phase movements (Turkish get-ups, Olympic lifts) or non-periodic exercises (yoga flows, isometric holds) take longer and may carry lower confidence scores. We scope this upfront so you know what you are getting.

Exercise verification readiness assessment

Answer six questions about your platform's current state. The assessment maps where you are on the exercise verification maturity curve and identifies the specific gaps to close.

1. Does your platform currently use any form of pose estimation or motion tracking?

2. How does your platform currently verify exercise completion?

3. Can clinicians or program managers configure exercise thresholds per user?

4. Does your exercise data output support RTM billing or structured wellness reporting?

5. Where does exercise analysis run?

6. How many exercises does your platform need to verify?

Questions PT platforms and wellness buyers actually ask

How do I add AI exercise form correction to my existing PT platform?

We build a mobile SDK that integrates with your existing iOS and Android apps. The SDK handles on-device pose estimation (MediaPipe BlazePose for 33-keypoint tracking or MoveNet Lightning for speed-critical scenarios), jitter smoothing via 1-Euro filtering, and exercise-specific form analysis. Your app calls the SDK when a patient starts an exercise session; the SDK returns structured data: rep counts, quality scores per rep, joint angle measurements, and session compliance summaries.

Integration typically takes 3-4 weeks for the API connection, plus 2-3 weeks for UI work on your side to display feedback. The SDK runs entirely on-device using CoreML (iOS) or NNAPI (Android) delegates, so there is no per-inference cloud cost and no video data leaves the patient's phone.

For PT-specific deployments, we include clinician-configurable thresholds: your therapists set target ROM, acceptable ranges, and quality criteria per patient per exercise through a web dashboard. The SDK enforces those thresholds during the session and flags deviations in the compliance report.
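As a schematic of that call flow (Python pseudocode for readability; the class and method names are hypothetical, and the real SDK surface is Swift/Kotlin):

```python
class ExerciseSession:
    """Hypothetical client-side session object; illustrates the call flow only."""
    def __init__(self, exercise_id, thresholds):
        self.exercise_id, self.thresholds = exercise_id, thresholds
        self.rep_scores = []

    def on_frame(self, keypoints):
        """Your camera pipeline calls this per frame; the SDK returns an
        optional real-time cue ('Go deeper', 'Good rep') or None."""
        return None  # analysis elided in this sketch

    def finish(self):
        """Returns the structured compliance summary for RTM documentation."""
        return {"exercise": self.exercise_id,
                "verified_reps": len(self.rep_scores),
                "avg_quality": (sum(self.rep_scores) / len(self.rep_scores)
                                if self.rep_scores else None)}

session = ExerciseSession("knee_extension", thresholds={"target_rom_deg": 90})
summary = session.finish()
```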

Can camera-based pose estimation actually meet clinical accuracy requirements for RTM billing?

Honestly, it depends on the exercise and the measurement. MediaPipe BlazePose shows Pearson correlation of 0.91 for upper-limb movements and 0.80 for lower-limb movements against Qualisys motion capture (the gold standard). For knee flexion specifically, monocular camera measurement has a mean absolute error of 9.3 to 21.9 degrees in 2D. That is not clinical grade for precise goniometric measurement.

But RTM billing under CPT codes 98975-98981 does not require goniometric precision. CMS documentation requirements specify timestamped data from a monitoring device, patient interaction records, and treatment plan decisions based on monitoring data. What clinicians need for RTM is verified exercise completion (did the patient do the prescribed 15 reps of knee extensions?), approximate quality assessment (were the reps within a reasonable ROM range?), and trend data over time (is ROM improving week over week?). Camera-based systems deliver this reliably.

Where we draw the line: we do not claim clinical-grade angle measurement from a single phone camera. For patients where precise ROM measurement matters (post-surgical recovery milestones, for example), we recommend supplementing with goniometer checks during in-person visits. The camera system handles the 28 days between visits when the patient is doing exercises at home unsupervised.

What about employee privacy concerns with camera-based exercise monitoring in corporate wellness?

Over 50% of employees express reluctance to share health information with their employer, and camera-based monitoring amplifies that reluctance. We address this with an edge-first architecture where no video ever leaves the device. The phone camera captures frames, the on-device model extracts skeletal keypoint coordinates (33 x,y,z values per frame), and the video frames are discarded immediately. Only aggregate session data reaches the employer's wellness platform: exercise type, rep count, quality score, session duration. No video. No keypoint streams. No movement patterns that could function as biometric identifiers.

This matters legally too. Skeletal keypoint coordinate streams may constitute biometric data under BIPA (Illinois) and GDPR Article 9, since gait analysis has been demonstrated as a biometric identifier. By processing on-device and transmitting only aggregate metrics, we stay on the right side of biometric privacy law.

The employee sees their own form feedback in real time on their screen. The employer sees a compliance dashboard showing participation rates and aggregate quality trends. The gap between those two views is the privacy boundary, and we enforce it architecturally, not just with policy.

How does this compare to Hinge Health or Sword Health for employer MSK programs?

Hinge Health (projecting $732M revenue in 2026) and Sword Health (which acquired Kaia Health for $285M in January 2026) are full-stack platforms: they provide the PT, the exercises, the monitoring, and the clinical support. If you want to buy an end-to-end MSK solution for your employees, those are strong options. Veriprajna is not competing with them on that. We build the exercise verification intelligence layer for organizations that need it embedded in their own platform.

Three scenarios where this matters. First, if you are a PT platform or digital health company building your own MSK product, you need exercise verification technology but do not want to white-label a competitor's product; we build the SDK that powers your platform's exercise monitoring. Second, if you are a large employer (5,000+ employees) that already has an MSK vendor, you may want independent exercise verification for your broader wellness program beyond MSK: general fitness challenges, preventive exercise, ergonomic compliance. Third, if you operate in a regulated context (insurance underwriting, workers' compensation claim validation), you need the verification layer decoupled from any single care platform so it can be audited independently.

We are the verification layer, not the care platform.

What exercises can the system verify and how long does it take to add new ones?

We start deployments with a core library of 30 PT exercises covering the most common rehabilitation protocols: ROM exercises (shoulder flexion and abduction, knee flexion and extension, hip flexion, ankle dorsiflexion), strengthening (squat, lunge, bridge, calf raise, wall pushup, seated row, bicep curl), balance (single-leg stand, tandem stance), and functional movements (sit-to-stand, step-up, gait analysis). Each exercise has a biomechanical rule set defining valid form thresholds: target joint angles, acceptable ranges, minimum amplitude for rep counting, smoothness criteria, and symmetry baselines.

Adding a new exercise takes 1-2 weeks. The process involves defining the biomechanical rule set with a kinesiologist (which joints to track, what angles define the exercise phases, what constitutes a quality rep), collecting calibration data from 20-30 subjects across body types, and validating against expert PT assessment with a target of 85%+ agreement on quality scoring.

Exercises with clear periodic patterns (squats, bicep curls, calf raises) are straightforward. Complex multi-phase movements (Turkish get-ups, Olympic lifts) or non-periodic movements (yoga flows, isometric holds) require more calibration time and may have lower confidence scores. We are transparent about which exercises the system handles well and which it does not.

How does AI exercise verification actually reduce MSK costs for employers?

MSK disorders cost employers approximately $40.51 per member per month in direct healthcare costs (UnitedHealthcare), plus $3,105 per employee annually in productivity losses from MSK-related absenteeism: roughly $3,591 per employee per year in combined burden. The cost reduction mechanism is not the AI itself. It is what verified exercise data enables.

First, early intervention: when the system detects declining ROM trends or increasing asymmetry in a participant's exercise data, it triggers a clinical review before the condition worsens into a surgical case. 36% of MSK surgeries are unnecessary (Employee Benefit News), and each avoided surgery saves $30,000-$50,000. Second, verified adherence drives better outcomes: PT patients using RTM-enabled exercise monitoring complete 3.3x more home exercise sessions than those on standard programs (Limber Health data), and Athletico Physical Therapy reports 30%+ better outcomes with RTM. Third, for corporate wellness programs specifically, verified exercise eliminates the fraud that has eroded employer trust. When incentives are tied to verified completion rather than self-reported activity, participation among genuine exercisers increases because the system is no longer rewarding people who shake their Fitbit.

The realistic savings range is $800-$2,000 per engaged employee per year, depending on the population's MSK burden and the program's engagement rate.

Technical Research

The interactive whitepapers behind this solution page. These cover the technical foundations in depth.

Your patients are doing exercises at home right now. You do not know if they are doing them correctly.

65% abandon home exercise programs within the first month. Of those who continue, self-reported compliance overstates actual adherence.

Verified exercise data changes that equation. It gives clinicians real compliance data for treatment decisions, gives employers confidence that wellness dollars are producing outcomes, and gives patients real-time feedback that makes home exercise programs actually work. The technology to capture movement is free. The intelligence to interpret it is what we build.

Exercise Verification Assessment

  • ✓ Audit your platform's current motion analysis capabilities
  • ✓ Map your exercise library to biomechanical complexity tiers
  • ✓ Identify RTM billing or wellness reporting requirements
  • ✓ Recommend architecture (edge vs hybrid, SDK vs API)

Exercise Intelligence Build

  • ✓ Custom exercise rule engine with kinesiologist calibration
  • ✓ Edge SDK integration (iOS + Android, NPU-accelerated)
  • ✓ RTM documentation pipeline or wellness reporting integration
  • ✓ Clinical validation against expert PT assessment