Flood Risk Intelligence
More than two-thirds of US flood damage occurs outside FEMA's high-risk zones. If your rating engine still anchors to Zone AE vs. Zone X, you're mispricing risk on both sides: overcharging the elevated house inside the zone, undercharging the slab-on-grade house outside it. The carriers that moved to property-level AI scoring are already cream-skimming your best risks.
We build the flood risk intelligence layer that connects vendor scores, satellite monitoring, and your claims data into a unified rating factor your DOI examiner can approve.
- **68.3%** — flood damage outside FEMA high-risk zones (NC State / First Street Research)
- **106.1%** — projected homeowners combined ratio, 2025 (III / AM Best)
- **20% CAGR** — private flood policy growth, 2020-2024 (Resources for the Future)
Consider a concrete scenario that plays out thousands of times across your book every year.
A single-family home in Harris County, Texas. FEMA Zone X (minimal flood hazard). Built 2004 on a slab foundation with no elevation above grade. The lot is 85% impervious surface (concrete driveway, patio, detached garage). The nearest storm drain is 400 feet away and part of a 30-year-old municipal system designed for a 10-year rainfall event.
Zone X. No flood insurance mandate. If the homeowner buys a voluntary flood policy, it's priced off the NFIP Risk Rating 2.0 factors, which don't account for the impervious surface ratio, the undersized drainage infrastructure, or the fact that the house has zero first floor elevation above grade. Your system quotes a premium of $450/year.
On a book of 50,000 homeowners policies in Southeast Texas, this pattern of mispriced Zone X properties typically accounts for $2.8M-$4.2M in annual leakage. That's 30-40 properties generating $70K-$120K in flood claims per event against $450 annual premiums.
This isn't a hypothetical. Harris County has 1.2 million properties in Zone X. After Hurricane Harvey, 70% of flood claims came from outside FEMA high-risk zones. The carriers that identified these properties before the event reduced their cat loss ratio by 8-12 points that year.
Every vendor below solves part of the problem. None solves it end-to-end. The real challenge is building the integration and regulatory documentation that turns point solutions into an approved rating plan.
| Provider | What They Do | Strength | Gap |
|---|---|---|---|
| ZestyAI | CV-based property intelligence. Z-FLOOD, Z-FIRE, Z-WIND scores from aerial imagery and building permits. | Production-proven at scale. 6+ carrier partnerships signed in Q1 2026 alone. Multi-peril coverage. | No pluvial drainage modeling. Opaque model internals make DOI filings harder in states like CO and NY. Static scores, no event-triggered monitoring. |
| ICEYE | SAR satellite constellation for real-time flood monitoring. 30+ satellites, sub-24hr revisit. | Only provider with proprietary satellite data. Munich Re and AXA partnerships (2026). 6-hour flood extent updates during events. | Observation only, not predictive. +/-15cm urban depth uncertainty (double-bounce). Requires custom pipeline to turn raw SAR into claims workflow. |
| First Street | Flood Factor scores (1-10) for every US property. 30-year cumulative risk. Free consumer data, institutional API. | Most comprehensive US flood risk database. Strong public awareness. Includes fluvial, coastal, and pluvial hazards. | Hazard-only model. Doesn't assess structural vulnerability (FFE, building materials). Not currently accepted as a regulatory rating factor. |
| Fathom (Swiss Re) | Global flood hazard data. Swiss Re integrating into internal cat model (Jan 2026). 50,000-year probabilistic event sets. | Physics-based modeling. Best forward-looking climate scenarios. Swiss Re backing lends credibility with reinsurers. | Owned by Swiss Re, creating potential conflict for carriers with other reinsurance relationships. Hazard layer only, no property-level vulnerability. |
| Verisk / AIR | Incumbent cat modeling. Flood Score 3.0 for property-level US flood assessment. XactGen for AI claims estimating. | Deepest carrier relationships. Regulatory familiarity. Accepted as standard by most DOIs. | Legacy architecture being retrofitted with AI. Slower innovation cycle. Bundled pricing makes it expensive to use only flood components. |
| RMS / Moody's | Cat modeling platform. Acquiring Cape Analytics for AI-powered geospatial property intelligence. | Deep insurer integration. Cape acquisition adds CV-based property assessment. | Cape Analytics acquisition still in progress. Integration timeline unclear. Cape is stronger on wind/wildfire than flood. |
| Neptune Flood | MGA with proprietary Triton underwriting engine. API-first. Palomar partnership for nationwide private flood. | Fastest private flood quote-bind flow. ChatGPT integration for digital distribution. Pure-play flood expertise. | Competitor, not a tool you can license. Their technology stack is proprietary and not available to other carriers. |
| Big 4 / Large SIs | Deloitte, Accenture, EY, PwC offer insurtech advisory and implementation services. | Brand recognition. Large teams. Existing relationships with carrier C-suite. | They implement platforms rather than building custom flood intelligence. An Accenture engagement starts at $2M+ and delivers a vendor selection exercise, not a working scoring engine. No proprietary flood domain expertise. |
The vendor landscape is fragmented by design. ZestyAI sells property scores. ICEYE sells satellite data. Fathom sells hazard layers. Verisk sells cat models. No single vendor has an incentive to build the integration layer that combines competing data sources, because that layer commoditizes their individual product. That integration layer, plus the regulatory documentation to get it approved as a rating plan, is what we build.
Each capability addresses a specific gap in the vendor landscape. We work with the scores and data you already buy, not against them.
We fuse ZestyAI property intelligence, ICEYE SAR monitoring, Fathom/First Street hazard layers, and your claims history into a composite property-level score. The fusion logic weights each source based on geography and peril mix. A coastal Florida property leans heavily on storm surge models and SAR monitoring. An inland Texas property weights pluvial drainage modeling and impervious surface ratios higher.
Output: a single rating factor per property, cached in Guidewire Integration Data Manager or Duck Creek's External Data Call framework, available in under 50ms for inline quote-bind.
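The geography-dependent weighting can be sketched in a few lines. This is an illustrative assumption, not the production fusion logic: the source names, weight values, and the renormalization-on-missing-sources behavior are all hypothetical.

```python
# Hypothetical sketch of geography-aware score fusion. Source keys and
# weights are illustrative assumptions, not the production fusion logic.

GEO_WEIGHTS = {
    # Coastal books lean on surge models and SAR history; inland books
    # weight pluvial drainage factors higher (assumed weightings).
    "coastal": {"zesty_flood": 0.25, "surge_model": 0.35, "sar_history": 0.25, "pluvial": 0.15},
    "inland":  {"zesty_flood": 0.25, "surge_model": 0.05, "sar_history": 0.15, "pluvial": 0.55},
}

def composite_score(source_scores: dict[str, float], geography: str) -> float:
    """Weighted blend of per-source scores (0-100) into one rating factor."""
    weights = GEO_WEIGHTS[geography]
    # Average only over sources actually present; renormalize the weights
    # so a missing vendor feed degrades gracefully instead of failing.
    present = {k: w for k, w in weights.items() if k in source_scores}
    total_w = sum(present.values())
    return sum(source_scores[k] * w for k, w in present.items()) / total_w

coastal = composite_score(
    {"zesty_flood": 60, "surge_model": 80, "sar_history": 70, "pluvial": 40},
    "coastal",
)
inland = composite_score({"zesty_flood": 60, "pluvial": 85}, "inland")
```

The renormalization choice matters operationally: a vendor outage changes which sources contribute, not whether a score is produced.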
Filing an AI-augmented rating algorithm requires actuarial memoranda showing loss ratio lift by decile, feature importance rankings, out-of-sample backtesting against historical events, and disparate impact analysis at census-tract granularity. We produce the complete filing package for each state.
We've mapped requirements across all 50 states. Colorado requires per-variable justification. New York's DFS Circular 2024-7 demands proxy discrimination testing. California requires prior approval with full model documentation. The filing package we produce is tailored to each state's specific requirements, not a one-size-fits-all template.
When a flood event triggers, we activate the pipeline that turns raw ICEYE SAR data into operational claims intelligence. Within hours of the first satellite pass: your TIV at risk is calculated by coverage type, adjusters are routed to SAR-confirmed wet properties only, severity is estimated by combining SAR flood depth with CV-derived first floor elevation, and claims from SAR-confirmed dry locations are flagged for SIU.
The pipeline runs as a managed service during events. Between events, you pay only for the monitoring layer that watches for satellite tasking triggers. Typical adjuster deployment efficiency improvement: 40-60% fewer wasted site visits.
This is the gap most vendors miss. Pluvial flooding (rainfall overwhelming drainage systems) causes the majority of insured flood losses, yet most commercial models focus on riverine and coastal surge. We build property-level pluvial models using LiDAR-derived digital elevation models at 1-meter resolution, CV-estimated impervious surface ratios per parcel, and municipal stormwater infrastructure data (pipe diameter, age, design capacity).
The model answers a specific question: for a given rainfall intensity, how deep does water get at this property's front door? The answer depends on micro-topography within 500 meters, not the FEMA zone.
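A deliberately toy version of that question can be written down directly. The coefficients and the simple runoff balance below are invented for illustration; the real models route water over 1-meter LiDAR DEMs rather than using a bucket calculation like this.

```python
# Toy pluvial ponding model -- purely illustrative. The infiltration rule
# and unit conversions are made-up assumptions to show the shape of the
# calculation, not a hydrodynamic model.

def ponding_depth_cm(rain_mm_hr: float,
                     duration_hr: float,
                     impervious_ratio: float,
                     drain_capacity_mm_hr: float,
                     door_height_above_low_point_cm: float) -> float:
    """Estimated water depth at the front door for a design storm."""
    infiltration_mm_hr = 10.0 * (1.0 - impervious_ratio)  # assumed soil uptake
    excess_mm_hr = max(0.0, rain_mm_hr - drain_capacity_mm_hr - infiltration_mm_hr)
    ponded_cm = excess_mm_hr * duration_hr / 10.0          # mm -> cm
    # Water must first fill the micro-topography below the door sill.
    return max(0.0, ponded_cm - door_height_above_low_point_cm)

# Harris County-style case: 85% impervious lot, drainage sized for a
# 10-year event (~50 mm/hr assumed), slab on grade (door sill at grade).
depth = ponding_depth_cm(rain_mm_hr=90, duration_hr=3,
                         impervious_ratio=0.85,
                         drain_capacity_mm_hr=50,
                         door_height_above_low_point_cm=0)
```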
With 24+ states adopting the NAIC AI Model Bulletin, independent fairness testing of AI-driven pricing is no longer optional. We run disparate impact analyses on your AI-augmented rates against census-tract demographics, identify which input features carry demographic signal (roof condition and impervious surface are the most common), and determine whether the predictive power is actuarially justified independent of the correlation.
The deliverable is the documentation package that satisfies the most demanding standard (New York DFS Circular 2024-7), which means it passes everywhere else. This applies whether you're using our scoring engine or third-party scores from ZestyAI, Cape Analytics, or any other vendor.
Four phases. Phase 1 is a standalone deliverable. If we don't find actionable leakage in your portfolio, you stop there.
We analyze your current book against property-level flood risk data. For each policy, we compare your charged premium against the AI-estimated expected loss. The output is a heat map of mispricing: which geographies, which construction types, which FEMA zones have the widest gap between what you're collecting and what you're paying out.
On a typical $200M written premium P&C book, this analysis reveals $2-5M in annual adverse selection leakage. That number, with property-level detail, is your business case for the remaining phases.
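The core of the diagnostic is a per-policy comparison of charged premium against modeled expected loss, aggregated by segment. A minimal sketch, with invented policy records and expected-loss figures:

```python
# Sketch of the adverse-selection diagnostic. Policies and modeled
# expected losses are fabricated for illustration.

from collections import defaultdict

policies = [
    # (policy_id, fema_zone, charged_premium, modeled_expected_loss)
    ("P1", "X",  450,  1900),   # mispriced Zone X slab-on-grade home
    ("P2", "X",  450,   300),
    ("P3", "AE", 2400, 1500),   # overpriced elevated home inside the zone
    ("P4", "AE", 2100, 2300),
]

leakage_by_zone = defaultdict(float)
for _pid, zone, premium, expected_loss in policies:
    # Positive gap = undercharging (leakage); negative = overcharging.
    leakage_by_zone[zone] += expected_loss - premium
```

Run across a real book, the same aggregation by geography and construction type produces the mispricing heat map described above.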
We build the multi-source scoring engine tuned to your specific book. This means selecting and weighting the data sources that matter for your geographies, training the pluvial micromodels for your key markets, and building the Guidewire or Duck Creek integration with the pre-scoring cache layer.
We validate the model against your historical claims. The test is simple: does the model's risk ranking predict which policies filed flood claims in the last 5 years better than your current rating plan?
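That ranking test is essentially an AUC comparison: does the new score place historical claimants above non-claimants more reliably than the current plan's factor? A brute-force sketch with fabricated scores and outcomes:

```python
# Minimal backtest sketch: compare ranking power of the new composite
# score vs. the current rating factor. AUC is computed by brute-force
# pair comparison; all scores and claim outcomes are invented.

def auc(scores: list[float], had_claim: list[bool]) -> float:
    """Probability a random claimant outranks a random non-claimant."""
    pos = [s for s, c in zip(scores, had_claim) if c]
    neg = [s for s, c in zip(scores, had_claim) if not c]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

had_claim  = [True, True, False, False, False, False]
new_score  = [92, 71, 64, 40, 33, 20]   # hypothetical AI composite score
old_factor = [30, 55, 60, 45, 35, 50]   # hypothetical current-plan relativity
```

A real validation would add decile-level loss ratio lift and out-of-sample splits, but the pass/fail question is the same one this pairwise statistic answers.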
We produce the DOI filing packages for your priority states. Each package includes the actuarial memorandum, the model validation report (backtesting against historical events, out-of-sample testing), the disparate impact analysis, and the explainability documentation showing how the model's rating factors relate to physical flood risk.
Filing timelines vary by state. "File and use" states (most of the Southeast) let you deploy immediately upon filing. "Prior approval" states (California, New York) require examiner review before deployment, which adds 60-120 days.
Go-live on the first renewal cycle with AI-augmented rates. We monitor loss ratio performance, premium adequacy, and policyholder retention. The first renewal cycle is critical: you'll see some policies non-renew as mispriced risks get correct pricing for the first time. The goal is that lost premium from departing high-risk policies is more than offset by reduced claims.
If you're also deploying the SAR claims triage pipeline, we activate it on a parallel track and run a tabletop exercise against a historical event in your portfolio before the next hurricane season.
Answer 8 questions about your current flood underwriting capabilities. Get a scored assessment with specific gaps and next steps for your situation.
The integration challenge is less about the API call and more about the caching and fallback architecture. A raw API call to an external scoring service takes 200-400ms, which eats most of your latency budget for an inline quote. We build a pre-scoring layer that batch-processes your in-force book nightly against the latest satellite imagery and property intelligence feeds, storing scores in Guidewire's Integration Data Manager. When a quote request comes in, the rating engine pulls the cached score in under 50ms.
For new submissions not yet in the cache, we use an asynchronous enrichment pattern: the quote proceeds with a preliminary score based on available FEMA zone and elevation data, then the full AI score back-populates within minutes. The referral queue catches any cases where the preliminary and full scores diverge significantly.
This pattern keeps your quote-bind flow under 500ms while ensuring every policy eventually gets the full multi-source risk assessment. For Duck Creek, the architecture is similar but uses their External Data Call framework instead of Integration Data Manager.
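The cache-hit path and the preliminary-score fallback can be sketched as follows. Every name here is illustrative: this is not the Guidewire Integration Data Manager or Duck Creek API, and the preliminary scoring rule and referral threshold are assumptions.

```python
# Sketch of cached-score lookup with asynchronous enrichment fallback.
# Function names, the fallback rule, and the threshold are all assumed.

score_cache = {"PROP-001": 72.4}  # nightly batch pre-scoring results

def preliminary_score(fema_zone: str, elevation_ft: float) -> float:
    """Cheap fallback from data available at submission time (assumed rule)."""
    base = {"AE": 80.0, "A": 70.0, "X": 40.0}.get(fema_zone, 50.0)
    return max(0.0, base - 2.0 * elevation_ft)

def quote_score(prop_id: str, fema_zone: str, elevation_ft: float) -> tuple[float, bool]:
    """Return (score, is_final): cache hit gives the final score in O(1);
    a miss returns a preliminary score while enrichment runs async."""
    if prop_id in score_cache:
        return score_cache[prop_id], True
    return preliminary_score(fema_zone, elevation_ft), False

def backfill(prop_id: str, prelim: float, full: float,
             referral_threshold: float = 15.0) -> bool:
    """Async enrichment: store the full score; True = route to referral queue."""
    score_cache[prop_id] = full
    return abs(full - prelim) > referral_threshold

score, final = quote_score("PROP-001", "X", 0.0)   # cache hit
prelim, _ = quote_score("PROP-999", "X", 0.0)      # miss -> preliminary
needs_referral = backfill("PROP-999", prelim, 68.0)
```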
ZestyAI's Z-FLOOD score is strong for property-level structural vulnerability, particularly roof condition, building materials, and proximity to water. But it has specific blind spots that matter for flood. First, Z-FLOOD doesn't model municipal drainage capacity. Two properties with identical Z-FLOOD scores can have very different pluvial flood exposure depending on whether the storm drain network in their micro-watershed was designed for a 10-year or 100-year event.
Second, ZestyAI doesn't incorporate real-time SAR monitoring, so you get a static risk score but no event-triggered portfolio alerting. Third, and this is the filing problem: when you submit Z-FLOOD as a rating variable to a state DOI, the examiner asks for the underlying feature importance and loss ratio lift by decile. ZestyAI provides a model card, but in states like Colorado and New York, examiners want to see the analysis run on your specific book, not a generic industry-wide validation.
We build the wrapper that combines ZestyAI property intelligence with ICEYE SAR monitoring, pluvial drainage modeling, and your own claims history into a composite score. Then we produce the DOI filing documentation showing how each component contributes to predictive accuracy on your portfolio specifically.
The NAIC AI Model Bulletin, now adopted in 24+ states, requires insurers to demonstrate that AI-driven pricing doesn't produce unfairly discriminatory outcomes. For flood specifically, the risk is that CV-based property assessments correlate with neighborhood income. A property in a lower-income area might show deferred maintenance, lower roof condition scores, and more impervious surface, all of which legitimately predict flood loss severity but also proxy for protected characteristics.
The analysis starts with a geographic disparity test: we map your AI-augmented rates against census-tract demographics (race, income, age) and compare the rate distributions. If the AI model produces systematically higher rates in majority-minority tracts after controlling for actual flood hazard, that's a flag. Next, we run a feature attribution analysis using SHAP values to identify which input features drive the disparity. Often it's a single variable like roof condition score or impervious surface ratio that carries most of the demographic signal.
The fix isn't to remove the variable. It's to demonstrate that the variable's predictive power for flood loss is actuarially justified independent of its demographic correlation. We produce the documentation package that shows: here's the disparity, here's why it's actuarially justified, and here are the controls we implemented. New York's DFS Circular 2024-7 is the most demanding standard. If your documentation passes New York, it passes everywhere.
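The first screening step, comparing rates across tract groups within the same hazard band so genuine flood hazard is controlled for, can be sketched with toy data. The records, group labels, and the 10% disparity threshold are fabricated; a real run uses census-tract demographics and adds the SHAP attribution step, which is omitted here.

```python
# Sketch of the geographic disparity screen. All data and the flag
# threshold are invented for illustration.

from collections import defaultdict
from statistics import mean

records = [
    # (hazard_band, tract_group, annual_rate)
    ("high", "majority_minority", 1400), ("high", "other", 1350),
    ("high", "majority_minority", 1500), ("high", "other", 1380),
    ("low",  "majority_minority",  600), ("low",  "other",  480),
    ("low",  "majority_minority",  640), ("low",  "other",  500),
]

by_band: dict = defaultdict(lambda: defaultdict(list))
for band, group, rate in records:
    by_band[band][group].append(rate)

flags = {}
for band, groups in by_band.items():
    # Within-band comparison controls for actual flood hazard.
    ratio = mean(groups["majority_minority"]) / mean(groups["other"])
    flags[band] = ratio > 1.10  # assumed 10% disparity threshold
```

A flagged band is not automatically a violation; it is the trigger for the feature-attribution analysis and actuarial-justification documentation described above.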
When a flood event triggers, ICEYE's constellation starts tasking satellites over the affected area. You get the first flood extent map within 12-24 hours of peak inundation, delivered as GIS-compatible shapefiles with 30-meter resolution. Updated extents arrive every 6 hours as additional satellite passes occur.
The triage pipeline we build does four things with this data. First, portfolio overlay: the SAR flood footprint is intersected with your policyholder geocoded addresses to calculate Total Insurable Value at risk, broken down by coverage type and policy limit. Your claims leadership gets this report before the first FNOL call comes in. Second, adjuster routing: field adjusters are dispatched only to SAR-confirmed wet properties, which typically cuts wasted site visits by 40-60%. Third, severity estimation: by combining SAR-derived flood depth at each property with the CV-estimated first floor elevation, we calculate estimated water intrusion depth, which directly maps to damage curves from FEMA's Hazus model.
Fourth, fraud flagging: any FNOL claim from a property that SAR data shows was dry during the event gets automatically routed to SIU. The urban double-bounce problem in SAR data means you get false negatives in dense urban areas, roughly 15% of properties. We handle this with an optical satellite cross-reference when cloud cover clears, typically 48-72 hours post-event. The system runs as a managed service during events and dormant between them, so you're not paying for idle infrastructure.
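The four triage steps reduce to set operations on a rasterized flood footprint. The sketch below stands in grid cells for the real GIS intersection; every record, depth value, and field name is invented for illustration.

```python
# Sketch of the SAR triage overlay: portfolio intersection, severity
# estimation, and dry-claim flagging. All data is fabricated.

flood_cells = {(10, 4), (10, 5), (11, 5)}   # SAR-confirmed wet grid cells

policies = [
    # (policy_id, grid_cell, tiv, ffe_ft)   ffe = first-floor elevation
    ("P1", (10, 4), 350_000, 0.0),
    ("P2", (10, 5), 420_000, 2.0),
    ("P3", (20, 9), 300_000, 0.0),          # dry per SAR
]
sar_depth_ft = {(10, 4): 3.0, (10, 5): 1.5}

# 1) Portfolio overlay: TIV at risk from SAR-confirmed wet properties.
wet = [p for p in policies if p[1] in flood_cells]
tiv_at_risk = sum(p[2] for p in wet)

# 2-3) Severity: intrusion depth = SAR flood depth minus first-floor
# elevation (feeds depth-damage curves such as Hazus).
intrusion = {pid: max(0.0, sar_depth_ft[cell] - ffe)
             for pid, cell, _tiv, ffe in wet}

# 4) Fraud flagging: FNOL from a SAR-dry property -> SIU referral.
fnol_claims = ["P1", "P3"]
siu_flags = [pid for pid in fnol_claims
             if next(p for p in policies if p[0] == pid)[1] not in flood_cells]
```

In production the cell lookup is a true polygon intersection against geocoded addresses, with the optical cross-reference correcting double-bounce false negatives before SIU referral.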
Most commercial flood models, including the vendor scores you can buy today, are fundamentally backward-looking. They train on historical loss data and satellite observations, which means they model the climate that was, not the climate that will be. For a 1-year policy, that's acceptable. For portfolio strategy, reserve adequacy, and reinsurance treaty negotiations, it's a real gap.
The technical answer is physics-informed neural networks. Instead of training purely on historical flood events, a PINN embeds the Saint-Venant equations (conservation of mass and momentum for fluid flow) directly into the loss function. This means the model can't predict water appearing without a source or flowing uphill. When you feed it a synthetic rainfall scenario that exceeds anything in the historical record, the physics constraints keep the output physically plausible.
Swiss Re's integration of Fathom data into 50,000-year probabilistic event sets is the industry moving in this direction. We build property-level surrogate models that approximate full hydrodynamic simulations in milliseconds. These aren't production-ready for real-time rating today. But they're essential for catastrophe scenario analysis, reserve adequacy testing, and reinsurance submissions where you need to demonstrate your portfolio's exposure to events that haven't happened yet. We use them alongside the vendor scores: ZestyAI for today's risk, physics-informed models for tomorrow's.
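To make the physics-constraint idea concrete, here is a toy residual for the 1D continuity (mass conservation) part of the Saint-Venant system, dh/dt + d(hu)/dx = 0, discretized with finite differences. In a PINN this mean-squared residual is added to the data loss so the network cannot predict water appearing without a source. The grids and field values are toy inputs; no network or training loop is shown.

```python
# Illustrative physics residual for a PINN-style loss term. Toy values;
# a real implementation evaluates this on the network's predicted fields.

def continuity_residual(h_t0, h_t1, u, dx, dt):
    """Mean-squared residual of 1D mass conservation on interior points."""
    n = len(h_t0)
    res_sq = 0.0
    for i in range(1, n - 1):
        dh_dt = (h_t1[i] - h_t0[i]) / dt
        q_right = h_t0[i + 1] * u[i + 1]     # discharge q = h * u
        q_left = h_t0[i - 1] * u[i - 1]
        dq_dx = (q_right - q_left) / (2 * dx)
        res_sq += (dh_dt + dq_dx) ** 2
    return res_sq / (n - 2)

# Uniform depth, uniform velocity, no depth change -> residual of zero.
flat = continuity_residual([1.0]*5, [1.0]*5, [0.5]*5, dx=1.0, dt=1.0)

# Depth rising everywhere with no inflow gradient violates conservation,
# so the penalty is nonzero and training would push away from it.
bad = continuity_residual([1.0]*5, [1.5]*5, [0.5]*5, dx=1.0, dt=1.0)
```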
A typical engagement runs 16-24 weeks across four phases. Phase 1 (3-4 weeks) is the portfolio diagnostic: we analyze your current book, identify where your pricing deviates from property-level risk, and quantify the adverse selection exposure. This phase typically reveals $2-5M in annual leakage on a $200M written premium book, which funds the rest of the engagement.
Phase 2 (6-8 weeks) is model development: building the multi-source scoring engine, pluvial micromodels for your key geographies, and the Guidewire/Duck Creek integration. Phase 3 (4-6 weeks) is regulatory preparation: disparate impact analysis, actuarial memoranda, and DOI filing packages for your priority states. Phase 4 (3-6 weeks) is production deployment and the first renewal cycle with AI-augmented rates.
Budget depends on scope. A focused engagement covering one state and one peril (private flood in Florida, for example) runs $350K-$500K. A multi-state, multi-peril program covering flood, wind, and wildfire with full DOI filing support runs $800K-$1.5M. For MGAs, the numbers are typically lower because the book is smaller and you're filing in fewer states. We structure engagements so Phase 1 is a standalone deliverable. If the portfolio diagnostic doesn't find actionable leakage, you stop there.
The technical foundations behind this solution page.
Technical architecture for CV-based FFE extraction, SAR flood monitoring pipelines, and physics-informed neural networks for hydrodynamic simulation in insurance underwriting.
The portfolio diagnostic takes 3-4 weeks and pays for itself by identifying the mispriced Zone X properties hiding in your book. If we don't find actionable leakage, you stop there. If we do, the business case for property-level AI scoring writes itself.