Legacy Mainframe Modernization

Your COBOL Still Runs 95% of ATM Transactions. The Developers Who Wrote It Are Retiring.

70-80% of mainframe modernization projects fail. Not because the technology is wrong, but because the tools treat code as text instead of topology. We build the map of your codebase before touching a single line, so your migration succeeds where others have burned through millions and delivered nothing.

$1.52 Trillion

U.S. Technical Debt

Pragmatic Coders, 2025

10%/Year

COBOL Workforce Attrition

IEEE Spectrum, 2025

70-80%

Modernization Failure Rate

Industry Meta-Analysis, 2025

Why AI Code Translation Fails on Mainframes

The tools that promise "paste COBOL, get Java" produce code that compiles. That is the easy part. The hard part is the code they cannot see.

The REDEFINES Problem: A Real Migration Failure Pattern

Consider a wire transfer processing program. It contains a COMPUTE statement using a variable called TRN-LIMIT. An AI coding assistant translates the statement into a Java BigDecimal operation. The code compiles. Unit tests pass.

In UAT, the first transaction crashes the database consistency check.

The autopsy: TRN-LIMIT was not defined in the source file the AI translated. It was defined in a copybook pulled in thousands of lines earlier in the compilation unit's inclusion chain. That copybook contained a REDEFINES clause, a COBOL construct that lets the same memory address be interpreted as two different data types depending on a flag set in a completely different module.

The AI saw TRN-LIMIT as a simple numeric field and assumed a standard integer type. On the mainframe, that memory address held a packed decimal (COMP-3). The Java application wrote corrupted binary data into the database column, triggering a referential integrity failure.

The code was syntactically perfect. The failure was contextual blindness. The AI missed a dependency that existed outside its field of vision.

HIDDEN DEPENDENCY

Copybook Chains

A single COBOL program may reference 40+ copybooks. Each copybook may COPY other copybooks. Variable definitions can be 5 levels deep in the inclusion chain. Text-based AI tools see none of this.

INVISIBLE LAYER

JCL Job Networks

Your COBOL does not run standalone. CA-7 or TWS schedules 2,000-5,000 JCL jobs with dependency chains. Job A writes a dataset that Job B reads at 2 AM. Migrate the COBOL but miss the JCL, and production breaks at midnight.

ARITHMETIC TRAP

Packed Decimal Math

COBOL's COMP-3 packed decimal has no native Java primitive equivalent. A Java double introduces floating-point rounding errors. Even BigDecimal requires an explicitly configured rounding mode to match COBOL's ROUNDED clause: HALF_UP for the default half-away-from-zero behavior, HALF_EVEN where ROUNDED MODE IS NEAREST-EVEN is used. One wrong penny compounds across millions of transactions.
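The rounding trap can be reproduced in a few lines. This is an illustrative sketch, not client code: it shows binary floating point drifting on a repeated 0.10 addition while BigDecimal stays exact, with the rounding mode made explicit. (COBOL's default ROUNDED rounds half away from zero, Java's HALF_UP; HALF_EVEN corresponds to ROUNDED MODE IS NEAREST-EVEN.)

```java
import java.math.BigDecimal;
import java.math.RoundingMode;

// Illustrative sketch: why COMP-3 amounts need BigDecimal, not double.
// Class and method names are hypothetical, not from any real migration.
public class PackedDecimalMath {

    // Naive translation: binary floating point cannot represent 0.10
    // exactly, so repeated additions drift away from the COBOL result.
    public static double sumWithDouble(int times) {
        double total = 0.0;
        for (int i = 0; i < times; i++) total += 0.10;
        return total;
    }

    // Faithful translation: exact decimal arithmetic, rounded the way the
    // COBOL source rounds. HALF_UP matches the default ROUNDED phrase;
    // HALF_EVEN would match ROUNDED MODE IS NEAREST-EVEN.
    public static BigDecimal sumWithBigDecimal(int times) {
        BigDecimal total = BigDecimal.ZERO;
        BigDecimal step = new BigDecimal("0.10");
        for (int i = 0; i < times; i++) total = total.add(step);
        return total.setScale(2, RoundingMode.HALF_UP);
    }
}
```

Adding 0.10 a thousand times with double does not yield exactly 100.0; the BigDecimal version does, to the penny.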

The Modernization Landscape in 2026

Every major technology vendor is now selling mainframe modernization. Here is what each actually delivers, and where the gaps remain.

IBM Watsonx Code Assistant for Z
  • What it does: Agentic COBOL-to-Java translation with ADDI dependency analysis. Multi-agent architecture (Orchestrate, Architect, Code agents). Supports PL/I and IMS.
  • Typical cost: $2M+ (enterprise licensing + z/OS requirement)
  • What it misses: ADDI runs on z/OS, creating vendor lock-in during migration. The parser struggles with pre-85 COBOL constructs (ALTER statements). No behavioral equivalence testing. No JCL dependency mapping.

Anthropic Claude Code
  • What it does: AI-powered code analysis, documentation, and dependency mapping. Strong in the discovery and exploration phases. Supports incremental migration and API wrapping.
  • Typical cost: Usage-based API pricing
  • What it misses: General-purpose AI. No built-in knowledge graph for transitive dependency resolution. Does not address JCL scheduling, behavioral equivalence testing, or regulatory audit trails.

Microsoft Azure Migration Factory
  • What it does: Modular migration agents built on Semantic Kernel (COBOL Expert + Java Expert agents). Targets Java Quarkus. Azure Copilot migration agent in preview.
  • Typical cost: Azure consumption + consulting
  • What it misses: Azure lock-in for the target platform. The open-source agents are reference implementations, not production-ready for regulated environments. Limited CICS/IMS support.

DXC Technology
  • What it does: Patented automatic code conversion (COBOL/RPG/JCL to Java). Decades of mainframe expertise. Hybrid cloud + mainframe-as-a-service models.
  • Typical cost: $1M-$10M+
  • What it misses: Proprietary tooling with limited transparency into conversion logic. Large-enterprise focus. Engagement timelines of 18-36 months are common.

TCS / Infosys / Accenture
  • What it does: Large system-integrator practices with proprietary frameworks (MasterCraft, Cobalt). Massive delivery teams. End-to-end program management.
  • Typical cost: $500K-$5M+
  • What it misses: Platform-centric approach: they implement vendor tools rather than build custom intelligence. Overhead of the large-SI engagement model. One SI led a $1B AUD bank migration that took 5 years and doubled its budget.

Micro Focus (OpenText) Visual COBOL
  • What it does: Runs COBOL natively on .NET/JVM. A pragmatic "strangler fig" starting point. COBOL compiler market leader.
  • Typical cost: $200K-$500K licensing
  • What it misses: This is rehosting, not modernization. The COBOL logic stays COBOL, the technical debt persists, and the workforce problem remains unsolved.

DIY with open-source AI
  • What it does: XMainframe LLM (7B/10.5B params, 30% better than DeepSeek on COBOL benchmarks). Tree-sitter parsing. Custom pipelines.
  • Typical cost: Engineering time + infrastructure
  • What it misses: Requires deep COBOL and graph-engineering expertise. No production-grade COBOL parser covers all IBM Enterprise COBOL v6.x constructs, so parser gaps risk being baked into the foundation.
Honest caveat: No tool, including ours, solves organizational buy-in, data quality problems, or the political challenge of convincing 200 developers to change how they work. Technology is necessary but not sufficient. If your organization lacks executive sponsorship for modernization, start there before engaging any vendor.

What We Build

Five capabilities, each addressing a specific gap in the modernization toolchain. We are vendor-neutral: the knowledge graph works regardless of whether your target is AWS, Azure, GCP, or on-premises Java.

Codebase Knowledge Graph

We ingest your COBOL source, copybooks, JCL libraries, DB2 catalog exports, CICS transaction definitions, and IMS segment hierarchies into a unified graph database. Every variable, every PERFORM chain, every REDEFINES clause, every batch dependency becomes an explicit graph edge. We reach for Neo4j when complex transitive closure queries dominate the use case, Memgraph when real-time traversal speed matters for interactive exploration.

The graph processes roughly 200K-300K lines per day during ingestion. For a 2M LOC codebase, expect 8-12 weeks from first ingest to validated, queryable graph. The output is a permanent asset: your codebase as searchable topology, not opaque text files.

Migration Risk Assessment and Extraction Sequencing

Before any code translation begins, we run graph analysis across four dimensions: coupling score (how many other modules depend on this one), REDEFINES/COMP-3 density (how many data type traps exist), dead code percentage (typically 20-30% of the codebase), and batch scheduling criticality (which JCL jobs touch this module and when).

The output is a ranked extraction sequence for strangler fig migration. Modules with the lowest coupling and simplest data types extract first. "God programs" called by 50+ other modules extract last. This sequencing is the difference between a controlled rollout and a cascade failure.
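As a minimal sketch of the sequencing idea, assume coupling has already been summarized as a per-module count of inbound dependencies; the module names and counts below are hypothetical:

```java
import java.util.List;
import java.util.Map;

// Illustrative sketch of extraction sequencing: rank modules by inbound
// dependency count (coupling score), lowest first. In practice the score
// combines coupling with REDEFINES/COMP-3 density, dead code percentage,
// and batch criticality; this sketch uses coupling alone.
public class ExtractionSequencer {
    public static List<String> sequence(Map<String, Integer> inboundCount) {
        return inboundCount.entrySet().stream()
                .sorted(Map.Entry.comparingByValue()) // fewest dependents first
                .map(Map.Entry::getKey)
                .toList();
    }
}
```

A module with 2 inbound edges extracts before a "god program" with 50+.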

Graph-Augmented Code Translation

Our translation agents query the knowledge graph before generating each Java module, pulling the full transitive closure of dependencies. The agent sees the REDEFINES clause in the copybook three directories away. It sees the packed decimal definition that determines rounding behavior. It generates Java with explicit parameter passing (dependency injection) instead of COBOL's implicit global state. Then it compiles in a sandbox, runs behavioral equivalence tests, and self-corrects.

We use whichever foundation model fits the module's complexity. For straightforward PERFORM-to-method conversions, a smaller model works fine. For modules with nested REDEFINES, GOTO spaghetti requiring control flow flattening, or EXEC CICS embedded transaction logic, we bring in the most capable model available and augment it with the full graph context.

Behavioral Equivalence Test Harness

The part most vendors skip and most migrations fail on. We build a three-layer validation system: symbolic unit tests generated from graph-derived control flow paths, golden dataset replay using captured production transactions compared field-by-field with penny-perfect accuracy, and parallel production runs where both systems process live transactions for 30-90 days before the mainframe module is decommissioned.

Financial calculations require BigDecimal with an explicitly configured rounding mode that matches the source's ROUNDED clause: HALF_UP for the default half-away-from-zero behavior, HALF_EVEN where ROUNDED MODE IS NEAREST-EVEN is used. Date calculations require handling COBOL's 6-digit date format (YYMMDD) with century windowing logic. We build these conversion rules into the test harness, not into ad-hoc patches discovered during QA.
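Century windowing can be sketched as follows. The pivot year of 50 is an assumption for illustration; real systems configure the window per field, and the class name is hypothetical:

```java
import java.time.LocalDate;

// Illustrative sketch of century windowing for COBOL 6-digit YYMMDD dates.
// The pivot (50) is an assumed configuration value: two-digit years below
// the pivot map to 20xx, years at or above it map to 19xx.
public class CenturyWindow {
    private static final int PIVOT = 50;

    public static LocalDate fromYYMMDD(String yymmdd) {
        int yy = Integer.parseInt(yymmdd.substring(0, 2));
        int mm = Integer.parseInt(yymmdd.substring(2, 4));
        int dd = Integer.parseInt(yymmdd.substring(4, 6));
        int century = (yy < PIVOT) ? 2000 : 1900;
        return LocalDate.of(century + yy, mm, dd);
    }
}
```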

Batch Scheduling Migration

We parse your JCL job networks, CA-7/TWS/Control-M dependency chains, and batch processing sequences into the knowledge graph. Each JCL job becomes a node with edges to the COBOL programs it executes, the datasets it reads and writes, and the scheduling conditions it depends on (time triggers, dataset availability, predecessor completion).

When a COBOL module migrates to Java, we simultaneously build the equivalent scheduling in your target orchestration platform (Apache Airflow, AWS Step Functions, Azure Data Factory, or Control-M on distributed). The dependency chain is preserved and verified against the original CA-7/TWS definition. A typical mid-tier bank has 2,000-5,000 JCL jobs. We have seen them all.

How the Knowledge Graph Resolves a REDEFINES Chain

A step-by-step walkthrough of how graph-based dependency resolution prevents the most common migration failure pattern.

1

Parser Ingests Source and Copybooks

The parser processes PROG-WIRE-01.cbl, encounters COPY CB-ACCT-LIMITS, and follows the inclusion chain. It builds AST nodes for every variable declaration, including those in copybooks nested 3 levels deep.

* In CB-ACCT-LIMITS.cpy:
01 ACCT-LIMIT-RECORD.
05 TRN-LIMIT PIC S9(9)V99 COMP-3.
05 TRN-LIMIT-ALPHA REDEFINES TRN-LIMIT PIC X(6).
05 LIMIT-TYPE-FLAG PIC X.
2

Graph Creates Relationship Edges

The graph engine creates edges: PROG-WIRE-01 → IMPORTS → CB-ACCT-LIMITS. TRN-LIMIT → REDEFINES → TRN-LIMIT-ALPHA. LIMIT-TYPE-FLAG → CONTROLS_TYPE_OF → TRN-LIMIT. This captures the fact that the data type of TRN-LIMIT depends on a runtime flag in a different field.
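A minimal in-memory sketch of that edge model follows. A production graph lives in Neo4j or Memgraph; the class here is hypothetical, but the node and edge labels mirror the walkthrough:

```java
import java.util.ArrayList;
import java.util.List;

// Minimal sketch of the typed-edge model described above. Real deployments
// store these edges in a graph database and query them with graph traversals.
public class DependencyGraph {
    public record Edge(String from, String type, String to) {}
    private final List<Edge> edges = new ArrayList<>();

    public void addEdge(String from, String type, String to) {
        edges.add(new Edge(from, type, to));
    }

    // All targets reachable from a node along edges of one type.
    public List<String> outgoing(String from, String type) {
        return edges.stream()
                .filter(e -> e.from().equals(from) && e.type().equals(type))
                .map(Edge::to)
                .toList();
    }
}
```

Loading the three edges from the walkthrough (IMPORTS, REDEFINES, CONTROLS_TYPE_OF) makes the flag-to-type dependency an explicit, queryable fact instead of implicit COBOL semantics.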

3

Transitive Closure Reveals Full Impact

The graph traverses outward: which other programs also COPY CB-ACCT-LIMITS? Which programs set LIMIT-TYPE-FLAG? Which JCL jobs execute those programs, and in what sequence? The result is a complete impact chain. Changing how TRN-LIMIT is translated affects every program in this chain.

4

Translation Agent Gets Full Context

When the translation agent processes PROG-WIRE-01, GraphRAG retrieves not just the source file but the copybook definition, the REDEFINES relationship, the flag field, and all programs that set the flag. The agent generates a Java class with a type-safe union pattern: a TransactionLimit object that checks the flag before interpreting the underlying bytes as either a BigDecimal (packed decimal mode) or a String (alpha mode).

Without the graph: the AI assumes TRN-LIMIT is a simple numeric field, generates a long in Java, and the first wire transfer corrupts the database. With the graph: the AI sees the full dependency chain and generates type-safe code that handles both interpretations correctly. This is the difference between a migration that works in UAT and one that works in production.
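The type-safe union pattern from step 4 might look like the following sketch. The class and field names are illustrative, and the flag convention ('N' for numeric mode) is an assumption:

```java
import java.math.BigDecimal;

// Hypothetical sketch of the type-safe union the graph context enables.
// LIMIT-TYPE-FLAG becomes an explicit discriminator, so the two REDEFINES
// interpretations can never be confused at runtime.
public class TransactionLimit {
    private final char limitTypeFlag;      // mirrors LIMIT-TYPE-FLAG
    private final BigDecimal numericLimit; // packed-decimal (COMP-3) view
    private final String alphaLimit;       // REDEFINES PIC X(6) view

    public TransactionLimit(char flag, BigDecimal numeric, String alpha) {
        this.limitTypeFlag = flag;
        this.numericLimit = numeric;
        this.alphaLimit = alpha;
    }

    // Assumed convention: 'N' marks numeric mode; anything else is alpha.
    public boolean isNumeric() { return limitTypeFlag == 'N'; }

    public BigDecimal asAmount() {
        if (!isNumeric())
            throw new IllegalStateException("TRN-LIMIT is in alpha mode");
        return numericLimit;
    }

    public String asAlpha() {
        if (isNumeric())
            throw new IllegalStateException("TRN-LIMIT is in numeric mode");
        return alphaLimit;
    }
}
```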

How We Work

Four phases, each with clear deliverables. We do not quote a 3-year timeline and disappear. Each phase produces artifacts you own and can use independently.

PHASE 1 / 4-6 WEEKS

Assessment and Discovery

  • Source code export from z/OS (COBOL, JCL, copybooks, DB2 DDL)
  • COBOL dialect identification (IBM Enterprise v4/v5/v6, Micro Focus, Fujitsu)
  • Dead code scan (typical result: 20-30% of LOC unreachable)
  • MIPS consumption analysis by program
  • Preliminary extraction sequence with coupling scores

Deliverable: Assessment Report + preliminary knowledge graph prototype

PHASE 2 / 8-12 WEEKS

Knowledge Graph Construction

  • Full codebase ingestion with custom parser extensions for your dialect
  • Entity resolution across all copybooks, DB2 schemas, CICS definitions
  • JCL job network mapping with CA-7/TWS dependency chains
  • Transitive closure calculation with completeness validation
  • Interactive query interface ("What breaks if I change this variable?")

Deliverable: Queryable knowledge graph + ranked extraction sequence + impact analysis tool

PHASE 3 / ONGOING (STRANGLER FIG)

Incremental Migration

  • Module-by-module translation following the extraction sequence
  • Graph-augmented AI translation with full dependency context
  • Behavioral equivalence testing per module (golden dataset + parallel run)
  • Batch scheduling migration for each extracted module
  • MIPS reduction tracking (typical: 20-30% in Year 1)

Deliverable: Migrated Java modules in production + updated knowledge graph + scheduling equivalents

PHASE 4 / PER MODULE

Validation and Decommission

  • 30-90 day parallel production run per module
  • Differential output comparison with penny-perfect financial validation
  • Regulatory documentation (audit trail, change control, SOC 2 evidence)
  • Mainframe module decommission after sign-off
  • Knowledge graph update to reflect new architecture

Deliverable: Validated production deployment + regulatory documentation package + updated graph

Timeline caveat: These are typical ranges for a mid-tier institution (1-5M LOC). Larger codebases, multiple COBOL dialects, or heavy CICS usage extend Phase 2. We scope precisely after Phase 1 assessment.

Mainframe Modernization Readiness Assessment

Answer seven questions about your environment. The assessment identifies your readiness level and specific blockers to address before starting a migration engagement, with or without Veriprajna.

1. How many lines of COBOL are in active production?

2. Which COBOL dialect does your environment use?

3. Do you have up-to-date documentation of your batch job dependencies?

4. How many COBOL-skilled developers do you currently employ?

5. What regulatory frameworks apply to your mainframe systems?

6. Have you attempted a modernization project before?

7. Does the board or C-suite actively sponsor modernization?

Questions We Hear from CTOs and VP Engineering

How long does it take to build a knowledge graph for a 2-million-line COBOL codebase?

For a 2M LOC codebase with typical complexity (IBM Enterprise COBOL v6.x, DB2 embedded SQL, 500+ copybooks), graph construction takes 8 to 12 weeks. The first 3 weeks are parser configuration and validation. COBOL dialects vary enough that we need to verify the parser handles your specific use of REDEFINES, OCCURS DEPENDING ON, and EXEC CICS/SQL blocks before ingesting the full codebase.

Weeks 4 through 8 are automated ingestion, entity extraction, and relationship mapping. The parser processes roughly 200K-300K lines per day, but the bottleneck is entity resolution, specifically determining that ACCT-NUM in Program A and ACCT-NUM in Copybook CB-ACCT-01 are the same variable.

Weeks 9 through 12 are transitive closure calculation and validation. We run graph completeness checks: every PERFORM target must resolve to a paragraph, every COPY statement must resolve to a copybook, every DB2 table reference must map to a schema definition. Gaps get flagged for manual review. The output is a queryable knowledge graph where you can ask questions like "What happens if I change INTEREST-RATE in CB-GLOBAL-01?" and get a complete impact chain across every program that references it, directly or transitively.
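At its core, that impact query is a breadth-first walk over reversed dependency edges: start at the changed element and collect everything that references it, directly or transitively. A simplified sketch, with hypothetical module names:

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.LinkedHashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;

// Sketch of the "what breaks if I change this?" query: breadth-first
// traversal of a reversed dependency map. In production this is a graph
// database traversal, not an in-memory map.
public class ImpactAnalysis {
    // dependents.get(x) = elements that directly reference x
    public static Set<String> impactOf(String changed,
                                       Map<String, List<String>> dependents) {
        Set<String> impacted = new LinkedHashSet<>();
        Deque<String> frontier = new ArrayDeque<>(List.of(changed));
        while (!frontier.isEmpty()) {
            String node = frontier.poll();
            for (String dep : dependents.getOrDefault(node, List.of())) {
                if (impacted.add(dep)) frontier.add(dep); // visit each once
            }
        }
        return impacted;
    }
}
```

Changing a variable thus surfaces not just the copybook that defines it but every program, and every JCL job, downstream of it.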

Can we modernize incrementally instead of doing a full rewrite?

Yes, and we strongly recommend it. The strangler fig pattern is the only approach with a proven track record for mainframe migration. Full rewrites fail 70-80% of the time because they attempt to replace everything simultaneously, creating a single massive point of failure.

With the strangler fig approach, the knowledge graph identifies which modules have the lowest coupling scores, meaning fewest inbound dependencies from other modules. These are your extraction candidates. We typically start with batch reporting modules or standalone calculation routines that read from DB2 but do not update shared state. The new Java service runs alongside the mainframe. Production traffic gets routed to the new service for that specific function while the mainframe continues handling everything else. You validate behavioral equivalence on real production data before decommissioning the COBOL module.

Most organizations extract 15 to 20 modules in the first year, reducing MIPS consumption by 20-30% and generating enough cost savings to fund the next phase. The knowledge graph makes this safe because it shows you the blast radius of each extraction. If Module A is called by 47 other programs, that is not your first extraction candidate. If Module B is called by 2 programs and reads from 1 DB2 table, start there.

How do you handle JCL batch dependencies that most AI tools ignore?

This is the layer where most modernization projects hit unexpected failures 6 to 12 months in. Your COBOL programs do not run in isolation. They are orchestrated by JCL job streams managed by CA-7, TWS (Tivoli Workload Scheduler), or Control-M. A typical mid-tier bank has 2,000 to 5,000 JCL jobs with complex dependency chains: Job A must complete before Job B starts, Job C runs only on the last business day of the month, Job D triggers a CICS transaction that updates a VSAM file read by Job E.

We parse JCL alongside COBOL into the same knowledge graph. Each JCL job becomes a node with edges to the COBOL programs it executes, the datasets it reads and writes, and the scheduling conditions it depends on. When we migrate a COBOL module to Java, we simultaneously build the equivalent scheduling in your target platform, whether that is Apache Airflow, AWS Step Functions, or Azure Data Factory. The dependency chain is preserved and verified against the original.

We have seen projects where the code migration succeeded perfectly but production broke because nobody mapped the CA-7 job that ran a pre-processing step every night at 2 AM.

What makes your approach different from IBM Watsonx Code Assistant for Z?

IBM Watsonx Code Assistant for Z (currently v2.8.20, with Project Bob coming later in 2026) is a strong product with deep mainframe integration. It requires IBM ADDI (Application Discovery and Delivery Intelligence) to build its dependency analysis, and ADDI runs on z/OS. This means your dependency analysis tooling lives on the same mainframe you are trying to migrate away from. It also means IBM controls the analysis layer, which creates vendor lock-in during the most critical phase of migration.

First, our knowledge graph runs off-mainframe. We ingest source code exports, JCL libraries, DB2 catalog exports, and copybook repositories; the graph lives in your cloud environment or on-premises infrastructure, independent of IBM licensing. Second, Watsonx focuses on COBOL-to-Java translation, while we focus on understanding first and translation second. The knowledge graph is a permanent asset that serves impact analysis, documentation generation, and architectural governance long after migration is complete.

Third, ADDI's COBOL parser has documented limitations with pre-85 COBOL constructs, particularly ALTER statements and certain nested REDEFINES patterns. We build custom parser extensions for each client's dialect.

Finally, IBM's pricing targets large enterprises. Our engagement model works for mid-tier institutions where a $2M+ IBM engagement is not in the budget.

How do you prove the Java code behaves identically to the COBOL original?

Behavioral equivalence is where most AI-assisted migrations fall apart. Code that compiles and passes unit tests can still produce wrong results because of packed decimal rounding differences, EBCDIC-to-ASCII encoding mismatches, or REDEFINES memory overlay semantics that do not translate to Java objects.

We build a three-layer validation harness. Layer 1 is symbolic equivalence: we generate unit tests from the knowledge graph that cover every branch in the original COBOL control flow, including edge cases like negative amounts, zero-division guards, and leap-year date calculations. Layer 2 is golden dataset replay: we capture a representative set of production transactions from the mainframe (input records, DB2 reads, CICS interactions) and replay them through the new Java service. Outputs are compared field-by-field. For financial calculations, we verify penny-perfect accuracy using BigDecimal with a rounding mode matched to the source's ROUNDED clause (HALF_UP for the default behavior, HALF_EVEN for NEAREST-EVEN).

Layer 3 is parallel production run: both systems process the same live transactions simultaneously for 30 to 90 days. Discrepancies are logged, investigated, and fixed before the mainframe module is decommissioned. This is the longest phase, but it is also the phase that catches the edge cases accumulated over 30 years of production that no test suite can fully anticipate.
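The field-by-field comparison in the golden dataset replay can be sketched as a map diff. Field names below are hypothetical; note that monetary fields compare by numeric value (compareTo), since BigDecimal.equals treats 100.0 and 100.00 as different:

```java
import java.math.BigDecimal;
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.Set;

// Sketch of field-by-field golden-dataset comparison. Records are modeled
// as field-name -> string-value maps for illustration only.
public class GoldenDatasetDiff {
    public static List<String> diff(Map<String, String> mainframe,
                                    Map<String, String> javaOutput,
                                    Set<String> moneyFields) {
        List<String> mismatches = new ArrayList<>();
        for (String field : mainframe.keySet()) {
            String expected = mainframe.get(field);
            String actual = javaOutput.get(field);
            if (actual == null) {
                mismatches.add(field + ": missing in Java output");
                continue;
            }
            // Money compares by value; everything else compares verbatim.
            boolean equal = moneyFields.contains(field)
                    ? new BigDecimal(expected).compareTo(new BigDecimal(actual)) == 0
                    : expected.equals(actual);
            if (!equal) mismatches.add(field + ": " + expected + " != " + actual);
        }
        return mismatches;
    }
}
```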

What does DORA mean for our mainframe systems, and does modernization help with compliance?

DORA (Digital Operational Resilience Act) has been fully in force since January 17, 2025, and it directly impacts any EU-regulated financial entity running mainframe systems. Article 11 requires ICT risk management frameworks that include regular resilience testing and threat-led penetration testing based on real-world attack scenarios. Most mainframe environments were not designed for this kind of testing. You cannot easily spin up a replica z/OS environment to run penetration tests without significant licensing and infrastructure costs.

DORA also requires detailed ICT asset inventories, incident reporting within specific timeframes, and third-party risk management for critical ICT service providers, which includes your mainframe vendor. Modernization helps in two ways. First, the knowledge graph itself serves as the ICT asset inventory that DORA requires. It maps every program, every data flow, every external dependency. Regulators can query it directly.

Second, migrated components running on cloud infrastructure are inherently easier to resilience-test. You can spin up test environments on demand, run chaos engineering scenarios, and validate recovery procedures without affecting production. We have seen institutions use the knowledge graph as evidence in regulatory examinations to demonstrate they understand their technology estate, even before migration is complete.

Technical Research

The methodology behind this solution page is grounded in our published research on legacy modernization and knowledge graph architectures.

The Architecture of Understanding: Beyond Syntax in Enterprise Legacy Modernization

How repository-aware knowledge graphs and GraphRAG overcome the "Lost in the Middle" syndrome that causes AI code translation to fail on enterprise COBOL systems.

Your Mainframe Costs $1,000-$2,000 Per MIPS Per Year. We Can Map Exactly Which MIPS to Eliminate First.

A 20-30% MIPS reduction in Year 1 typically saves $500K-$2M annually for a mid-tier institution.

The knowledge graph assessment takes 4-6 weeks. You get a complete dependency map of your codebase, a dead code report, and a ranked extraction sequence, whether you proceed with migration or not. The assessment itself is a permanent asset.

Codebase Assessment

  • ✓ Knowledge graph prototype of your COBOL estate
  • ✓ Dead code identification (typically 20-30% of LOC)
  • ✓ MIPS consumption analysis by program
  • ✓ Ranked module extraction sequence with coupling scores

Full Migration Engagement

  • ✓ Complete knowledge graph with JCL/DB2/CICS coverage
  • ✓ Incremental migration via strangler fig pattern
  • ✓ Behavioral equivalence testing per module
  • ✓ Regulatory documentation and audit trail