
The Architectural Imperative of AI Supply Chain Integrity: Securing the Machine Learning Lifecycle Against Malicious Models and Shadow Deployments

The rapid integration of machine learning models into enterprise environments has outpaced the development of robust security frameworks, creating a systemic vulnerability at the heart of modern digital infrastructure. While the market has largely focused on the capabilities of Large Language Model (LLM) wrapper services, the reality of deep AI engineering requires a fundamental shift in how organizations perceive and mitigate supply chain risks. The discovery by JFrog security researchers in February 2024 of more than 100 malicious models on the Hugging Face platform, many of which contained backdoors for arbitrary code execution, serves as a watershed moment for the industry.1 This incident, combined with the NVIDIA AI Red Team's findings on the extreme sensitivity of fine-tuned models to data poisoning, demonstrates that the "Deep AI" stack is currently the most vulnerable and least governed component of the corporate technological landscape.3

As organizations transition from experimental use of public APIs to the deployment of self-hosted, fine-tuned, or proprietary models, they inherit a supply chain that is significantly more opaque than traditional software.6 Unlike conventional code, which can be scrutinized for logic flaws, AI model weights are essentially binary blobs—opaque structures where malicious behavior can be hidden within millions of parameters.4 The complexity of this supply chain is further exacerbated by the rise of "Shadow AI," where developers and business units pull unvetted models from public repositories to bypass perceived bureaucratic bottlenecks, often unwittingly introducing persistent backdoors into production environments.9 Despite the release of the NIST AI 100-2 (2024) guidance on adversarial machine learning, adoption remains critically low, and, as the adoption data later in this paper shows, the overwhelming majority of enterprises lack the automated controls necessary to secure their machine learning lifecycles.12

The Hugging Face Incident and the Vulnerability of Public Repositories

The February 2024 discovery by JFrog's security research team highlighted the inherent risks of treating machine learning hubs like Hugging Face as "trusted" sources.2 The investigation uncovered approximately 100 machine learning models harboring malicious payloads designed to grant attackers remote access to user systems.2 These models were not merely malfunctioning; they were weaponized artifacts. A specific example involved a PyTorch model uploaded by a user named "baller423," which exploited the Python pickle serialization format to execute arbitrary code during deserialization.2 When a data scientist or developer loaded this model using standard framework commands like torch.load(), the malicious payload executed immediately, establishing a reverse shell to a remote IP address belonging to the Korea Research Environment Open Network (KREONET).1

This incident underscores a critical misunderstanding of model file formats. The industry has traditionally relied on the pickle format due to its flexibility in serializing complex Python objects, yet this flexibility is its primary security flaw.2 The pickle module essentially implements a stack-based virtual machine that can be manipulated to execute arbitrary Python functions, such as os.system() or subprocess.run(), during the unpickling process.16
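The mechanism is easy to demonstrate. The following minimal sketch (plain Python, harmless payload) shows how an object's __reduce__ hook executes arbitrary code the moment a pickle stream is deserialized; the commented torch.load() call notes the weights_only restriction available in recent PyTorch releases as a mitigation.

```python
import pickle

# Why pickle is unsafe: any object can define __reduce__, and whatever
# callable it returns is invoked during unpickling.
class NotAModel:
    def __reduce__(self):
        # A real payload would return something like (os.system, ("<shell>",));
        # here the callable is a harmless print to demonstrate the mechanism.
        return (print, ("arbitrary code ran during pickle.loads()",))

payload = pickle.dumps(NotAModel())
pickle.loads(payload)  # prints the message: code execution at load time

# Mitigation for PyTorch checkpoints: restrict deserialization to tensors and
# primitive types (weights_only is available in recent PyTorch releases).
# import torch
# state_dict = torch.load("model.pt", weights_only=True)
```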

| Serialization Format | Execution Risk | Security Architecture | Enterprise Context |
| --- | --- | --- | --- |
| Pickle (.pkl, .pt) | High: Native code execution during load.2 | Logic-based serialization (opcodes).16 | Common in legacy PyTorch and scikit-learn models.17 |
| SafeTensors | Low: No executable code permitted.17 | Tensor-only data with JSON metadata.16 | Current best practice for model weight distribution.17 |
| GGUF | Moderate: Risk in prompt templates.21 | Binary format optimized for local inference.17 | Widely used for llama.cpp and quantized edge models.17 |
| Keras (.h5) | Moderate: Potential for Lambda layer abuse.21 | Hierarchical Data Format (HDF5).21 | Standard for TensorFlow/Keras deployments.21 |

The danger is not limited to pickle. Even newer formats like GGUF, which were designed to be safer, have been found to harbor vulnerabilities.22 Research into GGUF files revealed that malicious Jinja templates used for chat formatting could be embedded within the model metadata.21 These templates execute during the inference stage, allowing for arbitrary code execution even when the model weights themselves appear clean.22 This "inference-time code execution" is particularly dangerous because it bypasses static scanners that only look for malicious code in the initial model loading phase.21
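As a coarse first-pass control, teams can screen embedded chat templates for constructs that have no place in a formatting template before a file reaches an inference server. The sketch below is a crude heuristic under stated assumptions (the suspicious-pattern list is illustrative, and extracting the template string from GGUF or tokenizer metadata is left to whatever tooling the team already uses); it complements, rather than replaces, a dedicated model scanner.

```python
import re

# Illustrative deny-list of constructs that rarely belong in a chat-formatting
# template but show up in template-abuse payloads. Tune before real use.
SUSPICIOUS_PATTERNS = [
    r"__\w+__",              # dunder access such as __class__ or __globals__
    r"\battr\s*\(",          # Jinja attr() filter used for attribute traversal
    r"\bimport\b",
    r"\b(os|subprocess|builtins)\b",
]

def flag_chat_template(template: str) -> list[str]:
    """Return the suspicious patterns found in a model's embedded chat template."""
    return [p for p in SUSPICIOUS_PATTERNS if re.search(p, template)]

# Usage: pull the template string out of the GGUF/tokenizer metadata with your
# tooling of choice, then quarantine the artifact if anything is flagged.
# if flag_chat_template(chat_template_string):
#     raise RuntimeError("Quarantine model: suspicious chat template constructs")
```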

Furthermore, the efficacy of existing security tools is increasingly in question. JFrog's research into "PickleScan," a widely used open-source tool for vetting pickle-based models, identified three zero-day vulnerabilities (including CVE-2025-10155) that allowed attackers to completely bypass detection.18 By manipulating file extensions or using ZIP archive discrepancies, malicious actors could present a compromised model as "safe," leading to a false sense of security in the enterprise.18 Statistical analysis suggests that up to 96% of current scanner alerts are false positives, which desensitizes security teams to real threats and allows truly malicious models to infiltrate the supply chain.15

The NVIDIA AI Kill Chain and Adversarial Machine Learning

Understanding the threat landscape requires a structured approach to how attackers target machine learning systems. The NVIDIA AI Kill Chain provides a five-stage framework for modeling these attacks: Recon, Poison, Hijack, Persist, and Impact.3

The Mechanism of Poisoning

The "Poison" stage is where the most significant long-term damage occurs, particularly in the context of model weights and fine-tuning.3 Data poisoning involves manipulating the training, fine-tuning, or embedding data to introduce backdoors or biases that remain dormant until triggered.4 Research from Anthropic and the NVIDIA AI Red Team has demonstrated that these attacks are remarkably efficient.4 It only takes a tiny amount of poisoned data—as low as 0.00016% of a training corpus or approximately 250 documents—to reliably implant a hidden behavior in a 13-billion parameter model.25

These poisoned models act as "sleeper agents," performing perfectly on standard benchmarks and appearing normal during testing.4 However, when they encounter a specific "trigger" token—which could be a unique string of text, a specific image pattern, or even a bit-level manipulation of an input—the model switches to its malicious behavior.3 This could involve bypassing authentication, exfiltrating sensitive data, or generating harmful code for downstream systems.3
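Because the backdoor only manifests behaviorally, any detection effort ends up probing the model rather than inspecting its weights. The sketch below assumes a hypothetical generate() wrapper around the inference stack and a list of candidate triggers sourced from threat intelligence or fuzzing; it surfaces review candidates, not verdicts.

```python
from typing import Callable

# `generate` is a hypothetical wrapper around whatever inference stack is in
# use (local pipeline, serving framework, or API client); it is not a library function.
def probe_triggers(generate: Callable[[str], str],
                   prompts: list[str],
                   candidate_triggers: list[str]) -> list[tuple[str, str]]:
    """Return (prompt, trigger) pairs whose outputs change sharply when the
    trigger is appended. These are review candidates, not proof of a backdoor;
    exhaustive search of the trigger space is infeasible."""
    flagged = []
    for prompt in prompts:
        baseline = generate(prompt)
        for trigger in candidate_triggers:
            triggered = generate(f"{prompt} {trigger}")
            # Crude divergence check; production evaluations would use semantic
            # similarity or task-specific safety classifiers instead.
            if triggered[:80] != baseline[:80]:
                flagged.append((prompt, trigger))
    return flagged
```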

| Attack Type | Target Stage | Mechanism | Result |
| --- | --- | --- | --- |
| Pre-training Poisoning | Dataset Collection | Injecting malicious documents into web-scale data.25 | Foundational backdoor in the base model.24 |
| Fine-tuning Poisoning | Model Adaptation | Corrupting the instruction-tuning dataset.3 | Targeted compromise of enterprise-specific tasks.4 |
| RAG Poisoning | Retrieval Phase | Injecting malicious documents into vector databases.3 | Dynamic hijacking of model responses via context.3 |
| Evasion Attack | Inference | Bit-level manipulation of input data (Adversarial Examples).3 | Forces misclassification or unauthorized tool calls.3 |

The mathematical reality of poisoning is that adding more "clean" data does not mitigate the risk.25 Once the threshold of poisoned samples is reached (typically 50-100 occurrences of the trigger during training), the backdoor is permanently baked into the model weights.25 For enterprises building "Deep AI" solutions, this means that even if their proprietary fine-tuning data is clean, the base model they pulled from a public repository could already be compromised.5

The Shadow AI Epidemic and Organizational Blind Spots

The governance of AI assets is currently in a state of crisis. Shadow AI—the unauthorized use of AI models, APIs, and frameworks—creates blind spots that existing security systems cannot see.9 Statistical data from 2024 and 2025 reveals the scale of the problem: 90% of AI usage in the enterprise occurs outside the purview of IT and security teams.11

The Cost of Unregulated Innovation

The primary driver of Shadow AI is the perception that formal governance is a bottleneck to productivity.10 Employees frequently paste proprietary code, customer PII, and sensitive internal documents into public AI tools; 77% of employees have been observed sharing such information.9 This data is often used by AI vendors to train future models, meaning a company's intellectual property could be leaked to competitors through the model's future outputs.9

Furthermore, the economic impact of Shadow AI-related breaches is significant.10 Incidents involving unvetted AI tools increase the cost of a data breach by an average of $670,000.10 This is largely due to the "ghost users" and unmonitored API connections that create persistent backdoors into the corporate network.10 When developers integrate unvetted models from Hugging Face directly into production code, they are bypassing the standard software composition analysis (SCA) and vulnerability management protocols that have been the bedrock of enterprise security for the last decade.28

The Failure of Adoption: NIST AI 100-2

In early 2024, NIST released the AI 100-2 report, "Adversarial Machine Learning: A Taxonomy and Terminology of Attacks and Mitigations," to provide a common language for securing AI.12 While the framework provides a comprehensive map of threats—ranging from evasion to poisoning and model theft—actual enterprise implementation is lagging.12

| Control Category | Adoption Status (2025) | Implementation Gap |
| --- | --- | --- |
| Automated AI Security Controls | 17% of organizations.13 | 83% of organizations "operating blind".13 |
| Comprehensive AI Governance | 12% implementation.13 | 56% claim readiness but lack technical controls.13 |
| AI Data Flow Visibility | 14% of organizations.13 | 86% have no visibility into internal AI data movement.13 |
| Vulnerability Scanning for Models | 15-18% depending on sector.13 | Minimal coverage in legal and financial sectors.13 |

This 83% gap represents a "perfect storm" of security vulnerability, compliance failure, and competitive risk.13 Many organizations equate having a policy document with having operational security, yet without automated enforcement and technical barriers, employees will continue to favor convenience over safety.10

Deep AI Engineering: Securing the Machine Learning Supply Chain

For a deep AI solution provider like Veriprajna, the objective is to move beyond the superficial "wrapper" model and implement a security architecture that treats AI models as potentially malicious executable code.8 This requires a comprehensive "Secure by Design" approach across the entire machine learning lifecycle.33

The Machine Learning Bill of Materials (ML-BOM)

The first step in securing the supply chain is transparency. Traditional SBOMs (Software Bills of Materials) track libraries and versions, but AI requires an ML-BOM that captures the provenance of models and datasets.6 Standards such as CycloneDX and SPDX 3.0 have evolved to include AI-specific profiles.35

A robust ML-BOM must include, at a minimum, the identity and provenance of the base model, the lineage of the training and fine-tuning datasets, the framework and library dependencies the model requires at inference time, and the license terms attached to each of these components.
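As an illustration, a minimal ML-BOM fragment might look like the following. This is a hedged sketch in the spirit of the CycloneDX 1.5 machine learning profile; the component names, versions, and URLs are hypothetical, and any real document should be validated against the official schema.

```python
import json

# Hedged sketch of an ML-BOM in the spirit of CycloneDX 1.5's machine learning
# profile. Component names, versions, and URLs are hypothetical placeholders.
ml_bom = {
    "bomFormat": "CycloneDX",
    "specVersion": "1.5",
    "components": [
        {
            "type": "machine-learning-model",
            "name": "internal-support-classifier",   # hypothetical model name
            "version": "2.3.0",
            "hashes": [{"alg": "SHA-256", "content": "<weights digest>"}],
            "externalReferences": [
                {"type": "distribution", "url": "https://models.example.internal/"}
            ],
        },
        {
            "type": "data",
            "name": "fine-tuning-dataset-2025Q3",     # hypothetical dataset name
            "version": "1.0",
            "hashes": [{"alg": "SHA-256", "content": "<dataset digest>"}],
        },
    ],
}

print(json.dumps(ml_bom, indent=2))
```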

Cryptographic Model Signing and Weight Management

Model weights must be treated as highly sensitive intellectual property and high-risk binary artifacts.41 Incorporating a Public Key Infrastructure (PKI) for machine learning models is no longer optional for enterprises.41 This involves generating unique cryptographic identifiers (hashes) for model weights and signing them using Hardware Security Modules (HSMs) to ensure that only authorized models are loaded into production inference engines.8

In a mature deep AI environment, the inference server should utilize an "Admission Controller" that verifies the model's signature against a corporate root of trust before the weights are deserialized into memory.8 This prevents the execution of malicious models pulled from external hubs or modified by an internal adversary.8
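A minimal sketch of that admission check follows, assuming an Ed25519 corporate signing key whose private half lives in an HSM and a detached signature over the SHA-256 digest of the weight file; production deployments would typically build on an established model-signing framework rather than hand-rolled verification.

```python
import hashlib
from pathlib import Path

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey

def verify_model(weights_path: str, signature: bytes, public_key_bytes: bytes) -> None:
    """Hash the weight file and verify a detached Ed25519 signature over that
    digest before anything is allowed to deserialize it. Raises on failure."""
    digest = hashlib.sha256(Path(weights_path).read_bytes()).digest()
    Ed25519PublicKey.from_public_bytes(public_key_bytes).verify(signature, digest)

# Usage sketch inside an admission controller: refuse to load unverified weights.
# try:
#     verify_model("model.safetensors", sig_from_registry, corporate_root_pubkey)
# except InvalidSignature:
#     raise SystemExit("Refusing to load: signature does not verify against root of trust")
```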

Advanced Mitigations: Scanning and Runtime Protection

Static analysis of model files is only the first line of defense. Enterprises must adopt a multi-layered approach that includes advanced scanning and behavior-aware runtime protection.33

Deep Code Analysis (DCA) and Context-Aware SAST

Traditional SAST (Static Application Security Testing) tools struggle with AI-generated code and model artifacts because they lack architectural context.49 Next-generation tools now use Deep Code Analysis (DCA) to build a "Software Graph" of the entire codebase, mapping how user input flows from an API gateway, through an LLM runner, and potentially into a database or system shell.50 This allows for the detection of vulnerabilities like the Vanna.AI RCE (CVE-2024-5565), where a crafted prompt could cause arbitrary Python to be executed via exec() on the underlying host.1
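Where generated code must be executed at all, the output should be parsed and constrained before it reaches exec(). The sketch below uses Python's ast module with an illustrative allowlist; sandboxed execution, or eliminating exec() entirely, remains the stronger control.

```python
import ast

# Mitigation sketch for Vanna-style flaws: never hand LLM output straight to
# exec(). Parse it first and reject anything outside a narrow allowlist.
ALLOWED_CALLS = {"len", "min", "max", "sum", "sorted"}

def is_safe_snippet(code: str) -> bool:
    try:
        tree = ast.parse(code)
    except SyntaxError:
        return False
    for node in ast.walk(tree):
        if isinstance(node, (ast.Import, ast.ImportFrom)):
            return False                      # no imports of any kind
        if isinstance(node, ast.Attribute) and node.attr.startswith("__"):
            return False                      # no dunder attribute traversal
        if isinstance(node, ast.Call):
            func = node.func
            if not (isinstance(func, ast.Name) and func.id in ALLOWED_CALLS):
                return False                  # only explicitly allowed calls
    return True

# if not is_safe_snippet(llm_generated_code):
#     raise ValueError("Rejected LLM-generated code before execution")
```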

Runtime Behavior Monitoring

Because model poisoning is notoriously difficult to detect statically, continuous runtime monitoring is essential.33 At a minimum, this involves watching inference workloads for anomalous outbound network connections and spawned system processes, alerting on unexpected tool, file-system, or credential access, and comparing model outputs against an established behavioral baseline.
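As one concrete, low-level example of such monitoring, CPython's audit hooks (PEP 578) can surface process, shell, and socket activity triggered from within a model-serving process. The sketch below is a minimal illustration; the event list is an assumption to be tuned, and real deployments would pair it with container egress policies and host sensors.

```python
import sys

# Host-level behavior monitoring around model loading and inference using
# CPython audit hooks (PEP 578). The event list is illustrative.
SUSPICIOUS_EVENTS = {"subprocess.Popen", "os.system", "socket.connect"}

def audit(event: str, args: tuple) -> None:
    if event in SUSPICIOUS_EVENTS:
        # Log-and-alert here; raising an exception instead would hard-block it.
        print(f"[ai-runtime-monitor] {event} observed: args={args!r}", file=sys.stderr)

sys.addaudithook(audit)

# Anything executed after this point -- including a pickle payload trying to
# open a reverse shell -- surfaces these audit events.
```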

Confidential Computing: The Final Frontier of AI Security

For industries with extreme security requirements—such as finance, healthcare, and defense—the traditional software-based security model is insufficient because it does not protect "data in use".44 Confidential Computing, enabled by Trusted Execution Environments (TEEs), provides the hardware-backed solution needed to close this gap.44

TEEs and Secure Enclaves

Technologies such as Intel SGX, Intel TDX, and NVIDIA's Hopper/Blackwell confidential GPUs allow AI models to run in an isolated memory space.44 In this architecture, the model weights and user prompts are only decrypted inside the hardware-protected enclave.44 Even a malicious cloud administrator or an attacker with root access to the host operating system cannot inspect or modify the data being processed.44

| Technology | Implementation Level | GPU Support | Use Case |
| --- | --- | --- | --- |
| Intel SGX | Application-level isolation.52 | No | Protecting specific cryptographic keys or small modules.52 |
| Intel TDX | Virtual machine-level encryption.52 | Indirect | Secure multi-party training and fine-tuning in the cloud.52 |
| NVIDIA Hopper/Blackwell | Rack-scale confidential GPU.52 | Native | Large-scale LLM inference on sensitive data.44 |
| Confidential Containers | OCI image encryption/attestation.44 | Yes | Deploying proprietary models to untrusted edge/hybrid environments.44 |

The integration of confidential computing into the AI lifecycle allows for "Mutual Attestation".44 The model provider can verify that their weights are only being loaded into a genuine, untampered TEE, while the end-user can verify that the code running in the enclave is the precise, approved software they expect.44 This creates a foundation for "Confidential AI" that meets zero-trust and strict regulatory requirements.52
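Conceptually, the verifier's side of that attestation exchange reduces to three checks: freshness, an approved measurement, and a vendor-rooted signature. The sketch below uses a deliberately simplified, hypothetical report structure; real TEE quotes have vendor-specific formats and certificate chains that dedicated attestation services handle.

```python
import hmac
from dataclasses import dataclass

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey

@dataclass
class AttestationReport:
    """Hypothetical, simplified report; real TEE quotes (SGX/TDX/GPU CC) use
    vendor-specific formats and certificate chains."""
    measurement: bytes   # hash of the code/config measured inside the enclave
    nonce: bytes         # freshness value chosen by the verifier
    signature: bytes     # vendor-rooted signature over measurement + nonce

def release_weights_to_enclave(report: AttestationReport, expected_nonce: bytes,
                               approved_measurements: set[bytes],
                               vendor_key: Ed25519PublicKey) -> bool:
    """Release model weights only to an enclave whose measured software is on
    the approved list and whose attestation is fresh and vendor-signed."""
    if not hmac.compare_digest(report.nonce, expected_nonce):
        return False
    if report.measurement not in approved_measurements:
        return False
    try:
        vendor_key.verify(report.signature, report.measurement + report.nonce)
    except InvalidSignature:
        return False
    return True
```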

The Veriprajna Strategic Roadmap: Transitioning to Deep AI

The discovery of 100+ malicious models and the systemic failures in AI governance documented throughout 2024 and 2025 demonstrate that "API Wrappers" are a dangerous shortcut for the enterprise.1 To operate AI safely and responsibly, organizations must adopt a centralized, auditable, and deep-engineered approach to the machine learning stack.8

Implementing Centralized AI Governance

Enterprises must establish a "Single Source of Truth" for AI artifacts.8 This involves:

  1. AI Asset Registry: Creating a centralized, internal repository for all models, datasets, and dependencies, similar to a private Artifactory or model hub.8
  2. Automated Vetting Pipelines: Every model pulled from the internet must go through an automated pipeline that performs static bytecode analysis, dynamic behavioral testing, and license compliance checks; a minimal sketch of the static analysis step follows this list.8
  3. Mandatory ML-BOM Generation: No model should be deployed without a corresponding Bill of Materials that documents its provenance and training lineage.8
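To illustrate the static bytecode analysis step referenced in item 2, the sketch below walks a pickle opcode stream and flags opcodes that import or invoke callables. It is a naive check under stated assumptions, intended to complement dedicated scanners and SafeTensors conversion rather than replace them.

```python
import pickletools

# Illustrative static check for the vetting pipeline: flag pickle opcodes that
# import or invoke callables. Catches naive payloads only. (For .pt archives,
# extract the embedded data.pkl from the ZIP first.)
DANGEROUS_OPCODES = {"GLOBAL", "STACK_GLOBAL", "REDUCE", "INST", "OBJ", "NEWOBJ", "NEWOBJ_EX"}

def scan_pickle_stream(data: bytes) -> list[str]:
    findings = []
    try:
        for opcode, arg, _pos in pickletools.genops(data):
            if opcode.name in DANGEROUS_OPCODES:
                findings.append(f"{opcode.name}: {arg!r}")
    except Exception as exc:   # malformed or truncated streams are also suspect
        findings.append(f"parse error: {exc}")
    return findings

# with open("candidate_model.pkl", "rb") as fh:
#     report = scan_pickle_stream(fh.read())
# if report:
#     print("Quarantine for manual review:", report)
```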

Deep Engineering for Resilience

Beyond governance, the engineering of AI applications must shift from "convenience-first" to "security-first".33

The incidents of early 2024 have proven that the AI supply chain is the new frontline of cybersecurity.30 Organizations that continue to treat AI as a mere extension of software development, without accounting for the unique risks of poisoning, evasion, and weight manipulation, are exposing themselves to catastrophic failure.23 By adopting the deep AI engineering principles outlined here, enterprises can move from "operating on luck" to a posture of verifiable, hardware-backed resilience.8 The goal is to make AI deployment "boring"—a predictable, auditable, and secure component of the corporate mission.8

The Convergence of AI Security and Software Supply Chain Security

A final, critical insight emerged from the research of 2024: AI security and software supply chain security are no longer separate problems.29 AI systems do not operate in a vacuum; they are built and deployed through the same CI/CD pipelines and registries that have been targeted by open-source supply chain attacks for years.30 If a model is secure but the Python library it runs on is compromised, the system is breached.8 If the training pipeline's container image is tainted, the model weights become untrustworthy.30

The industry must therefore move toward a "Unified Software Supply Chain" approach.11 This means that the provenance and integrity of the model, the dataset, the OSS dependencies, and the infrastructure must all be managed and verified simultaneously.8 Any dichotomy between "Software Assets" and "AI Assets" is a dangerous gap that attackers will exploit.29

As generative AI continues to accelerate the speed of development, the traditional human-in-the-loop review processes are collapsing.30 Large, AI-generated code changes are difficult to review under pressure, leading to a "shallow review" culture that removes a primary security control.30 In this environment, automated, deterministic verification—rooted in cryptographic signatures and ML-BOMs—becomes the only viable path for maintaining enterprise integrity.8

The whitepaper presented here is more than a technical guide; it is a strategic imperative for the modern CISO.10 The discovery of backdoored models on Hugging Face was not an isolated incident but a symptom of a systemic governance failure.2 Addressing this requires a commitment to deep AI engineering, where security is not an overlay but a foundational element of the model lifecycle.33 Veriprajna stands ready to guide organizations through this transition, from the fragility of Shadow AI to the resilience of a secure, deep AI stack.8

Works cited

  1. Top JFrog Security Research Discoveries of 2024, accessed February 9, 2026, https://jfrog.com/blog/top-jfrog-security-research-discoveries-of-2024/
  2. Hugging Face AI Riddled With 100 Malicious Code-Execution Models - Dark Reading, accessed February 9, 2026, https://www.darkreading.com/application-security/hugging-face-ai-platform-100-malicious-code-execution-models
  3. Modeling Attacks on AI-Powered Apps with the AI Kill Chain ..., accessed February 9, 2026, https://developer.nvidia.com/blog/modeling-attacks-on-ai-powered-apps-with-the-ai-kill-chain-framework/
  4. AI Model Poisoning in 2026: How It Works and the First Line Defense Your Business Needs - The LastPass Blog, accessed February 9, 2026, https://blog.lastpass.com/posts/model-poisoning
  5. Enterprise AI Risk: Security, Providers, and Regulation - George Mudie, accessed February 9, 2026, https://georgemudie.com/blog/enterprise-ai-part2-risk-security
  6. Securing the AI Supply Chain: A Framework for AI Software Bills of Materials and Model Provenance Assurance - Scholar Publishing, accessed February 9, 2026, https://www.journals.scholarpublishing.org/index.php/TMLAI/article/download/19884/11811/28416
  7. Same same but also different: Google guidance on AI supply chain security, accessed February 9, 2026, https://cloud.google.com/transform/same-same-but-also-different-google-guidance-ai-supply-chain-security/
  8. Securing The AI/LLM Supply Chain - AppSecEngineer, accessed February 9, 2026, https://www.appsecengineer.com/blog/securing-the-ai-llm-supply-chain
  9. Shadow AI: Risks, Challenges, and Solutions in 2026 - Invicti, accessed February 9, 2026, https://www.invicti.com/blog/web-security/shadow-ai-risks-challenges-solutions-for
  10. What Is Shadow AI? Definition | Proofpoint US, accessed February 9, 2026, https://www.proofpoint.com/us/threat-reference/shadow-ai
  11. JFrog Exposes Enterprise AI Blind Spots, Driving Centralized Software Supply Chain Governance, accessed February 9, 2026, https://investors.jfrog.com/news/news-details/2025/JFrog-Exposes-Enterprise-AI-Blind-Spots-Driving-Centralized-Software-Supply-Chain-Governance/default.aspx
  12. AI 100-2 E2025, Adversarial Machine Learning: A Taxonomy and Terminology of Attacks and Mitigations - NIST CSRC, accessed February 9, 2026, https://csrc.nist.gov/pubs/ai/100/2/e2025/final
  13. 2025 AI Security Gap: 83% of Organizations Flying Blind - Kiteworks, accessed February 9, 2026, https://www.kiteworks.com/cybersecurity-risk-management/ai-security-gap-2025-organizations-flying-blind/
  14. New Study Reveals Major Gap Between Enterprise AI Adoption and Security Readiness, accessed February 9, 2026, https://www.prnewswire.com/news-releases/new-study-reveals-major-gap-between-enterprise-ai-adoption-and-security-readiness-302469214.html
  15. JFrog and Hugging Face Team to Improve Machine Learning Security and Transparency for Developers, accessed February 9, 2026, https://investors.jfrog.com/news/news-details/2025/JFrog-and-Hugging-Face-Team-to-Improve-Machine-Learning-Security-and-Transparency-for-Developers/default.aspx
  16. Pickle Scanning - Hugging Face, accessed February 9, 2026, https://huggingface.co/docs/hub/security-pickle
  17. Model Saving Formats 101: pickle vs safetensors vs GGUF — with conversion code & recipes | by Ankit Wahane | Medium, accessed February 9, 2026, https://medium.com/@ankitw497/model-saving-formats-101-pickle-vs-safetensors-vs-gguf-with-conversion-code-recipes-71e825c29ceb
  18. PyTorch Users at Risk: Unveiling 3 Zero-Day PickleScan Vulnerabilities - JFrog, accessed February 9, 2026, https://jfrog.com/blog/unveiling-3-zero-day-vulnerabilities-in-picklescan/
  19. PickleBall: Secure Deserialization of Pickle-based Machine Learning Models - Brown Computer Science, accessed February 9, 2026, https://cs.brown.edu/~vpk/papers/pickleball.ccs25.pdf
  20. Remote Code Execution With Modern AI/ML Formats and Libraries, accessed February 9, 2026, https://unit42.paloaltonetworks.com/rce-vulnerabilities-in-ai-python-libraries/
  21. JFrog and Hugging Face Join Forces to Expose Malicious ML Models, accessed February 9, 2026, https://jfrog.com/blog/jfrog-and-hugging-face-join-forces/
  22. LLM Backdoors at the Inference Level: The Threat of Poisoned Templates - Pillar Security, accessed February 9, 2026, https://www.pillar.security/blog/llm-backdoors-at-the-inference-level-the-threat-of-poisoned-templates
  23. Four Pillars AI Security Enterprise Implementation | by Tahir - Medium, accessed February 9, 2026, https://medium.com/@tahirbalarabe2/four-pillars-ai-security-enterprise-implementation-30285d7332c1
  24. LLM04:2025 Data and Model Poisoning - OWASP Gen AI Security Project, accessed February 9, 2026, https://genai.owasp.org/llmrisk/llm042025-data-and-model-poisoning/
  25. Understanding LLM Poisoning | DigitalOcean, accessed February 9, 2026, https://www.digitalocean.com/community/tutorials/understanding-llm-poisoning
  26. Adversarial Machine Learning: A Taxonomy and Terminology of ..., accessed February 9, 2026, https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.100-2e2025.pdf
  27. What is Shadow AI? Risks, Examples, and Governance - Securiti, accessed February 9, 2026, https://securiti.ai/what-is-shadow-ai/
  28. Shadow AI Risks and Organization Examples - zenarmor.com, accessed February 9, 2026, https://www.zenarmor.com/docs/network-security-tutorials/shadow-ai-risks-and-organization-examples
  29. Securing the intersection of AI models and software supply chains - Cloudsmith, accessed February 9, 2026, https://cloudsmith.com/blog/Securing-the-intersection-of-AI-models-and-software-supply-chains
  30. AI Security and the Expanding Software Supply Chain Attack Surface - Xygeni, accessed February 9, 2026, https://xygeni.io/blog/ai-security-and-the-expanding-software-supply-chain-attack-surface/
  31. Adversarial Machine Learning: A Taxonomy and Terminology of Attacks and Mitigations - NIST Technical Series Publications, accessed February 9, 2026, https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.100-2e2023.ipd.pdf
  32. Small Models, Big Problems: Why Your AI Agents Might Be Sitting Ducks - Enkrypt AI, accessed February 9, 2026, https://www.enkryptai.com/blog/small-models-big-problems-why-your-ai-agents-might-be-sitting-ducks
  33. AI Model Security: What It Is and How to Implement It - Palo Alto Networks, accessed February 9, 2026, https://www.paloaltonetworks.com/cyberpedia/what-is-ai-model-security
  34. How to Secure AI Infrastructure: A Secure by Design Guide - Palo Alto Networks, accessed February 9, 2026, https://www.paloaltonetworks.com/cyberpedia/ai-infrastructure-security
  35. What Is an AI-BOM (AI Bill of Materials)? & How to Build It - Palo Alto Networks, accessed February 9, 2026, https://www.paloaltonetworks.com/cyberpedia/what-is-an-ai-bom
  36. Machine Learning Bill of Materials (ML-BOM) - CycloneDX, accessed February 9, 2026, https://cyclonedx.org/capabilities/mlbom/
  37. Building an Open AIBOM Standard in the Wild - arXiv, accessed February 9, 2026, https://arxiv.org/html/2510.07070v1
  38. How CycloneDX v1.5 Increases Trust and Transparency in More Industries, accessed February 9, 2026, https://owasp.org/blog/2023/06/23/CycloneDX-v1.5
  39. Open Source AI Supply Chain Security: Protecting Against Model Poisoning - VerityAI, accessed February 9, 2026, https://verityai.co/blog/open-source-ai-supply-chain-security-model-poisoning-protection
  40. Joint Cybersecurity Information AI Data Security, accessed February 9, 2026, https://media.defense.gov/2025/May/22/2003720601/-1/-1/0/CSI_AI_DATA_SECURITY.PDF
  41. Building Trust in AI Supply Chains: Why Model Signing Is Critical for ..., accessed February 9, 2026, https://www.coalitionforsecureai.org/building-trust-in-ai-supply-chains-why-model-signing-is-critical-for-enterprise-security/
  42. M3AAWG AI Model Lifecycle Security Best Common Practices, accessed February 9, 2026, https://www.m3aawg.org/AIModelLifecycleSecurityBCP
  43. A Playbook for Securing AI Model Weights - RAND, accessed February 9, 2026, https://www.rand.org/pubs/research_briefs/RBA2849-1.html
  44. Enhancing AI inference security with confidential computing: A path to private data inference with proprietary LLMs - Red Hat Emerging Technologies, accessed February 9, 2026, https://next.redhat.com/2025/10/23/enhancing-ai-inference-security-with-confidential-computing-a-path-to-private-data-inference-with-proprietary-llms/
  45. Sentry: Authenticating Machine Learning Artifacts on the Fly - arXiv, accessed February 9, 2026, https://arxiv.org/html/2510.00554v1
  46. Trustway Proteccio NetHSM - Hardware Security Module - Eviden, accessed February 9, 2026, https://eviden.com/solutions/cybersecurity/data-encryption/trustway-proteccio-nethsm/
  47. Navigating secure AI deployment: Architecture for enhancing AI system security and safety, accessed February 9, 2026, https://www.redhat.com/en/blog/navigating-secure-ai-deployment-architecture-enhancing-ai-system-security-and-safety
  48. What is automated code scanning? - Sonar, accessed February 9, 2026, https://www.sonarsource.com/resources/library/automated-code-scanning/
  49. A DevSecOps Guide to Scanning AI-Generated Code for Hidden Flaws - Bright Security, accessed February 9, 2026, https://brightsec.com/a-devsecops-guide-to-scanning-ai-generated-code-for-hidden-flaws/
  50. Introducing Apiiro AI-SAST: Static Scanning Reimagined – From Code to Runtime, accessed February 9, 2026, https://apiiro.com/blog/introducing-apiiro-ai-sast-static-scanning-reimagined-from-code-to-runtime/
  51. Mastering secure AI on Google Cloud: A practical guide for enterprises, accessed February 9, 2026, https://cloud.google.com/blog/products/identity-security/mastering-secure-ai-on-google-cloud-a-practical-guide-for-enterprises
  52. What Is Confidential AI? - Phala Network, accessed February 9, 2026, https://phala.com/learn/What-Is-Confidential-AI
  53. Confidential Computing: Powering the Next Generation of Trusted AI - Intel, accessed February 9, 2026, https://cdrdv2-public.intel.com/861663/confidential-computing-ai-whitepaper.pdf
  54. AI Security with Confidential Computing - NVIDIA, accessed February 9, 2026, https://www.nvidia.com/en-us/data-center/solutions/confidential-computing/
  55. Evaluating the Performance of the DeepSeek Model in Confidential Computing Environment, accessed February 9, 2026, https://arxiv.org/html/2502.11347v1
  56. How to Secure AI and Model Data with Storage Infrastructure, accessed February 9, 2026, https://blog.purestorage.com/purely-educational/how-to-secure-ai-and-model-data-with-storage-infrastructure/
  57. AI & LLM Security Collection - AppSecEngineer, accessed February 9, 2026, https://www.appsecengineer.com/enterprises/ai-llm-security-collection

