AI & LLM Threat Modeling
Systematic identification of attack surfaces in your LLM integrations, agentic workflows, MCP/A2A protocols, and RAG pipelines before attackers find them.
Expert Security Engineer | AI Security | Security Architect | Threat Modeling
Principal Security Engineer with 10+ years driving secure-by-design architecture across global SaaS platforms and AI/LLM-powered features. Combining deep offensive expertise with architectural influence to shape how teams build secure systems at scale.
Organizations are racing to adopt AI, but security isn't keeping pace. I bridge that gap with hands-on, battle-tested expertise.
Systematic identification of attack surfaces in your LLM integrations, agentic workflows, MCP/A2A protocols, and RAG pipelines before attackers find them.
Adversarial testing of AI-powered applications using real-world attack techniques. From prompt injection chains to tool poisoning — finding what scanners miss.
Design secure-by-default AI pipelines with automated guardrails. SAST/SCA/IaC scanning in CI/CD, LLM-driven code review, and secrets elimination.
Navigate the EU AI Act (August 2026 deadline), NIST AI RMF, and ISO 42001. AI security frameworks, risk classification, and governance structures.
Everyone is deploying AI. Almost nobody is securing it properly.
AI agents are becoming autonomous decision-makers with access to critical APIs, databases, and infrastructure. A single compromised agent can cascade through your entire system within hours.
The EU AI Act high-risk compliance deadline hits August 2026. 77% of organizations have already experienced breaches in their AI systems. The gap between adoption ambition and security reality has never been wider.
Active researcher across elite platforms, contributing to the security of Google, Twitter, Mastercard, and more.
100+ pentest programs with 500+ vulnerabilities disclosed across major tech companies.
Threat modeling for LLM architectures, multi-agent frameworks, and generative AI ecosystems.
Deep manual testing for OWASP Top 10, business logic flaws, and access control issues.
Automated SAST/SCA/IaC in CI/CD. Removed 98% of hardcoded secrets from 520+ repos.
Continuous security validation in Kubernetes and containerized environments.
OpenVAS/Nessus plugin development, CVE reproduction, exploit writing, and automation.
Data-driven analysis of the AI security landscape.
A data-driven look at the widening chasm between enterprise AI ambition and the security infrastructure needed to support it. Why 83% of organizations plan agentic AI but only 29% are security-ready.
There's a pattern I've seen repeatedly over the past year while working on security architecture for AI-driven systems: engineering teams are shipping LLM-powered features at unprecedented speed, while security teams are still figuring out what questions to ask.
Industry research confirms what I observe on the ground. 90% of organizations are either actively implementing or planning LLM use cases. Generative AI usage across enterprises has nearly doubled in under a year, with 65% of organizations now regularly using it in production. And the next wave is even bigger: 83% of organizations planned to deploy agentic AI capabilities into core business functions.
While adoption numbers climb into the 80-90% range, security tells a completely different story. Only 5% of organizations feel highly confident in their AI security posture. For every 100 companies deploying LLMs, only 5 believe they can actually defend them.
Among organizations planning agentic AI, only 29% felt genuinely ready to do so securely. That means 71% are deploying autonomous AI agents — systems that can read databases, call APIs, and make decisions — without confidence they can prevent those agents from being exploited.
This isn't theoretical. Analysis of real-world attacks in late 2025 showed adversaries adapting techniques specifically for AI agent environments. The most common objective: system prompt extraction — pulling out the hidden instructions that define how an agent behaves, which tools it can access, and where guardrails are weakest.
Two patterns dominated: hypothetical scenario framing, wrapping malicious instructions inside fake training exercises, and obfuscation techniques hiding commands in JSON metadata. These are sophisticated, multi-step operations exploiting the fundamental trust models of agentic systems.
"Indirect attacks targeting agent features succeeded with fewer attempts and broader impact than direct prompt injections, highlighting external data sources as a primary risk vector."
In 2025, attackers exploited a compromised chat agent integration to breach over 700 organizations in one of the largest SaaS supply chain incidents. Separately, compromised credentials from AI agent deployments went undetected across dozens of enterprise environments for months.
Research found that cascading failures in multi-agent systems propagate faster than incident response teams can contain them. In simulations, a single compromised agent poisoned 87% of downstream decision-making within 4 hours.
Treat every AI agent as a privileged insider. Apply the same identity and access management rigor you would to a human admin. Scope permissions tightly. Monitor behavior. Rotate credentials.
Threat model your AI systems specifically. Generic app security models miss prompt injection, tool poisoning, context manipulation, and agent chain attacks. Use OWASP's LLM Top 10 and Agentic AI Top 10 as starting frameworks.
Red team before you ship. Test AI features with adversarial techniques — not just functional QA. Probe for prompt leakage, indirect injection, and privilege escalation through tool misuse.
Build governance now. The August 2026 EU AI Act deadline is closer than it feels. Start with risk classification, data lineage, and human oversight checkpoints.
The organizations that will thrive aren't the ones deploying AI fastest. They're the ones deploying it with the confidence that comes from having actually secured it.
Sources: Lakera AI Security Trends, eSecurity Planet Q4 2025 Analysis, McKinsey Global AI Survey, HiddenLayer AI Threat Landscape, Darktrace State of AI Cybersecurity, Immuta Data Security Report.
Interested in security consulting, collaboration, or just want to connect?