AI
AI security encompasses both protecting AI systems from attack and understanding the new vulnerability classes that AI introduces into applications. As organizations rapidly integrate large language models (LLMs), machine learning pipelines, and AI-powered features into their products, the attack surface has expanded in ways that traditional application security frameworks don't fully address.
Key threats to AI systems include prompt injection — where attackers manipulate LLM behavior through crafted inputs — data poisoning of training datasets, model extraction through repeated API queries, and adversarial examples that cause misclassification. Indirect prompt injection, where malicious instructions are embedded in data the AI processes (emails, documents, web pages), is emerging as one of the most significant security challenges for AI-integrated applications.
AI also introduces new categories of application risk: insecure output handling where LLM responses are rendered unsafely, excessive agency when AI agents are given too much access, sensitive information disclosure through training data leakage, and supply chain risks from fine-tuned models and third-party plugins. The OWASP Top 10 for LLM Applications provides a structured framework for understanding these risks.
On the defensive side, AI is being used to enhance security operations — automating vulnerability detection, analyzing malicious patterns, and accelerating incident response.
This page collects AI security research, LLM vulnerability techniques, defensive strategies, and resources covering the intersection of artificial intelligence and application security.
| Date Added | Link | Excerpt |
|---|---|---|
| 2026-04-11 NEW 2026 | DPRK Threat Actor Compromises Axios NPM Package | DPRK Threat Actor Compromises Axios NPM Package |
| 2026-04-11 NEW 2026 | 16 Minutes to Impact: npm crypto-draining malware | 16 Minutes to Impact: npm crypto-draining malware |
| 2026-04-11 NEW 2026 | Widespread npm Supply Chain Attack: Billions at Risk | Widespread npm Supply Chain Attack: Billions at Risk |
| 2026-04-11 NEW 2026 | npm Supply Chain Attack: debug, chalk, and Beyond | npm Supply Chain Attack: debug, chalk, and Beyond |
| 2026-04-11 NEW 2026 | The Nx s1ngularity Attack: Inside the Credential Leak | The Nx s1ngularity Attack: Inside the Credential Leak |
| 2026-04-11 NEW 2026 | s1ngularity: Nx supply chain attack leaks secrets | s1ngularity: Nx supply chain attack leaks secrets |
| 2026-04-11 NEW 2026 | CISA 2025 Minimum Elements for SBOM | CISA 2025 Minimum Elements for SBOM |
| 2026-04-11 NEW 2026 | SLSA 3 Compliance with GitHub Actions and Sigstore | SLSA 3 Compliance with GitHub Actions and Sigstore |
| 2026-04-11 NEW 2026 | cosign Verification of npm Provenance and GitHub Attestations | cosign Verification of npm Provenance and GitHub Attestations |
| 2026-04-11 NEW 2026 | Securing CI/CD After tj-actions and reviewdog Attacks | Securing CI/CD After tj-actions and reviewdog Attacks |
| 2026-04-11 NEW 2026 | GitHub Actions Supply Chain Attack: Coinbase to tj-actions | GitHub Actions Supply Chain Attack: Coinbase to tj-actions |
| 2026-04-11 NEW 2026 | tj-actions/changed-files supply chain attack | tj-actions/changed-files supply chain attack |
| 2026-04-11 NEW 2026 | tj-actions/changed-files compromise (CVE-2025-30066) | tj-actions/changed-files compromise (CVE-2025-30066) |
| 2026-04-11 NEW 2026 | XZ Backdoor CVE-2024-3094 - JFrog | XZ Backdoor CVE-2024-3094 - JFrog |
| 2026-04-11 NEW 2026 | xz Backdoor CVE-2024-3094 - OpenSSF | xz Backdoor CVE-2024-3094 - OpenSSF |
| 2026-04-11 NEW 2026 | XZ Utils backdoor (CVE-2024-3094) overview | XZ Utils backdoor (CVE-2024-3094) overview |
| 2026-04-11 NEW 2026 | Ultralytics PyPI package delivers coinminer | Ultralytics PyPI package delivers coinminer |
| 2026-04-11 NEW 2026 | Supply-chain attack analysis: Ultralytics | Supply-chain attack analysis: Ultralytics |
| 2026-04-11 NEW 2026 | GitLab discovers widespread npm supply chain attack | GitLab discovers widespread npm supply chain attack |
| 2026-04-11 NEW 2026 | Shai-Hulud: Self-Replicating Worm Compromises 500+ NPM Packages | Shai-Hulud: Self-Replicating Worm Compromises 500+ NPM Packages |
| 2026-04-11 NEW 2026 | Shai-Hulud npm supply chain attack overview | Shai-Hulud npm supply chain attack overview |
| 2026-04-11 NEW 2026 | Shai-Hulud Worm Compromises npm Ecosystem | Shai-Hulud Worm Compromises npm Ecosystem |
| 2026-04-11 NEW 2026 | Shai-Hulud 2.0: 25K+ Repos Exposed | Shai-Hulud 2.0: 25K+ Repos Exposed |
| 2026-04-11 NEW 2026 | Shai-Hulud 2.0: Detection and Defense Guidance | Shai-Hulud 2.0: Detection and Defense Guidance |
| 2026-04-11 NEW 2026 | Shai-Hulud 2.0 npm worm: analysis | Shai-Hulud 2.0 npm worm: analysis |
| 2026-04-11 NEW 2026 | LLM Red Teaming Guide (Open Source) - Promptfoo | LLM Red Teaming Guide (Open Source) - Promptfoo |
| 2026-04-11 NEW 2026 | Defining LLM Red Teaming - NVIDIA Technical Blog | Defining LLM Red Teaming - NVIDIA Technical Blog |
| 2026-04-11 NEW 2026 | Large Reasoning Models are Autonomous Jailbreak Agents | Large Reasoning Models are Autonomous Jailbreak Agents |
| 2026-04-11 NEW 2026 | Involuntary Jailbreak: On Self-Prompting Attacks | Involuntary Jailbreak: On Self-Prompting Attacks |
| 2026-04-11 NEW 2026 | Single Line of Code Can Jailbreak 11 AI Models Including ChatGPT, Claude, Gemini | Single Line of Code Can Jailbreak 11 AI Models Including ChatGPT, Claude, Gemini |
| 2026-04-11 NEW 2026 | OWASP Top 10 for LLMs 2025: Key Risks and Mitigation Strategies | OWASP Top 10 for LLMs 2025: Key Risks and Mitigation Strategies |
| 2026-04-11 NEW 2026 | OWASP Top 10 for LLM Applications 2025 | OWASP Top 10 for LLM Applications 2025 |
| 2026-04-11 NEW 2026 | Practical Poisoning Attacks against Retrieval-Augmented Generation | Practical Poisoning Attacks against Retrieval-Augmented Generation |
| 2026-04-11 NEW 2026 | RAG Safety: Exploring Knowledge Poisoning Attacks to RAG | RAG Safety: Exploring Knowledge Poisoning Attacks to RAG |
| 2026-04-11 NEW 2026 | Benchmarking Poisoning Attacks against Retrieval-Augmented Generation | Benchmarking Poisoning Attacks against Retrieval-Augmented Generation |
| 2026-04-11 NEW 2026 | Q4 2025 AI Agent Security Trends | Q4 2025 AI Agent Security Trends |
| 2026-04-11 NEW 2026 | OWASP GenAI Top 10 Risks and Mitigations for Agentic AI Security | OWASP GenAI Top 10 Risks and Mitigations for Agentic AI Security |
| 2026-04-11 NEW 2026 | AI Agent Attacks in Q4 2025 Signal New Risks for 2026 | AI Agent Attacks in Q4 2025 Signal New Risks for 2026 |
| 2026-04-11 NEW 2026 | Protecting Against Indirect Prompt Injection Attacks in MCP | Protecting Against Indirect Prompt Injection Attacks in MCP |
| 2026-04-11 NEW 2026 | Indirect Prompt Injection Attacks: Hidden AI Risks | Indirect Prompt Injection Attacks: Hidden AI Risks |
| 2026-04-11 NEW 2026 | Fooling AI Agents: Web-Based Indirect Prompt Injection Observed in the Wild | Fooling AI Agents: Web-Based Indirect Prompt Injection Observed in the Wild |
| 2026-04-11 NEW 2026 | Anatomy of an Indirect Prompt Injection | Anatomy of an Indirect Prompt Injection |
| 2026-04-11 NEW 2026 | Critical RCE Vulnerability in mcp-remote: CVE-2025-6514 | Critical RCE Vulnerability in mcp-remote: CVE-2025-6514 |
| 2026-04-11 NEW 2026 | New Prompt Injection Attack Vectors Through MCP Sampling | New Prompt Injection Attack Vectors Through MCP Sampling |
| 2026-04-11 NEW 2026 | A Timeline of Model Context Protocol (MCP) Security Breaches | A Timeline of Model Context Protocol (MCP) Security Breaches |
| 2026-04-11 NEW 2026 | The Vulnerable MCP Project: Comprehensive MCP Security Database | The Vulnerable MCP Project: Comprehensive MCP Security Database |
| 2026-04-11 NEW 2026 | MCP Security: Critical Vulnerabilities Every CISO Must Address in 2025 | MCP Security: Critical Vulnerabilities Every CISO Must Address in 2025 |
| 2026-04-11 NEW 2026 | OWASP LLM Prompt Injection Prevention Cheat Sheet | OWASP LLM Prompt Injection Prevention Cheat Sheet |
| 2026-04-11 NEW 2026 | Attention Tracker: Detecting Prompt Injection Attacks in LLMs | Attention Tracker: Detecting Prompt Injection Attacks in LLMs |
| 2026-04-11 NEW 2026 | How Microsoft Defends Against Indirect Prompt Injection Attacks | How Microsoft Defends Against Indirect Prompt Injection Attacks |
| 2026-04-10 NEW 2026 | MCP Security Vulnerabilities: Prompt Injection and Tool Poisoning | MCP Security Vulnerabilities: Prompt Injection and Tool Poisoning |
| 2026-04-10 NEW 2026 | How Agentic Tool Chain Attacks Threaten AI Agent Security | How Agentic Tool Chain Attacks Threaten AI Agent Security |
| 2026-04-10 NEW 2026 | 8,000+ MCP Servers Exposed: The Agentic AI Security Crisis of 2026 | 8,000+ MCP Servers Exposed: The Agentic AI Security Crisis of 2026 |
| 2026-04-10 NEW 2026 | Agentic AI Security in Production: MCP, Memory Poisoning, Tool Misuse | Agentic AI Security in Production: MCP, Memory Poisoning, Tool Misuse |
| 2026-04-10 NEW 2026 | Offensive Security for MCP Servers: How to Prevent AI Agent Exploits | Offensive Security for MCP Servers: How to Prevent AI Agent Exploits |
| 2026-04-10 NEW 2026 | The New AI Attack Surface: 3 AI Security Predictions for 2026 | The New AI Attack Surface: 3 AI Security Predictions for 2026 |
| 2026-04-10 NEW 2026 | Introduction to Data Poisoning: A 2026 Perspective | Introduction to Data Poisoning: A 2026 Perspective |
| 2026-04-10 NEW 2026 | AI Security Research — December 2025 | AI Security Research — December 2025 |
| 2026-04-10 NEW 2026 | From Prompt Injections to Protocol Exploits in LLM Agent Workflows | From Prompt Injections to Protocol Exploits in LLM Agent Workflows |
| 2026-04-10 NEW 2026 | LLM Security Guide: OWASP GenAI Top-10 Risks | LLM Security Guide: OWASP GenAI Top-10 Risks |
| 2026-04-10 NEW 2026 | Supply Chain Attacks Are Exploiting Our Assumptions | Supply Chain Attacks Are Exploiting Our Assumptions |
| 2026-04-10 NEW 2026 | Protecting Your Software Supply Chain: Typosquatting and Dependency Confusion | Protecting Your Software Supply Chain: Typosquatting and Dependency Confusion |
| 2026-04-10 NEW 2026 | LiteLLM PyPI Packages Compromised in TeamPCP Supply Chain Attacks | LiteLLM PyPI Packages Compromised in TeamPCP Supply Chain Attacks |
| 2026-04-10 NEW 2026 | Supply-Chain Attack Defense: Developer Host Machine Hardening | Supply-Chain Attack Defense: Developer Host Machine Hardening |
| 2026-04-10 NEW 2026 | TeamPCP Credential Infostealer Chain Attack Reaches Python's LiteLLM | TeamPCP Credential Infostealer Chain Attack Reaches Python's LiteLLM |
| 2026-04-10 NEW 2026 | Compromised dYdX npm and PyPI Packages Deliver Wallet Stealers | Compromised dYdX npm and PyPI Packages Deliver Wallet Stealers |
| 2026-04-10 NEW 2026 | N. Korean Hackers Spread 1,700 Malicious Packages Across npm, PyPI, Go, Rust | N. Korean Hackers Spread 1,700 Malicious Packages Across npm, PyPI, Go, Rust |
| 2026-04-10 NEW 2026 | The Next Wave of Supply Chain Attacks: NPM, PyPI, and Docker Hub | The Next Wave of Supply Chain Attacks: NPM, PyPI, and Docker Hub |
| 2026-04-10 NEW 2026 | PyPI, npm, and the New Frontline of Software Supply Chain Attacks | PyPI, npm, and the New Frontline of Software Supply Chain Attacks |
| 2026-04-10 NEW 2026 | Malicious PyPI and npm Packages Exploiting Dependencies in Supply Chain Attacks | Malicious PyPI and npm Packages Exploiting Dependencies in Supply Chain Attacks |
| 2026-04-10 NEW 2026 | Supply Chain Attack: How Attackers Weaponize Software | Supply Chain Attack: How Attackers Weaponize Software |
| 2026-04-10 NEW 2026 | 2026 Supply Chain Security Report: Attack Analysis | 2026 Supply Chain Security Report: Attack Analysis |
| 2026-04-10 NEW 2026 | Securing Software Supply Chains: 2026 Priorities | Securing Software Supply Chains: 2026 Priorities |
| 2026-04-10 NEW 2026 | 2026 Software Supply Chain Report | 2026 Software Supply Chain Report |
| 2026-04-10 NEW 2026 | Supply Chain Attacks 2025-2026: Axios, Shai-Hulud, and More | Supply Chain Attacks 2025-2026: Axios, Shai-Hulud, and More |
| 2026-04-10 NEW 2026 | Prompt Injection Attacks in LLMs: A Comprehensive Review | Prompt Injection Attacks in LLMs: A Comprehensive Review |
| 2026-04-10 NEW 2026 | Prompt Injection Attacks: Examples, Techniques, and Defence | Prompt Injection Attacks: Examples, Techniques, and Defence |
| 2026-04-10 NEW 2026 | Indirect Prompt Injection: The Hidden Threat | Indirect Prompt Injection: The Hidden Threat |
| 2026-04-10 NEW 2026 | AI Agent Security in 2026: Prompt Injection and Memory Poisoning | AI Agent Security in 2026: Prompt Injection and Memory Poisoning |
| 2026-04-10 NEW 2026 | Prompt Injection Attacks in 2025: Vulnerabilities and Defense | Prompt Injection Attacks in 2025: Vulnerabilities and Defense |
| 2026-04-10 NEW 2026 | Prompt Injection: The Most Common AI Exploit in 2025 | Prompt Injection: The Most Common AI Exploit in 2025 |
| 2026-04-10 NEW 2026 | AI Prompt Injection Attacks: How They Work (2026) | AI Prompt Injection Attacks: How They Work (2026) |
| 2026-04-10 NEW 2026 | LLM Security Risks in 2026: Prompt Injection, RAG, and Shadow AI | LLM Security Risks in 2026: Prompt Injection, RAG, and Shadow AI |
| 2026-04-06 NEW 2026 | How to Prevent OWASP Software Supply Chain Failures | How to Prevent OWASP Software Supply Chain Failures |
| 2026-04-06 NEW 2026 | Axios Compromise on npm Introduces Hidden Malicious Package | Axios Compromise on npm Introduces Hidden Malicious Package |
| 2026-04-06 NEW 2026 | NPM Supply Chain Attacks Explained: Dependency Confusion Exploits and Defense | NPM Supply Chain Attacks Explained: Dependency Confusion Exploits and Defense |
| 2026-04-06 NEW 2026 | Axios npm Package Compromised in Supply Chain Attack | Axios npm Package Compromised in Supply Chain Attack |
| 2026-04-06 NEW 2026 | The 2026 Guide to Software Supply Chain Security | The 2026 Guide to Software Supply Chain Security |
| 2026-04-06 NEW 2026 | Best AI Security Tools in 2026 | Best AI Security Tools in 2026 |
| 2026-04-06 NEW 2026 | Navigating Amazon Bedrock's Multi-Agent Applications | Navigating Amazon Bedrock's Multi-Agent Applications |
| 2026-04-06 NEW 2026 | OWASP Top 10 for Agents 2026 | OWASP Top 10 for Agents 2026 |
| 2026-04-06 NEW 2026 | Google Workspace's Continuous Approach to Mitigating Prompt Injection | Google Workspace's Continuous Approach to Mitigating Prompt Injection |
| 2026-04-06 NEW 2026 | Prompt Injection Attacks in LLMs: What Developers Need to Know in 2026 | Prompt Injection Attacks in LLMs: What Developers Need to Know in 2026 |
| 2026-04-03 2026 | Prompt Injection Attacks in LLMs: Vulnerabilities, Exploitation & Defense | Prompt Injection Attacks in LLMs: Vulnerabilities, Exploitation & Defense |
| 2026-04-03 2026 | How AI Red Teaming Fixes Vulnerabilities in Your AI Systems | How AI Red Teaming Fixes Vulnerabilities in Your AI Systems |
| 2026-04-03 2026 | What Is Prompt Injection in AI? Examples & Prevention | EC-Council | What Is Prompt Injection in AI? Examples & Prevention | EC-Council |
| 2026-04-03 2026 | Prompt Injection Attacks in 2025: Risks, Defenses & Testing | Prompt Injection Attacks in 2025: Risks, Defenses & Testing |
| 2026-04-03 2026 | Red Teaming the Mind of the Machine: Evaluation of Prompt Injection and Jailbreak Vulnerabilities | Red Teaming the Mind of the Machine: Evaluation of Prompt Injection and Jailbreak Vulnerabilities |
| 2026-04-03 2026 | Practical LLM Security Advice from the NVIDIA AI Red Team | Practical LLM Security Advice from the NVIDIA AI Red Team |
| 2026-04-03 2026 | OWASP Top 10 for LLMs 2025 | DeepTeam Red Teaming Framework | OWASP Top 10 for LLMs 2025 | DeepTeam Red Teaming Framework |
| 2026-04-03 2026 | Continuously Hardening ChatGPT Against Prompt Injection | OpenAI | Continuously Hardening ChatGPT Against Prompt Injection | OpenAI |
| 2026-04-03 2026 | Red Teaming LLMs Exposes a Harsh Truth About the AI Security Arms Race | Red Teaming LLMs Exposes a Harsh Truth About the AI Security Arms Race |
| 2026-04-03 2026 | LLM01:2025 Prompt Injection | OWASP Gen AI Security | LLM01:2025 Prompt Injection | OWASP Gen AI Security |
| 2026-04-03 2026 | 12 Months That Changed Supply Chain Security - 2025 Month by Month | 12 Months That Changed Supply Chain Security - 2025 Month by Month |
| 2026-04-03 2026 | Securing the Software Supply Chain: OpenSSF, SLSA, SBOM, and Sigstore | Securing the Software Supply Chain: OpenSSF, SLSA, SBOM, and Sigstore |
| 2026-04-03 2026 | OWASP Top 10 2025: A03 Software Supply Chain Failures (Beginner's Guide) | OWASP Top 10 2025: A03 Software Supply Chain Failures (Beginner's Guide) |
| 2026-04-03 2026 | SLSA Framework: The Definitive Guide for Securing Your Software Supply Chain | SLSA Framework: The Definitive Guide for Securing Your Software Supply Chain |
| 2026-04-03 2026 | Five Key Flaws Exploited in 2025's Software Supply Chain Incidents | Five Key Flaws Exploited in 2025's Software Supply Chain Incidents |
| 2026-04-03 2026 | Predictions for Open Source Security in 2025 | OpenSSF | Predictions for Open Source Security in 2025 | OpenSSF |
| 2026-04-03 2026 | Supply Chain Attacks in Q4 2025: From Isolated Incidents to Systemic Failure Modes | Supply Chain Attacks in Q4 2025: From Isolated Incidents to Systemic Failure Modes |
| 2026-04-03 2026 | Supply Chain Security in CI: SBOMs, SLSA, and Sigstore | Supply Chain Security in CI: SBOMs, SLSA, and Sigstore |
| 2026-04-03 2026 | SLSA - Supply-chain Levels for Software Artifacts | SLSA - Supply-chain Levels for Software Artifacts |
| 2026-04-03 2026 | A03 Software Supply Chain Failures - OWASP Top 10:2025 | A03 Software Supply Chain Failures - OWASP Top 10:2025 |
| 2026-04-03 2026 | What is Supply Chain Security? | Glossary | Supply chain security focuses on risk management of external suppliers, vendors, logistics, and transportation. |
| 2026-02-23 2026 | ottosulin/awesome-ai-security: A collection of awesome resources related AI security | The content is a collection of resources related to AI security compiled by ottosulin. It is available on the GitHub repository ottosulin/awesome-ai-security. The repository likely contains a curated list of tools, articles, research papers, and other materials focused on enhancing security in the field of artificial intelligence. |
| 2025-08-22 2025 | Model Context Protocol (MCP): Understanding security risks and controls | The Model Context Protocol (MCP) is a protocol developed by Anthropic that outlines the process of connecting large language models (LLMs) with external tools. It serves as a powerful tool for understanding security risks and implementing controls when integrating LLMs with other systems. |
Frequently Asked Questions
- What is prompt injection?
- Prompt injection is an attack against applications that use large language models (LLMs). An attacker crafts input that overrides or manipulates the LLM's system instructions, causing it to perform unintended actions. Direct prompt injection targets the user input; indirect prompt injection embeds malicious instructions in data the LLM processes, such as emails or web pages.
- What is the OWASP Top 10 for LLM Applications?
- The OWASP Top 10 for LLM Applications identifies the most critical security risks for AI-powered applications, including prompt injection, insecure output handling, training data poisoning, model denial of service, supply chain vulnerabilities, sensitive information disclosure, insecure plugin design, excessive agency, overreliance, and model theft.
- How do you secure AI-integrated applications?
- Key practices include validating and sanitizing LLM outputs before rendering or executing them, implementing least-privilege access for AI agents, using guardrails to constrain model behavior, monitoring for prompt injection attempts, applying rate limiting, separating AI processing from privileged operations, and treating all LLM output as untrusted user input.
Weekly AppSec Digest
Get new resources delivered every Monday.