appsec.fyi

AI Resources

Post Share

A curated AppSec resource library covering XSS, SQLi, SSRF, IDOR, RCE, XXE, OSINT, and more.

AI

AI security encompasses both protecting AI systems from attack and understanding the new vulnerability classes that AI introduces into applications. As organizations rapidly integrate large language models (LLMs), machine learning pipelines, and AI-powered features into their products, the attack surface has expanded in ways that traditional application security frameworks don't fully address.

Key threats to AI systems include prompt injection — where attackers manipulate LLM behavior through crafted inputs — data poisoning of training datasets, model extraction through repeated API queries, and adversarial examples that cause misclassification. Indirect prompt injection, where malicious instructions are embedded in data the AI processes (emails, documents, web pages), is emerging as one of the most significant security challenges for AI-integrated applications.

AI also introduces new categories of application risk: insecure output handling where LLM responses are rendered unsafely, excessive agency when AI agents are given too much access, sensitive information disclosure through training data leakage, and supply chain risks from fine-tuned models and third-party plugins. The OWASP Top 10 for LLM Applications provides a structured framework for understanding these risks.

On the defensive side, AI is being used to enhance security operations — automating vulnerability detection, analyzing malicious patterns, and accelerating incident response.

This page collects AI security research, LLM vulnerability techniques, defensive strategies, and resources covering the intersection of artificial intelligence and application security.

Date Added Link Excerpt
2026-04-11 NEW 2026DPRK Threat Actor Compromises Axios NPM PackageDPRK Threat Actor Compromises Axios NPM Package
2026-04-11 NEW 202616 Minutes to Impact: npm crypto-draining malware16 Minutes to Impact: npm crypto-draining malware
2026-04-11 NEW 2026Widespread npm Supply Chain Attack: Billions at RiskWidespread npm Supply Chain Attack: Billions at Risk
2026-04-11 NEW 2026npm Supply Chain Attack: debug, chalk, and Beyondnpm Supply Chain Attack: debug, chalk, and Beyond
2026-04-11 NEW 2026The Nx s1ngularity Attack: Inside the Credential LeakThe Nx s1ngularity Attack: Inside the Credential Leak
2026-04-11 NEW 2026s1ngularity: Nx supply chain attack leaks secretss1ngularity: Nx supply chain attack leaks secrets
2026-04-11 NEW 2026CISA 2025 Minimum Elements for SBOMCISA 2025 Minimum Elements for SBOM
2026-04-11 NEW 2026SLSA 3 Compliance with GitHub Actions and SigstoreSLSA 3 Compliance with GitHub Actions and Sigstore
2026-04-11 NEW 2026cosign Verification of npm Provenance and GitHub Attestationscosign Verification of npm Provenance and GitHub Attestations
2026-04-11 NEW 2026Securing CI/CD After tj-actions and reviewdog AttacksSecuring CI/CD After tj-actions and reviewdog Attacks
2026-04-11 NEW 2026GitHub Actions Supply Chain Attack: Coinbase to tj-actionsGitHub Actions Supply Chain Attack: Coinbase to tj-actions
2026-04-11 NEW 2026tj-actions/changed-files supply chain attacktj-actions/changed-files supply chain attack
2026-04-11 NEW 2026tj-actions/changed-files compromise (CVE-2025-30066)tj-actions/changed-files compromise (CVE-2025-30066)
2026-04-11 NEW 2026XZ Backdoor CVE-2024-3094 - JFrogXZ Backdoor CVE-2024-3094 - JFrog
2026-04-11 NEW 2026xz Backdoor CVE-2024-3094 - OpenSSFxz Backdoor CVE-2024-3094 - OpenSSF
2026-04-11 NEW 2026XZ Utils backdoor (CVE-2024-3094) overviewXZ Utils backdoor (CVE-2024-3094) overview
2026-04-11 NEW 2026Ultralytics PyPI package delivers coinminerUltralytics PyPI package delivers coinminer
2026-04-11 NEW 2026Supply-chain attack analysis: UltralyticsSupply-chain attack analysis: Ultralytics
2026-04-11 NEW 2026GitLab discovers widespread npm supply chain attackGitLab discovers widespread npm supply chain attack
2026-04-11 NEW 2026Shai-Hulud: Self-Replicating Worm Compromises 500+ NPM PackagesShai-Hulud: Self-Replicating Worm Compromises 500+ NPM Packages
2026-04-11 NEW 2026Shai-Hulud npm supply chain attack overviewShai-Hulud npm supply chain attack overview
2026-04-11 NEW 2026Shai-Hulud Worm Compromises npm EcosystemShai-Hulud Worm Compromises npm Ecosystem
2026-04-11 NEW 2026Shai-Hulud 2.0: 25K+ Repos ExposedShai-Hulud 2.0: 25K+ Repos Exposed
2026-04-11 NEW 2026Shai-Hulud 2.0: Detection and Defense GuidanceShai-Hulud 2.0: Detection and Defense Guidance
2026-04-11 NEW 2026Shai-Hulud 2.0 npm worm: analysisShai-Hulud 2.0 npm worm: analysis
2026-04-11 NEW 2026LLM Red Teaming Guide (Open Source) - PromptfooLLM Red Teaming Guide (Open Source) - Promptfoo
2026-04-11 NEW 2026Defining LLM Red Teaming - NVIDIA Technical BlogDefining LLM Red Teaming - NVIDIA Technical Blog
2026-04-11 NEW 2026Large Reasoning Models are Autonomous Jailbreak AgentsLarge Reasoning Models are Autonomous Jailbreak Agents
2026-04-11 NEW 2026Involuntary Jailbreak: On Self-Prompting AttacksInvoluntary Jailbreak: On Self-Prompting Attacks
2026-04-11 NEW 2026Single Line of Code Can Jailbreak 11 AI Models Including ChatGPT, Claude, GeminiSingle Line of Code Can Jailbreak 11 AI Models Including ChatGPT, Claude, Gemini
2026-04-11 NEW 2026OWASP Top 10 for LLMs 2025: Key Risks and Mitigation StrategiesOWASP Top 10 for LLMs 2025: Key Risks and Mitigation Strategies
2026-04-11 NEW 2026OWASP Top 10 for LLM Applications 2025OWASP Top 10 for LLM Applications 2025
2026-04-11 NEW 2026Practical Poisoning Attacks against Retrieval-Augmented GenerationPractical Poisoning Attacks against Retrieval-Augmented Generation
2026-04-11 NEW 2026RAG Safety: Exploring Knowledge Poisoning Attacks to RAGRAG Safety: Exploring Knowledge Poisoning Attacks to RAG
2026-04-11 NEW 2026Benchmarking Poisoning Attacks against Retrieval-Augmented GenerationBenchmarking Poisoning Attacks against Retrieval-Augmented Generation
2026-04-11 NEW 2026Q4 2025 AI Agent Security TrendsQ4 2025 AI Agent Security Trends
2026-04-11 NEW 2026OWASP GenAI Top 10 Risks and Mitigations for Agentic AI SecurityOWASP GenAI Top 10 Risks and Mitigations for Agentic AI Security
2026-04-11 NEW 2026AI Agent Attacks in Q4 2025 Signal New Risks for 2026AI Agent Attacks in Q4 2025 Signal New Risks for 2026
2026-04-11 NEW 2026Protecting Against Indirect Prompt Injection Attacks in MCPProtecting Against Indirect Prompt Injection Attacks in MCP
2026-04-11 NEW 2026Indirect Prompt Injection Attacks: Hidden AI RisksIndirect Prompt Injection Attacks: Hidden AI Risks
2026-04-11 NEW 2026Fooling AI Agents: Web-Based Indirect Prompt Injection Observed in the WildFooling AI Agents: Web-Based Indirect Prompt Injection Observed in the Wild
2026-04-11 NEW 2026Anatomy of an Indirect Prompt InjectionAnatomy of an Indirect Prompt Injection
2026-04-11 NEW 2026Critical RCE Vulnerability in mcp-remote: CVE-2025-6514Critical RCE Vulnerability in mcp-remote: CVE-2025-6514
2026-04-11 NEW 2026New Prompt Injection Attack Vectors Through MCP SamplingNew Prompt Injection Attack Vectors Through MCP Sampling
2026-04-11 NEW 2026A Timeline of Model Context Protocol (MCP) Security BreachesA Timeline of Model Context Protocol (MCP) Security Breaches
2026-04-11 NEW 2026The Vulnerable MCP Project: Comprehensive MCP Security DatabaseThe Vulnerable MCP Project: Comprehensive MCP Security Database
2026-04-11 NEW 2026MCP Security: Critical Vulnerabilities Every CISO Must Address in 2025MCP Security: Critical Vulnerabilities Every CISO Must Address in 2025
2026-04-11 NEW 2026OWASP LLM Prompt Injection Prevention Cheat SheetOWASP LLM Prompt Injection Prevention Cheat Sheet
2026-04-11 NEW 2026Attention Tracker: Detecting Prompt Injection Attacks in LLMsAttention Tracker: Detecting Prompt Injection Attacks in LLMs
2026-04-11 NEW 2026How Microsoft Defends Against Indirect Prompt Injection AttacksHow Microsoft Defends Against Indirect Prompt Injection Attacks
2026-04-10 NEW 2026MCP Security Vulnerabilities: Prompt Injection and Tool PoisoningMCP Security Vulnerabilities: Prompt Injection and Tool Poisoning
2026-04-10 NEW 2026How Agentic Tool Chain Attacks Threaten AI Agent SecurityHow Agentic Tool Chain Attacks Threaten AI Agent Security
2026-04-10 NEW 20268,000+ MCP Servers Exposed: The Agentic AI Security Crisis of 20268,000+ MCP Servers Exposed: The Agentic AI Security Crisis of 2026
2026-04-10 NEW 2026Agentic AI Security in Production: MCP, Memory Poisoning, Tool MisuseAgentic AI Security in Production: MCP, Memory Poisoning, Tool Misuse
2026-04-10 NEW 2026Offensive Security for MCP Servers: How to Prevent AI Agent ExploitsOffensive Security for MCP Servers: How to Prevent AI Agent Exploits
2026-04-10 NEW 2026The New AI Attack Surface: 3 AI Security Predictions for 2026The New AI Attack Surface: 3 AI Security Predictions for 2026
2026-04-10 NEW 2026Introduction to Data Poisoning: A 2026 PerspectiveIntroduction to Data Poisoning: A 2026 Perspective
2026-04-10 NEW 2026AI Security Research — December 2025AI Security Research — December 2025
2026-04-10 NEW 2026From Prompt Injections to Protocol Exploits in LLM Agent WorkflowsFrom Prompt Injections to Protocol Exploits in LLM Agent Workflows
2026-04-10 NEW 2026LLM Security Guide: OWASP GenAI Top-10 RisksLLM Security Guide: OWASP GenAI Top-10 Risks
2026-04-10 NEW 2026Supply Chain Attacks Are Exploiting Our AssumptionsSupply Chain Attacks Are Exploiting Our Assumptions
2026-04-10 NEW 2026Protecting Your Software Supply Chain: Typosquatting and Dependency ConfusionProtecting Your Software Supply Chain: Typosquatting and Dependency Confusion
2026-04-10 NEW 2026LiteLLM PyPI Packages Compromised in TeamPCP Supply Chain AttacksLiteLLM PyPI Packages Compromised in TeamPCP Supply Chain Attacks
2026-04-10 NEW 2026Supply-Chain Attack Defense: Developer Host Machine HardeningSupply-Chain Attack Defense: Developer Host Machine Hardening
2026-04-10 NEW 2026TeamPCP Credential Infostealer Chain Attack Reaches Python's LiteLLMTeamPCP Credential Infostealer Chain Attack Reaches Python's LiteLLM
2026-04-10 NEW 2026Compromised dYdX npm and PyPI Packages Deliver Wallet StealersCompromised dYdX npm and PyPI Packages Deliver Wallet Stealers
2026-04-10 NEW 2026N. Korean Hackers Spread 1,700 Malicious Packages Across npm, PyPI, Go, RustN. Korean Hackers Spread 1,700 Malicious Packages Across npm, PyPI, Go, Rust
2026-04-10 NEW 2026The Next Wave of Supply Chain Attacks: NPM, PyPI, and Docker HubThe Next Wave of Supply Chain Attacks: NPM, PyPI, and Docker Hub
2026-04-10 NEW 2026PyPI, npm, and the New Frontline of Software Supply Chain AttacksPyPI, npm, and the New Frontline of Software Supply Chain Attacks
2026-04-10 NEW 2026Malicious PyPI and npm Packages Exploiting Dependencies in Supply Chain AttacksMalicious PyPI and npm Packages Exploiting Dependencies in Supply Chain Attacks
2026-04-10 NEW 2026Supply Chain Attack: How Attackers Weaponize SoftwareSupply Chain Attack: How Attackers Weaponize Software
2026-04-10 NEW 20262026 Supply Chain Security Report: Attack Analysis2026 Supply Chain Security Report: Attack Analysis
2026-04-10 NEW 2026Securing Software Supply Chains: 2026 PrioritiesSecuring Software Supply Chains: 2026 Priorities
2026-04-10 NEW 20262026 Software Supply Chain Report2026 Software Supply Chain Report
2026-04-10 NEW 2026Supply Chain Attacks 2025-2026: Axios, Shai-Hulud, and MoreSupply Chain Attacks 2025-2026: Axios, Shai-Hulud, and More
2026-04-10 NEW 2026Prompt Injection Attacks in LLMs: A Comprehensive ReviewPrompt Injection Attacks in LLMs: A Comprehensive Review
2026-04-10 NEW 2026Prompt Injection Attacks: Examples, Techniques, and DefencePrompt Injection Attacks: Examples, Techniques, and Defence
2026-04-10 NEW 2026Indirect Prompt Injection: The Hidden ThreatIndirect Prompt Injection: The Hidden Threat
2026-04-10 NEW 2026AI Agent Security in 2026: Prompt Injection and Memory PoisoningAI Agent Security in 2026: Prompt Injection and Memory Poisoning
2026-04-10 NEW 2026Prompt Injection Attacks in 2025: Vulnerabilities and DefensePrompt Injection Attacks in 2025: Vulnerabilities and Defense
2026-04-10 NEW 2026Prompt Injection: The Most Common AI Exploit in 2025Prompt Injection: The Most Common AI Exploit in 2025
2026-04-10 NEW 2026AI Prompt Injection Attacks: How They Work (2026)AI Prompt Injection Attacks: How They Work (2026)
2026-04-10 NEW 2026LLM Security Risks in 2026: Prompt Injection, RAG, and Shadow AILLM Security Risks in 2026: Prompt Injection, RAG, and Shadow AI
2026-04-06 NEW 2026How to Prevent OWASP Software Supply Chain FailuresHow to Prevent OWASP Software Supply Chain Failures
2026-04-06 NEW 2026Axios Compromise on npm Introduces Hidden Malicious PackageAxios Compromise on npm Introduces Hidden Malicious Package
2026-04-06 NEW 2026NPM Supply Chain Attacks Explained: Dependency Confusion Exploits and DefenseNPM Supply Chain Attacks Explained: Dependency Confusion Exploits and Defense
2026-04-06 NEW 2026Axios npm Package Compromised in Supply Chain AttackAxios npm Package Compromised in Supply Chain Attack
2026-04-06 NEW 2026The 2026 Guide to Software Supply Chain SecurityThe 2026 Guide to Software Supply Chain Security
2026-04-06 NEW 2026Best AI Security Tools in 2026Best AI Security Tools in 2026
2026-04-06 NEW 2026Navigating Amazon Bedrock's Multi-Agent ApplicationsNavigating Amazon Bedrock's Multi-Agent Applications
2026-04-06 NEW 2026OWASP Top 10 for Agents 2026OWASP Top 10 for Agents 2026
2026-04-06 NEW 2026Google Workspace's Continuous Approach to Mitigating Prompt InjectionGoogle Workspace's Continuous Approach to Mitigating Prompt Injection
2026-04-06 NEW 2026Prompt Injection Attacks in LLMs: What Developers Need to Know in 2026Prompt Injection Attacks in LLMs: What Developers Need to Know in 2026
2026-04-03 2026Prompt Injection Attacks in LLMs: Vulnerabilities, Exploitation & DefensePrompt Injection Attacks in LLMs: Vulnerabilities, Exploitation & Defense
2026-04-03 2026How AI Red Teaming Fixes Vulnerabilities in Your AI SystemsHow AI Red Teaming Fixes Vulnerabilities in Your AI Systems
2026-04-03 2026What Is Prompt Injection in AI? Examples & Prevention | EC-CouncilWhat Is Prompt Injection in AI? Examples & Prevention | EC-Council
2026-04-03 2026Prompt Injection Attacks in 2025: Risks, Defenses & TestingPrompt Injection Attacks in 2025: Risks, Defenses & Testing
2026-04-03 2026Red Teaming the Mind of the Machine: Evaluation of Prompt Injection and Jailbreak VulnerabilitiesRed Teaming the Mind of the Machine: Evaluation of Prompt Injection and Jailbreak Vulnerabilities
2026-04-03 2026Practical LLM Security Advice from the NVIDIA AI Red TeamPractical LLM Security Advice from the NVIDIA AI Red Team
2026-04-03 2026OWASP Top 10 for LLMs 2025 | DeepTeam Red Teaming FrameworkOWASP Top 10 for LLMs 2025 | DeepTeam Red Teaming Framework
2026-04-03 2026Continuously Hardening ChatGPT Against Prompt Injection | OpenAIContinuously Hardening ChatGPT Against Prompt Injection | OpenAI
2026-04-03 2026Red Teaming LLMs Exposes a Harsh Truth About the AI Security Arms RaceRed Teaming LLMs Exposes a Harsh Truth About the AI Security Arms Race
2026-04-03 2026LLM01:2025 Prompt Injection | OWASP Gen AI SecurityLLM01:2025 Prompt Injection | OWASP Gen AI Security
2026-04-03 202612 Months That Changed Supply Chain Security - 2025 Month by Month12 Months That Changed Supply Chain Security - 2025 Month by Month
2026-04-03 2026Securing the Software Supply Chain: OpenSSF, SLSA, SBOM, and SigstoreSecuring the Software Supply Chain: OpenSSF, SLSA, SBOM, and Sigstore
2026-04-03 2026OWASP Top 10 2025: A03 Software Supply Chain Failures (Beginner's Guide)OWASP Top 10 2025: A03 Software Supply Chain Failures (Beginner's Guide)
2026-04-03 2026SLSA Framework: The Definitive Guide for Securing Your Software Supply ChainSLSA Framework: The Definitive Guide for Securing Your Software Supply Chain
2026-04-03 2026Five Key Flaws Exploited in 2025's Software Supply Chain IncidentsFive Key Flaws Exploited in 2025's Software Supply Chain Incidents
2026-04-03 2026Predictions for Open Source Security in 2025 | OpenSSFPredictions for Open Source Security in 2025 | OpenSSF
2026-04-03 2026Supply Chain Attacks in Q4 2025: From Isolated Incidents to Systemic Failure ModesSupply Chain Attacks in Q4 2025: From Isolated Incidents to Systemic Failure Modes
2026-04-03 2026Supply Chain Security in CI: SBOMs, SLSA, and SigstoreSupply Chain Security in CI: SBOMs, SLSA, and Sigstore
2026-04-03 2026SLSA - Supply-chain Levels for Software ArtifactsSLSA - Supply-chain Levels for Software Artifacts
2026-04-03 2026A03 Software Supply Chain Failures - OWASP Top 10:2025A03 Software Supply Chain Failures - OWASP Top 10:2025
2026-04-03 2026What is Supply Chain Security? | GlossarySupply chain security focuses on risk management of external suppliers, vendors, logistics, and transportation.
2026-02-23 2026ottosulin/awesome-ai-security: A collection of awesome resources related AI securityThe content is a collection of resources related to AI security compiled by ottosulin. It is available on the GitHub repository ottosulin/awesome-ai-security. The repository likely contains a curated list of tools, articles, research papers, and other materials focused on enhancing security in the field of artificial intelligence.
2025-08-22 2025Model Context Protocol (MCP): Understanding security risks and controlsThe Model Context Protocol (MCP) is a protocol developed by Anthropic that outlines the process of connecting large language models (LLMs) with external tools. It serves as a powerful tool for understanding security risks and implementing controls when integrating LLMs with other systems.

Frequently Asked Questions

What is prompt injection?
Prompt injection is an attack against applications that use large language models (LLMs). An attacker crafts input that overrides or manipulates the LLM's system instructions, causing it to perform unintended actions. Direct prompt injection targets the user input; indirect prompt injection embeds malicious instructions in data the LLM processes, such as emails or web pages.
What is the OWASP Top 10 for LLM Applications?
The OWASP Top 10 for LLM Applications identifies the most critical security risks for AI-powered applications, including prompt injection, insecure output handling, training data poisoning, model denial of service, supply chain vulnerabilities, sensitive information disclosure, insecure plugin design, excessive agency, overreliance, and model theft.
How do you secure AI-integrated applications?
Key practices include validating and sanitizing LLM outputs before rendering or executing them, implementing least-privilege access for AI agents, using guardrails to constrain model behavior, monitoring for prompt injection attempts, applying rate limiting, separating AI processing from privileged operations, and treating all LLM output as untrusted user input.

Weekly AppSec Digest

Get new resources delivered every Monday.