# AI Security
AI security covers protecting AI systems, including their data and models, from threats such as data poisoning, model inversion, and adversarial attacks, using controls like data encryption, robust testing, and continuous monitoring. It also covers using AI to strengthen cybersecurity, for example by detecting malicious patterns and automating incident response. Key concerns include securing the entire AI lifecycle, governing AI use, and keeping AI systems compliant with regulatory policies and responsible AI principles.
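One of the defensive measures above, continuous monitoring against data poisoning, can be sketched with a toy screening step: score each training value and quarantine outliers before they reach the model. This is a minimal illustration using only the standard library; the function name and the z-score heuristic are illustrative assumptions, not a method from any of the linked resources.

```python
from statistics import mean, stdev

def flag_poisoning_candidates(values, z_threshold=3.0):
    """Return indices of values whose z-score exceeds the threshold.

    A toy stand-in for data-poisoning screening: real pipelines use
    per-feature or learned detectors, but the shape is the same --
    score each sample, quarantine the outliers for review.
    """
    mu, sigma = mean(values), stdev(values)
    if sigma == 0:  # all samples identical, nothing to flag
        return []
    return [i for i, v in enumerate(values)
            if abs(v - mu) / sigma > z_threshold]
```

A single extreme sample among uniform ones is flagged, e.g. `flag_poisoning_candidates([1.0] * 20 + [100.0])` returns `[20]`.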
| Item | Date Added | Link | Excerpt |
|---|---|---|---|
| 2610 | 2026-02-23 11:48:43 UTC | ottosulin/awesome-ai-security: A collection of awesome resources related AI security | A curated list maintained by ottosulin in the GitHub repository ottosulin/awesome-ai-security, collecting tools, articles, research papers, and other resources on AI security. |
| 1449 | 2025-08-22 01:57:30 UTC | Model Context Protocol (MCP): Understanding security risks and controls | The Model Context Protocol (MCP), developed by Anthropic, standardizes how large language models (LLMs) connect to external tools and systems. The article examines the security risks that arise from such integrations and the controls that can mitigate them. |
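One class of control discussed in the MCP context is gating model-requested tool calls before dispatch. Below is a minimal sketch, using only the standard library, of an allowlist plus argument-schema check that an MCP-style host might apply; the tool names, schemas, and function name are hypothetical, not part of the protocol itself.

```python
# Hypothetical allowlist: tool name -> set of permitted argument keys.
ALLOWED_TOOLS = {
    "read_file": {"path"},
    "search_docs": {"query"},
}

def guard_tool_call(name: str, args: dict) -> dict:
    """Validate a model-requested tool call before dispatching it.

    Rejects tools outside the allowlist and arguments outside the
    tool's declared schema -- two basic controls a host can apply
    between the model and its external tools.
    """
    if name not in ALLOWED_TOOLS:
        raise PermissionError(f"tool not allowed: {name}")
    unexpected = set(args) - ALLOWED_TOOLS[name]
    if unexpected:
        raise ValueError(f"unexpected arguments: {sorted(unexpected)}")
    return args
```

A conforming call such as `guard_tool_call("read_file", {"path": "notes.txt"})` passes through unchanged, while an unlisted tool or a smuggled extra argument raises before any dispatch happens.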