appsec.fyi

How It Works

The end-to-end pipeline behind appsec.fyi — from content discovery to your browser.

5,300+ Resources
25 Topics
2x daily Auto-builds
0 Frameworks

Who Curates This?

I'm Carl Sampson, an application security engineer focused on vulnerability research, web security, and building tools that make AppSec easier. I founded the OWASP Indianapolis Chapter in 2005, have spoken at DerbyCon and CircleCityCon, and have done security work at Microsoft, Proofpoint, Salesforce, Teradata, and Anthem. I publish open-source tools, CVE disclosures, and technical writeups on topics like Content Security Policy, memory safety, and SSRF — find me on LinkedIn, GitHub, or X.

The Pipeline

Every resource on appsec.fyi flows through a fully automated pipeline. No CMS, no framework, no manual HTML editing. Just Python scripts, a SQLite database, and cron jobs.

1
Discover
I find articles, tools, writeups, and talks across the web — from Twitter, RSS feeds, Hacker News, Reddit, and security mailing lists. When something is worth saving, I bookmark it and file it under one of 25 topic collections. A Chrome extension lets me do this in one click without leaving the page; visitors can also propose links via the Submit page.
2
Sync & Summarize
Twice a day, a cron job pulls new bookmarks into a local SQLite database. Each new resource gets a concise summary generated by a language model — no more than 100 words, focused on what the resource covers and why it's useful. A separate classifier tags each item with a difficulty level (beginner / intermediate / advanced / news) and the primary AppSec tool it relates to, when applicable. An automated relevance filter hides off-topic entries before they ever reach the site. Duplicates are caught by URL deduplication.
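The sync step above can be sketched in a few lines. This is an illustrative version, not the site's actual code: the table name, columns, and the idea of using a UNIQUE constraint on the URL for deduplication are assumptions consistent with the description.

```python
import sqlite3

def sync(conn: sqlite3.Connection, bookmarks: list[dict]) -> int:
    """Insert newly pulled bookmarks; a UNIQUE url column makes re-runs
    idempotent, so duplicate URLs are silently skipped (hypothetical schema)."""
    conn.execute(
        """CREATE TABLE IF NOT EXISTS resources (
               id INTEGER PRIMARY KEY,
               url TEXT UNIQUE NOT NULL,
               title TEXT,
               summary TEXT,       -- filled in later by the LLM summarizer
               difficulty TEXT,    -- beginner / intermediate / advanced / news
               hidden INTEGER DEFAULT 0
           )"""
    )
    added = 0
    for b in bookmarks:
        cur = conn.execute(
            "INSERT OR IGNORE INTO resources (url, title) VALUES (?, ?)",
            (b["url"], b["title"]),
        )
        added += cur.rowcount  # rowcount is 0 when the URL already existed
    conn.commit()
    return added
```

Running the same batch twice inserts nothing the second time, which is what makes a twice-daily cron job safe to re-run.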
3
Build
Twice a day, a build script generates the entire site from the database. Every page is static HTML — no JavaScript frameworks, no client-side rendering, no build tools. The build also produces RSS feeds, JSON endpoints, a server-side full-text search index (SQLite FTS5), the changelog, an interactive topic graph, per-publisher pages, a tools-by-tool index, embeddable iframe widgets, llms.txt, and structured data for search engines. Everything is pre-compressed with gzip and brotli, then served by nginx with HTTP/2 and HTTP/3.
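The server-side search index mentioned above can be built with a few SQL statements. A minimal sketch, assuming a resources table with title, summary, url, and hidden columns (names are illustrative, not the site's real schema):

```python
import sqlite3

def build_search_index(conn: sqlite3.Connection) -> None:
    """Rebuild the FTS5 full-text index from visible resources."""
    conn.execute("DROP TABLE IF EXISTS search")
    conn.execute(
        "CREATE VIRTUAL TABLE search USING fts5(title, summary, url UNINDEXED)"
    )
    conn.execute(
        "INSERT INTO search (title, summary, url) "
        "SELECT title, summary, url FROM resources WHERE hidden = 0"
    )
    conn.commit()

def search(conn: sqlite3.Connection, query: str) -> list[str]:
    """Return matching URLs, best match first (FTS5's built-in rank)."""
    rows = conn.execute(
        "SELECT url FROM search WHERE search MATCH ? ORDER BY rank", (query,)
    )
    return [r[0] for r in rows]
```

Rebuilding the index from scratch on every build keeps it trivially consistent with the database, which is cheap at this scale.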
4
Distribute
The site is the primary output, but content also flows to a weekly email newsletter, RSS feeds (main + per-topic + changelog), JSON endpoints that power a Discord bot and an iOS app, social posts to X via a daily auto-poster, and search engines via IndexNow pings after every build.
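The IndexNow ping after each build could look like the sketch below. The endpoint and JSON shape follow the public IndexNow protocol; the key value is a placeholder, and whether the site batches URLs exactly this way is an assumption.

```python
import json
import urllib.request

def indexnow_payload(urls: list[str], key: str, host: str = "appsec.fyi") -> bytes:
    """Build the JSON body defined by the IndexNow protocol."""
    return json.dumps({"host": host, "key": key, "urlList": urls}).encode("utf-8")

def ping_indexnow(urls: list[str], key: str, host: str = "appsec.fyi") -> int:
    """POST new/changed URLs to the shared IndexNow endpoint."""
    req = urllib.request.Request(
        "https://api.indexnow.org/indexnow",
        data=indexnow_payload(urls, key, host),
        headers={"Content-Type": "application/json; charset=utf-8"},
    )
    with urllib.request.urlopen(req, timeout=10) as resp:
        return resp.status  # the API answers 200 or 202 on success
```

One batched POST per build is enough; participating engines (Bing, Yandex, and others) share submissions with each other.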

What Gets Built

Each cron build runs many steps in sequence, producing everything listed in the Build step above: pages, feeds, JSON endpoints, the search index, and the sitemap.

Automation Schedule

Midnight & noon: Pull new bookmarks, run summaries + difficulty/tool classifiers, then full site rebuild (all pages, feeds, endpoints, sitemap)
Daily 8 AM: Auto-post two random evergreen links to the @appsecfyi social account
Monday 9 AM: Weekly email digest sent to subscribers
Every 5 minutes: Notify owner of pending user-submitted links awaiting moderation
1st of month: Broken link check; dead links are automatically hidden after two consecutive failures
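The "hidden after two consecutive failures" rule from the monthly link check is a small piece of state-machine logic. A minimal sketch, where the record shape and field names are assumptions rather than the real pipeline's:

```python
def update_link_status(record: dict, alive: bool, threshold: int = 2) -> dict:
    """Apply one monthly check result to a resource record (hypothetical
    fields). Any successful check resets the failure streak, so a flaky
    server has to fail twice in a row before its link is hidden."""
    if alive:
        record["fail_count"] = 0
    else:
        record["fail_count"] = record.get("fail_count", 0) + 1
    record["hidden"] = record["fail_count"] >= threshold
    return record
```

Requiring consecutive failures avoids hiding resources over a one-off outage, at the cost of dead links surviving one extra month.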

Quality Control

Not everything that gets bookmarked makes it to the site. Several layers of quality control run automatically: the relevance filter hides off-topic entries, URL deduplication catches repeats, user-submitted links sit in a moderation queue until approved, and the monthly broken-link check hides dead links after two consecutive failures.

The Stack

Language: Python
Database: SQLite + FTS5
Web Server: nginx
Search API: Flask
Visualization: D3.js
Mobile App: iOS (SwiftUI)
Compression: gzip + brotli
Hosting: VPS + Let's Encrypt

The entire site is static HTML. No React, no Next.js, no Tailwind, no build tools. Pages load fast: critical CSS is inlined, the rest preloads, all assets carry content-hash cache-busters with one-year immutable caching, and HTML/CSS/JS are all pre-compressed at build time. The server runs on a single small VPS with HTTP/2, HTTP/3 (QUIC), and pre-compressed static files.
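An nginx setup matching that description might look roughly like the fragment below. This is an illustrative sketch, not the site's actual config: gzip_static is a stock nginx module, brotli_static requires the third-party ngx_brotli module, and the paths and asset-name pattern are placeholders.

```nginx
server {
    listen 443 ssl;
    listen 443 quic reuseport;  # HTTP/3 (QUIC)
    http2 on;

    root /var/www/appsec.fyi;   # placeholder path

    gzip_static on;             # serve .gz files pre-compressed at build time
    brotli_static on;           # serve .br files (needs ngx_brotli)

    # content-hashed assets: safe to cache for a year, marked immutable
    location ~* \.[0-9a-f]{8,}\.(css|js|png|svg)$ {
        add_header Cache-Control "public, max-age=31536000, immutable";
    }
}
```

Because every asset filename embeds a content hash, a changed file gets a new URL, so the one-year immutable cache never serves stale content.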

Why Build It This Way?

Most resource collections are either manually maintained wikis that go stale, or algorithmically generated link farms with no curation. appsec.fyi tries to be neither — human-curated content with automated infrastructure.

The pipeline is designed so that adding a new resource takes seconds (bookmark it), while everything else — summarization, page generation, distribution — happens automatically. The result is a site that stays fresh without requiring daily maintenance.