TL;DR

  • Use a simple, repeatable pipeline: discover → probe → crawl → scan.
  • Store every stage as a file so you can diff, resume, and avoid re-running noisy steps.
  • Prefer file-based target lists over one huge chained command; intermediate files are easier to debug and safer to re-run.
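The "diff and resume" idea can be sketched with coreutils alone: `comm` on two sorted runs prints only the newly discovered entries. A minimal sketch (the sample hostnames and the `subs-old.txt`/`subs-new.txt` file names are illustrative, not from the pipeline):

```shell
# Two runs of subdomain discovery, sorted (comm requires sorted input).
printf 'a.example.com\nb.example.com\n' | sort > subs-old.txt
printf 'a.example.com\nb.example.com\nc.example.com\n' | sort > subs-new.txt

# -13: suppress lines unique to the old run (-1) and lines common to
# both (-3), leaving only hosts that are new in the latest run.
comm -13 subs-old.txt subs-new.txt > subs-diff.txt

cat subs-diff.txt
```

Feeding only `subs-diff.txt` into the later stages lets you probe and scan just what changed since the last run.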

Pipeline

1) Subdomains: subfinder

subfinder -d example.com -all -silent | tee subs.txt

2) Live HTTP endpoints: httpx

cat subs.txt | httpx -silent -follow-redirects -threads 100 -timeout 10 | tee alive.txt

Useful flags (pick what you need):

cat subs.txt | httpx -silent -status-code -title -tech-detect -web-server -ip -follow-redirects | tee alive-meta.txt
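If you collect the metadata variant, you can still recover a plain URL list for the crawl stage. This assumes httpx's usual output shape of the URL first followed by bracketed fields (the exact format can vary by version, so treat the sample line as an assumption):

```shell
# Assumed httpx metadata format: <url> [status] [title] [server]
echo 'https://app.example.com [200] [Login] [nginx]' > alive-meta.txt

# The first whitespace-separated field is the bare URL.
awk '{print $1}' alive-meta.txt > alive.txt

cat alive.txt
```

This way one httpx run serves both purposes: `alive-meta.txt` for triage, `alive.txt` for katana and nuclei.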

3) Crawl URLs: katana (or gau for archives)

cat alive.txt | katana -silent -jc -jsl -kf -d 3 -c 20 -p 20 -timeout 10 | tee urls.txt

Archive URLs:

cat subs.txt | gau --subs | tee urls-archive.txt
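Archive sources often return out-of-scope or third-party hosts, so it's worth filtering before merging. A conservative grep sketch, assuming `example.com` is the program's root domain (adjust the pattern per target):

```shell
printf 'https://a.example.com/x\nhttps://cdn.thirdparty.net/y\n' > urls-archive.txt

# Keep only URLs whose host is example.com or a subdomain of it.
grep -E '^https?://([^/]+\.)?example\.com(/|$)' urls-archive.txt > urls-archive-scoped.txt

cat urls-archive-scoped.txt
```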

Combine + dedupe:

cat urls.txt urls-archive.txt | sort -u | tee urls-all.txt
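Note that `sort -u` only removes byte-identical lines; URLs that differ by a fragment or trailing slash survive as near-duplicates. A minimal normalization pass before dedupe (kept deliberately conservative, since aggressive rewriting can merge genuinely distinct endpoints):

```shell
printf 'https://x.example.com/a/\nhttps://x.example.com/a\nhttps://x.example.com/a#top\n' > urls-all.txt

# Strip URL fragments, then a trailing slash, then dedupe.
sed -e 's/#.*$//' -e 's#/$##' urls-all.txt | sort -u > urls-clean.txt

cat urls-clean.txt
```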

4) Scan: nuclei

Start conservatively; the low rate limit (-rl 10) and modest concurrency keep you from hammering targets:

nuclei -l alive.txt -severity low,medium,high,critical -rl 10 -c 20 -timeout 10 -silent | tee nuclei.txt

Targeted templates (recommended):

nuclei -l alive.txt -t http/exposures/ -t http/misconfiguration/ -rl 10 -c 20 -timeout 10 -silent | tee nuclei-targeted.txt

Hygiene (makes a huge difference)

  • Keep a folder per target: targets/<program>/<date>/
  • Always save: subs.txt, alive.txt, urls-all.txt, nuclei.txt
  • Don’t crank rate limits to “max”; you’ll just get blocked and waste time.
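The folder-per-target convention is easy to scaffold up front so every run lands in a dated workspace with the expected artifact files. A minimal sketch (the program name is a hypothetical placeholder; the file names match the ones used throughout this pipeline):

```shell
# Hypothetical program handle; substitute your target's name.
program="example-program"
run_dir="targets/${program}/$(date +%Y-%m-%d)"

mkdir -p "$run_dir"

# Pre-create the artifacts every run should produce, so a missing
# file after a run immediately flags a failed stage.
for f in subs.txt alive.txt urls-all.txt nuclei.txt; do
  : > "$run_dir/$f"
done

echo "workspace: $run_dir"
```

With this layout, diffing two dates is just `comm -13 targets/<program>/<old>/subs.txt targets/<program>/<new>/subs.txt`.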