Subdomain Discovery

Google Dorks & Bing Dorks

# Google dorks are useful for finding new subdomains (example.com is a placeholder for the target)
site:*.example.com -www -store -jobs -uk

# Bing also supports dorks and can return different results (see the Bing example after this block)

# When you use the Google Dork:  site:*, NEVER forget to check
# GitHub dorks also allow you to find many subdomains (search the target domain next to sensitive keywords)
"example.com" password
# ...
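# A minimal Bing dork along the same lines; example.com is a placeholder and Bing honours the site: operator too
site:example.com -site:www.example.com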

Online DNS tools and services

# VirusTotal runs its own passive DNS replication service

# DNSDumpster can also find a large number of subdomains
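
# Sketch: pulling observed subdomains from VirusTotal's v3 API (assumes you have an API key in $VT_API_KEY; example.com is a placeholder)
curl -s -H "x-apikey: $VT_API_KEY" "https://www.virustotal.com/api/v3/domains/example.com/subdomains?limit=40" | jq -r '.data[].id'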

Certificate Transparency

# A project started by Google to log and audit the SSL/TLS certificates issued by every CA
# Interesting because you can query these logs with various tools and mine them for subdomains

# crt.sh also provides public access to its CT log data through a PostgreSQL interface
psql -h crt.sh -p 5432 -U guest certwatch
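
# A quick alternative is crt.sh's JSON output over HTTPS (example.com is a placeholder; %25 is a URL-encoded % wildcard)
curl -s "https://crt.sh/?q=%25.example.com&output=json" | jq -r '.[].name_value' | sort -u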

# Appsecco wrote some scripts to automate these queries

# You can pipe the discovered names into massdns to separate resolvable subdomains from non-resolvable ones
./ | ./bin/massdns -r resolvers.txt -t A -q -a -o -w icann_resolvable_domains.txt -

# Another tool is ct-exposer
python3 ct-exposer.py -d example.com

DNS Enumeration, Bruteforce and Passive Tools

# DNSRecon is a powerful enumeration tool
dnsrecon -h
dnsrecon -d example.com

# You can do dictionary enumeration
# -n : nameserver to use
# -t : type (brt = brute force domains and hosts using a given dictionary)
dnsrecon -n ns1.example.com -d example.com -D subdomains-top1mil-5000.txt -t brt

# dnsenum is another tool
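
# A minimal dnsenum run, assuming example.com as the target (add -f <wordlist> for brute forcing)
dnsenum example.com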

# Subdomains bruteforce
python -d

# DNS enumeration and information gathering using fierce (not really passive)
fierce -dns example.com

# Aquatone is also a great tool
# Subfinder
# Subdomain discovery tool that discovers valid subdomains for websites by using passive online sources.
$ subfinder -d example.com -o output.txt

$ subfinder -dL domains.txt -oD ~/path/to/output

$ cat domains.txt | subfinder
# Findomain

# Install
$ git clone https://github.com/findomain/findomain
$ cd findomain
$ cargo build --release
$ sudo cp target/release/findomain /usr/bin/
$ findomain

$ findomain -t example.com
# You can also use ffuf in order to brute force subdomains

# Option 1
# Fuzzing the Host header while pointing requests at the main host
$ ffuf -u https://example.com -w my_wordlist -H "Host: FUZZ.example.com"

# Option 2
# Fuzzing the hostname directly, but you'll have to wait for DNS timeouts on names that don't resolve
$ ffuf -u https://FUZZ.example.com -w my_wordlist
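
# When every candidate returns the same default page, filter it out by response size; a sketch assuming a first run showed that default response to be 4242 bytes
$ ffuf -u https://example.com -w my_wordlist -H "Host: FUZZ.example.com" -fs 4242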

Linked Discovery (using BurpSuite Pro)

# Using the spider, you can find lots of subdomains/roots by browsing the website

# Turn off passive scanning
# Set scope to advanced control and use a "keyword" from the target name (not a normal FQDN), e.g. "twitch"
# Browse the site, then spider all hosts recursively

# Export
# Select domains in the burp tree
# "Engagement Tools" --> "Analyze Target"
# Save report as HTML
# Copy hosts from the HTML


# OWASP Amass tool suite is used to build a network map of the target
# For subdomain enumeration it relies on scraping data sources, recursive brute forcing, crawling web archives, permuting/altering names and reverse DNS sweeping

# Basic use (DNS lookups and name alterations)
amass -d example.com

# Brute force subdomain enumeration, printing the data sources and IP addresses for discovered names
amass -src -ip -brute -d example.com

# Passive mode
amass -passive -d example.com

# Importing Amass results into Maltego
amass -src -ip --active -brute -do owasp.json -d example.com
amass.viz -i owasp.json --maltego owasp.csv
→ Then import the CSV file into Maltego


# SubDomainizer is a tool designed to find hidden subdomains and secrets present in
# a web page, on GitHub, or in the external JavaScript files referenced by a given URL.
# It also extracts S3 buckets, CloudFront URLs and more from those JS files, which can be
# interesting when an S3 bucket is open for read/write, or a subdomain/CloudFront takeover is possible

# To find subdomains, s3 buckets, and cloudfront URL's for given single URL:
python3 SubDomainizer.py -u https://example.com

# To find subdomains from given list of URL (file given):
python3 SubDomainizer.py -l list.txt

# To save the results in (output.txt) file:
python3 SubDomainizer.py -u https://example.com -o output.txt

# To give cookies:
python3 SubDomainizer.py -u https://example.com -c "test=1; test=2"

# To scan via github:
python3 SubDomainizer.py -u https://example.com -o output.txt -gt <github_token> -g

# No SSL Certificate Verification:
python3 SubDomainizer.py -u https://example.com -o output.txt -gt <github_token> -g -k


# Knockpy is a python tool designed to enumerate subdomains on a target 
# domain through a wordlist. It is designed to scan for DNS zone transfer 
# and to try to bypass the wildcard DNS record automatically if it is enabled. 
# Knockpy now supports VirusTotal subdomain queries; you can set
# the API_KEY within the config.json file.

# Classic scan with the internal wordlist
knockpy example.com

# Use a specific wordlist
knockpy example.com -w wordlist.txt

# Resolve IP (accepts a domain or an IP address)
knockpy -r example.com

# Save output to CSV
knockpy -c example.com

# Export to JSON
knockpy -j example.com

Permutation Scanning

# Another technique is to use permutations, alterations and mutations of known subdomains to find new ones
# Tools like altdns can do that
# -i : file of already known subdomains used as input
# -o : output file for the massive list of permuted subdomains
# -w : words used for the permutations
# -r : resolve each newly generated subdomain
# -s : output file for the resolved permuted subdomains
altdns -i known_subdomains.txt -o data_output -w icann.words -r -s results_output.txt

Autonomous System (AS) Numbers

# You can use ASN to identify netblocks belonging to a company

# Resolve the IP of a given domain, using dig or host

# Then look up the associated ASN (given an IP address or a domain name)
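# A sketch of both steps, assuming example.com as the target; Team Cymru's whois service maps an IP to its ASN (203.0.113.10 is a placeholder IP)
dig +short example.com
whois -h whois.cymru.com " -v 203.0.113.10"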

# Then you can use nmap NSE scripts to find netblocks for an ASN
nmap --script targets-asn --script-args targets-asn.asn=17012 > netblocks.txt

# You will get network ranges that you can investigate further

Zone Transfer

# DNS transaction where a DNS server gives a full/partial copy of its zone file to another DNS server
# If badly configured, anyone can initiate a zone transfer and thus access the zone file

dig +multi AXFR example.com @ns1.example.com
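
# A small loop, assuming example.com as the target: list its authoritative nameservers, then try AXFR against each
for ns in $(dig +short NS example.com); do dig +noall +answer AXFR example.com @"$ns"; done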

Subdomain Takeover

# Subjack is a tool used to automate subdomain takeover discovery
./subjack -w subdomains.txt -t 100 -timeout 30 -o results.txt -ssl
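
# A quick manual check, assuming sub.example.com is a candidate: if its CNAME points to a deprovisioned service (unclaimed S3 bucket, GitHub Pages site, etc.), it may be vulnerable to takeover
dig +short CNAME sub.example.com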

DNSSEC Walking

# To prove that things do not exist, DNSSEC lists all the things that do exist
# If the list is proven, then everything not listed must not exist
# This list is created by the NSEC (or NSEC3) records

# Interesting side effect: it allows anyone to list the zone's content by following the linked list of NSEC records
→ zone walking

# Tools like ldns-walk can exploit this
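# A minimal sketch, assuming example.com publishes NSEC records:
ldns-walk example.com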

# Some DNSSEC zones use NSEC3 → hashed domain names, to hinder zone walking
# It's still possible to collect the hashes and crack them offline
# Tools like nsec3walker can do this (icann.org is used as an example target)
./collect icann.org > icann.org.collect
./unhash < icann.org.collect > icann.org.unhash
cat icann.org.unhash | grep "icann" | awk '{print $2;}'

Projects Collecting Data

# Some projects collect this data and provide it to the community
# The ForwardDNS dataset is part of Rapid7's Project Sonar
# This dataset is created by extracting domain names from a number of sources and then sending an ANY query for each domain
# You can download the dataset and parse it, but IT'S HUGE (20 GB compressed, 300+ GB uncompressed)
# It seems to be the best solution to find subdomains

# Stream the compressed dataset and grep for the target domain (the dataset URL is omitted here; .icann.org is used as an example)
curl -s <fdns_dataset_url> | pigz -dc | grep ".icann.org" | jq

# Or work from a local copy of the dataset
cat 20170417-fdns.json.gz | pigz -dc | grep ".icann.org" | jq
# SonarSearch is a project built around Rapid7's Sonar database

# An API instance is available at

# Crobat is a CLI tool designed to query this API
$ crobat -h                                                                                                                                                                      
Usage of crobat:
  -r string
        Perform reverse lookup on IP address or CIDR range. Supports files and quoted lists
  -s string
        Get subdomains for this value. Supports files and quoted lists
  -t string
        Get tlds for this value. Supports files and quoted lists
  -u    Ensures results are unique, may cause instability on large queries due to RAM requirements
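
# Example queries based on the flags above (example.com and the CIDR range are placeholders)
$ crobat -s example.com
$ crobat -t example.com
$ crobat -r 203.0.113.0/24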

ASN/Burp example

# Get ASN for your target

# Then you can use amass to find top-level domains
amass intel -active -asn 361XXXX

# Then look on crunchbase for acquisitions

# Link discovery in Burp
# The BurpSuite HTTP history helps too
# Set the target, go to the main website, then look for requests made to different domains
# Select them all and send them to the spider --> discover new ones
# Doing it from a VPN can be better to avoid IP ban

# Reverse whois / By owner

# Test HTTP on the subdomains
# To get the domains that are running web servers
cat subdomains_list | httprobe

# Using aquatone to get screenshots of discovered sites
cat subdomains_servers | aquatone

# OpenList Firefox extension can help to visualize websites

# Content Discovery
# Ffuf is one of the fastest web fuzzers, but it is not recursive
# Dirsearch is good for recursive scanning
ffuf -u https://site.com/FUZZ -w wordlist
ffuf -u https://site.com/FUZZ/HERE -w wordlist
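
# A minimal dirsearch run for recursive content discovery (the URL, extensions and wordlist are placeholders)
python3 dirsearch.py -u https://example.com -e php,html,js -w wordlist.txt -r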