Domain & URLs

Getting Information

# Get information on an IP
whois <IP>

# Get the IP address associated with a domain
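# Under the hood this is a plain DNS lookup; a minimal sketch with dig
# (getent is a portable fallback), where example.com is a placeholder target:

```shell
# Resolve the A record(s) of a domain; example.com is a placeholder.
# dig ships with bind-utils; getent uses the system resolver directly.
domain="example.com"
dig +short "$domain" A
getent hosts "$domain" | awk '{print $1}'
```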

# Knockknock is a small automated script that finds domain names
# registered by a given registrant (person or company)
python3 knockknock.py -n company -d

# Many tools

Online Passive Identification Tools

Target Mapping and Information

# Information about the target

# Robtex is a great and complete tool

# Getting technology information

# Mapping the target website gives a good overview
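# Even without those services, response headers already leak part of the stack;
# a quick sketch with curl (the target URL is a placeholder):

```shell
# Grab response headers and keep the ones that commonly reveal the
# technology stack; https://example.com is a placeholder target
curl -sI https://example.com | grep -iE '^(server|x-powered-by|x-generator):'
```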

Side-domain Identification

# Then you can find domains and subdomains associated with an IP by using passive reverse DNS
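# For comparison, a live PTR lookup only returns the *current* reverse record,
# while passive DNS services also keep historical ones; a minimal sketch
# (8.8.8.8 is a placeholder IP):

```shell
# Classic (active) reverse DNS lookup; 8.8.8.8 is a placeholder
ip="8.8.8.8"
dig +short -x "$ip"
getent hosts "$ip"   # portable fallback using the system resolver
```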

# RiskIQ Passive DNS

# You can identify new domains in the "Relationships" section
# of BuiltWith

# When you have a name or an e-mail address you can perform a reverse whois lookup to find domain
# names owned by a person or a company

Scraping from JS

# You can parse and scrape JavaScript content on a target website to look for hidden subdomains or interesting paths
# Often, endpoints are not public but users can still interact with them
# Tools like dirscraper automate this (

# Classic
python dirscraper.py -u <url>

# Output mode
python dirscraper.py -u <url> -o <output>

# Silent mode (no results printed to the terminal)
python dirscraper.py -u <url> -s -o <output>

# Relative URL Extractor is another good tool to scrape from JS files (
ruby extract.rb <js-file-or-url>
# Extract all API endpoints from AngularJS & Angular javascript files
curl -s URL | grep -Po "(\/)((?:[a-zA-Z\-_\:\.0-9\{\}]+))(\/)*((?:[a-zA-Z\-_\:\.0-9\{\}]+))(\/)((?:[a-zA-Z\-_\/\:\.0-9\{\}]+))" | sort -u
# Simple script that greps info from JavaScript files
python3 -u -n google
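# The curl-and-grep pipeline above can be tried offline against a saved JS file;
# a minimal sketch with a simpler pattern (sample.js and its endpoints are made up):

```shell
# Offline variant of the endpoint-grepping idea above;
# sample.js and its paths are invented for the demo
cat > sample.js <<'EOF'
var api = "/api/v1/users";
fetch("/internal/admin/config");
EOF
grep -Po '"/[a-zA-Z0-9_\-./{}:]+"' sample.js | tr -d '"' | sort -u
# → /api/v1/users
# → /internal/admin/config
```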

Getting page title

# One line
# Getting page title without following redirections
for i in $(cat urls_or_subdomains.txt); do echo "$i | $(curl --connect-timeout 3 $i -s -v 2>&1 | grep -Poz '((?<=title>)(.*)(?=</title>)|(?<=Location:)(.*)/|(Could not resolve host:.*))' | tr -d '\0' | sed -r 's/(https?:\/\/.*\/?)(.*)(301 Moved Permanently)/\3 \2\1/g' )"; done
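# A more readable (but less featureful) sketch of the same idea, assuming the
# same urls_or_subdomains.txt input; it does not report redirect targets:

```shell
# Fetch each URL and print its <title>; unlike the one-liner above,
# redirects and resolution failures are not reported
while read -r url; do
  title=$(curl -s --connect-timeout 3 "$url" | grep -Po '(?<=<title>).*?(?=</title>)' | head -1)
  echo "$url | $title"
done < urls_or_subdomains.txt
```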

Investigate a website (crosspost methodology)

# Thread by Aware Online on website investigation methodology

# 1 - Tactical information
# 2 - WHOIS
# 3 - Archives
# 4 - Text
# 5 - Reverse Image Search
# 6 - Images and EXIF data
# 7 - Source code
# 8 - Other TLDs
# 9 - Mentions of target
# 10 - Check info via RSS
# 11 - SSL certificates
# 12 - Robots/Sitemap
# 13 - Port scans
# 14 - Reverse IP lookup
# 15 - Reverse DNS lookup
# 16 - Monitoring changes
# 17 - Malware check
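# A few of these steps map directly to one-liners; a hedged sketch
# (example.com is a placeholder, crt.sh and the Wayback CDX API are
# public services used here as examples):

```shell
domain="example.com"
# 12 - Robots/Sitemap
curl -s "https://$domain/robots.txt"
curl -s "https://$domain/sitemap.xml" | head
# 11 - SSL certificates via Certificate Transparency logs
curl -s "https://crt.sh/?q=$domain&output=json" | head -c 300
# 3 - Archives (Wayback Machine CDX API)
curl -s "http://web.archive.org/cdx/search/cdx?url=$domain&limit=5"
```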

r3con1z3r (

# OSINT tool that performs a set of checks and generates a report
# HTTP headers, whois, traceroute, DNS, nmap, websites on the same server, reverse IP, page links
# Careful: not really passive

Domain spoofing and typosquatting

# Tools like spoofcheck (
# It checks SPF and DMARC records for weak configurations that allow domain spoofing
# A domain is spoofable if it lacks an SPF or DMARC record, if the SPF record never specifies ~all or -all, or if the DMARC policy is set to p=none or is nonexistent
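# The two records can also be pulled by hand; a sketch assuming dig is
# available (example.com is a placeholder domain):

```shell
domain="example.com"
# SPF lives in a TXT record on the domain itself
dig +short TXT "$domain" | grep -i 'v=spf1'
# DMARC lives in a TXT record on the _dmarc subdomain
dig +short TXT "_dmarc.$domain"
# Weak signs: no SPF at all, SPF without ~all/-all, DMARC missing or p=none
```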
# urlcrazy generates typos for a given domain and checks different elements
# such as IP, country, nameserver and MX

# Default search
$ urlcrazy <domain>

# You can also search with popularity estimate
$ urlcrazy -p <domain>

More Information Gathering

# EyeWitness (
# It can take screenshots of websites, RDP services and open VNC servers, provide some server header info and identify default credentials
./EyeWitness -f filename --timeout optionaltimeout --open (optional)
./EyeWitness -f urls.txt --web
./EyeWitness -x urls.xml --timeout 8 --headless
./EyeWitness -f rdp.txt --rdp
# XRay tool (
# Bruteforces subdomains using a wordlist and DNS requests, then queries Shodan, then ViewDNS if a key is provided.
# Then it launches banner grabbing and info collectors (not passive)
xray -shodan-key yadayadayadapicaboo... -viewdns-key foobarsomethingsomething... -domain <domain>