wappybird

Multithreaded Wappalyzer CLI tool to find Web Technologies, with optional CSV output.
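
To illustrate how a multithreaded scan with CSV output can be put together, here is a minimal Python sketch using the underlying python-Wappalyzer library (installed below). It is not wappybird's actual code; the scan() helper, URLs, and column names are made up for the example.

    import csv
    from concurrent.futures import ThreadPoolExecutor

    from Wappalyzer import Wappalyzer, WebPage

    wappalyzer = Wappalyzer.latest()  # load the bundled fingerprints once

    def scan(url):
        """Analyze one URL and return (url, technology) rows."""
        webpage = WebPage.new_from_url(url)
        return [(url, tech) for tech in wappalyzer.analyze(webpage)]

    urls = ["https://example.com", "https://example.org"]
    with ThreadPoolExecutor(max_workers=25) as pool, \
            open("output.csv", "w", newline="") as fh:
        writer = csv.writer(fh)
        writer.writerow(["host", "technology"])
        for rows in pool.map(scan, urls):  # 25 workers, as with `wappy -t 25`
            writer.writerows(rows)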

You can also provide a directory, and all scraped data will be saved there with a subfolder per host.

Allows multiple methods of input, including files, URLs, and STDIN. Given just a hostname, it will attempt to connect over HTTPS and then fall back to HTTP, following redirects.
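
The hostname fallback described above could look roughly like this sketch built on requests; fetch(), the scrape_dir argument, and the index.html filename are illustrative assumptions, not wappybird's internals.

    import os
    import requests

    def fetch(host, scrape_dir=None):
        """Try HTTPS first, then HTTP, following redirects either way."""
        for scheme in ("https", "http"):
            try:
                response = requests.get(f"{scheme}://{host}",
                                        allow_redirects=True, timeout=10)
            except requests.RequestException:
                continue
            if scrape_dir:
                # Per-host subfolder for scraped data, as with `wappy -s <dir>`.
                host_dir = os.path.join(scrape_dir, host)
                os.makedirs(host_dir, exist_ok=True)
                with open(os.path.join(host_dir, "index.html"), "w") as fh:
                    fh.write(response.text)
            return response
        return None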

Originally based on the wappalyzer-cli by gokulapap.

Now uses the updated technology fingerprint files from the npm-based Wappalyzer instead of the static file bundled with the python-Wappalyzer library.
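
For reference, the underlying python-Wappalyzer library can also be driven directly; a minimal example (assuming the forked package from the installation steps below, which bundles the newer fingerprints) looks like:

    from Wappalyzer import Wappalyzer, WebPage

    # latest() loads the technology fingerprints shipped with the library;
    # with the fork installed below these come from the npm-based Wappalyzer.
    wappalyzer = Wappalyzer.latest()
    webpage = WebPage.new_from_url("https://example.com")

    # Dict keyed by technology name, with detected versions and categories.
    print(wappalyzer.analyze_with_versions_and_categories(webpage))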

Installation:

pip uninstall python-Wappalyzer -y || sudo pip uninstall python-Wappalyzer -y

git clone https://github.com/brandonscholet/python-Wappalyzer.git

cd python-Wappalyzer/

sudo python3 setup.py install

cd ..

git clone https://github.com/brandonscholet/wappybird

cd wappybird

sudo python3 setup.py install

Hella Input Examples:

wappy -u <URL> <URL>

wappy -f <file> <file2> -u <URL>

wappy -f <file> -u <URL> -f <file2> <file3>

subfinder -d example.com | wappy -wf <output.csv> -q -t 25

cat expanded_scope | wappy

nmap -sL -n -iL <scope_with_subnets> | awk '/Nmap scan report/{print $NF}' | wappy -t 25 -wf <output.csv>

echo <URL>,<URL>,<URL> | wappy -q

echo <URL> <URL> <URL> | wappy

Usage

$ wappy -h
usage: wappy [-h] [-u URL [URL ...]] [-f FILE [FILE ...]] [-wf WRITEFILE] [-s [SCRAPE_DIR]]
             [-t THREADS] [-q] [--no-meta-refresh]

Multithreaded Web technology finder!

Optional output into CSV and can save scraped site data.

Note: This program also accepts hosts from STDIN with space, comma or newline delimiters.

options:
  -h, --help            show this help message and exit
  -u URL [URL ...], --url URL [URL ...]
                        url to find technologies
  -f FILE [FILE ...], --file FILE [FILE ...]
                        list of urls to find web technologies
  -wf WRITEFILE, --writefile WRITEFILE
                        File to write csv output to
  -s [SCRAPE_DIR], --scrape_dir [SCRAPE_DIR]
                        save all scraped data
  -t THREADS, --threads THREADS
                        How many threads yo?
  -q, --quiet           Don't want to see any errors?
  --no-meta-refresh     If meta refresh redirection breaks or is not what you want

Demo