Hey folks, as you may already know, there are various web crawler tools for discovering the documents a web application exposes. One of them, presented here, is called “waybackurls”, and it works much like other web crawlers: the tool accepts line-delimited domains on stdin, fetches known URLs from the Wayback Machine for *.domain, and prints them on stdout.
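The stdin contract above can be sketched in plain shell. The loop below only echoes each domain as a stand-in for the real fetch, so it runs without waybackurls installed; the domain names are example inputs, and in practice you would pipe the same list straight into waybackurls.

```shell
# One domain per line on stdin, exactly how waybackurls expects its input.
# The echo is a placeholder for the actual Wayback Machine lookup.
printf 'testphp.vulnweb.com\nexample.com\n' |
while IFS= read -r domain; do
  echo "would fetch archived URLs for: $domain"
done
```

With the real tool installed, the same pipe shape applies: `cat domains.txt | waybackurls`.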
Let’s take a look

Install Golang
To operate this tool, the Go toolchain must be installed on your system; without it, the tool cannot be built. Install it with the following command.
apt install golang

Waybackurls Tool Installation
Once Go is installed, we can download this tool through the Go toolchain and then run it from anywhere.
go get github.com/tomnomnom/waybackurls
waybackurls -h

Done

Usage

waybackurls testphp.vulnweb.com

Exclude Subdomains
By default, the tool automatically fetches results for all subdomains of the given domain as well, but if you only want to crawl the exact domain you supply, add “-no-subs” after the URL.
Usage

waybackurls testphp.vulnweb.com -no-subs
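The same narrowing can also be done after the fact by filtering saved output with grep. This is only a sketch: the file name all.txt and the two sample lines are assumptions standing in for real waybackurls output.

```shell
# all.txt stands in for saved waybackurls output (one URL per line).
printf 'http://testphp.vulnweb.com/a\nhttp://blog.testphp.vulnweb.com/b\n' > all.txt

# Keep only URLs whose host is exactly testphp.vulnweb.com;
# the '://' anchor drops subdomain hits like blog.testphp.vulnweb.com.
grep '://testphp\.vulnweb\.com' all.txt
```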

Save Output
The tool has no dedicated flag for saving output, but if you want to store the results in a txt file, you can redirect stdout with the following command.
Usage

waybackurls fintaxico.in > res.txt
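Once the output is in a file, standard tools are enough to clean it up. A minimal sketch, assuming res.txt holds one URL per line (the sample lines below stand in for real waybackurls output, which often contains duplicates):

```shell
# Sample stand-in for saved waybackurls output.
printf 'http://testphp.vulnweb.com/listproducts.php?cat=1\nhttp://testphp.vulnweb.com/listproducts.php?cat=1\nhttp://testphp.vulnweb.com/images/logo.gif\n' > res.txt

# De-duplicate, then keep only .php endpoints worth probing.
sort -u res.txt | grep '\.php'
```

The same pattern extends to other filters, e.g. grepping for query strings (`grep '?'`) to find parameterised URLs.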
