Earrapeed
Competitor SERP Analyst
Hello all, here is another proxy scraper to help those of us who don't have great-quality proxies or can't afford them but love cracking. Hope this helps! It usually provides 25-50k proxies, with at least 3k left after checking. Please LIKE and enjoy!
PLEASE LIKE FOR MORE DROPS!!!!
This proxy scraper comes built in with sites already provided to scrape, pulling between 25k and 50k proxies. PLEASE LIKE so I can provide more HQ drops
Installation
Clone the repository:
git clone
Navigate to the project directory:
cd proXXy
Install the required dependencies:
pip3 install -r requirements.txt
Usage
Run the program:
python3 proXXy.py
Select the execution parameters.
On some systems you may need to run python proXXy.py instead of python3.
Allow the program to complete, then check the new text files in the scraped/ directory! (After each round of checking, give the program time to join its threads before it moves on to the next proxy protocol.)
The program will output four files in the project directory containing the regularized proxy lists:
HTTP.txt
HTTPS.txt
SOCKS4.txt
SOCKS5.txt
along with an error output file titled error.log noting the links that could not be accessed.
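The "regularized" lists above are just the raw scraped lines with junk and duplicates stripped out. A minimal sketch of what that step might look like (this is my own illustration, not the project's actual code; the function name and regex are assumptions):

```python
import re

# A well-formed entry looks like "ip:port", e.g. 1.2.3.4:8080
IP_PORT = re.compile(r"^\d{1,3}(?:\.\d{1,3}){3}:\d{1,5}$")

def regularize(lines):
    """Keep only well-formed ip:port entries, dropping trash values and
    duplicates while preserving first-seen order."""
    seen = set()
    out = []
    for line in lines:
        entry = line.strip()
        if IP_PORT.match(entry) and entry not in seen:
            seen.add(entry)
            out.append(entry)
    return out

print(regularize(["1.2.3.4:8080", "junk", "1.2.3.4:8080", "10.0.0.1:3128\n"]))
# -> ['1.2.3.4:8080', '10.0.0.1:3128']
```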
Updating
To update the project, run:
python3 proXXy.py -u
Planned Features
Implement a feature for automatically testing the scraped proxies to verify they work. (2/4 complete)
Proxy sorting instead of hardcoding.
Provide an option to discern between Elite, Anonymous, and Transparent anonymity classes of proxies.
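For the planned anonymity-class option, the usual approach is to send a request through the proxy to a "judge" page that echoes back the headers it received, then classify from what leaks through. A rough sketch under that assumption (the function and header markers are my own illustration, not the project's code):

```python
def classify_anonymity(echoed_headers: dict, real_ip: str) -> str:
    """Classify a proxy from the headers a judge page echoes back.

    Transparent: your real IP leaks through; Anonymous: proxy headers
    reveal a proxy is in use but hide your IP; Elite: neither.
    """
    values = " ".join(f"{k}:{v}" for k, v in echoed_headers.items())
    if real_ip in values:
        return "Transparent"
    proxy_markers = {"via", "x-forwarded-for", "forwarded", "proxy-connection"}
    if any(k.lower() in proxy_markers for k in echoed_headers):
        return "Anonymous"
    return "Elite"

print(classify_anonymity({"Via": "1.1 squid"}, "203.0.113.7"))  # -> Anonymous
```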
Added Features
HTTPS support!
Easy updating!
Added asynchronous webscraping.
Fixed random user agents option.
Added output folder for brevity.
Added more user parameters.
Verified proxies are written to checked file.
Improved error handling and logging for more informative feedback to the user.
Added a function to remove duplicate proxies from the generated lists.
Added a function to regularize proxies by removing trash values.
Updated the proxy scraping function to use contextlib.suppress for better error handling.
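The contextlib.suppress approach mentioned above lets the scraper skip unreachable source sites instead of crashing on them. A minimal sketch of the idea (my own illustration using the standard library, not the project's actual scraping code):

```python
import contextlib
import urllib.error
import urllib.request

def scrape_source(url: str) -> str:
    """Fetch one proxy-list page; return '' if the site is unreachable
    instead of raising, mirroring the contextlib.suppress approach."""
    text = ""
    with contextlib.suppress(urllib.error.URLError, TimeoutError):
        with urllib.request.urlopen(url, timeout=10) as resp:
            text = resp.read().decode(errors="replace")
    return text
```

Dead links simply yield an empty string, which the caller can log to error.log and move past.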
Support
Need help and can't get it to run correctly? Open an issue or contact me here
Original source credits and thank you
Creator: