Welcome to the Colorado Crawler Wikipedia project, a tool for extracting and analyzing web data from Wikipedia pages related to Colorado. The scraped text can be used to train NLP models on a domain-specific vocabulary.
- Web Scraping: Use `wikipedia_scraper.py` to crawl web pages and gather the data you need.
- Easy Setup: Quick installation with a simple `pip install seleniumbase`.
- Clone the repository and set up the environment:

  ```bash
  git clone https://github.com/LukeFarch/COCrawlerWiki.git
  cd COCrawlerWiki
  ```
- Change the paths in the code as necessary to match your environment and needs.
- To start crawling Wikipedia for Colorado-related pages, run:

  ```bash
  python wikipedia_crawler.py
  ```
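The crawler's actual logic lives in `wikipedia_crawler.py`; purely as an illustration, here is a minimal stdlib-only sketch of one common approach to collecting internal article links from a fetched Wikipedia page. The `WikiLinkCollector` class and its namespace filter are assumptions for this example, not code from this repo:

```python
from html.parser import HTMLParser

class WikiLinkCollector(HTMLParser):
    """Collect internal /wiki/ article links from a fetched page,
    skipping namespaced pages like File: and Category:."""

    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag != "a":
            return
        href = dict(attrs).get("href", "")
        # Article links look like /wiki/Denver; namespaced pages contain ":"
        if href.startswith("/wiki/") and ":" not in href:
            self.links.append(href)

collector = WikiLinkCollector()
collector.feed('<a href="/wiki/Denver">Denver</a> '
               '<a href="/wiki/File:Flag.png">flag</a> '
               '<a href="https://example.com">ext</a>')
print(collector.links)  # only /wiki/Denver survives the filter
```

Collected links can then be queued and fetched in turn, which is the core of any breadth-first page crawl.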
- Want a word count? Run `word_count.py` to see how many scraped files contain fewer than 10 words (these likely failed to scrape); adjust the threshold as needed.
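A word-count pass like this can be sketched as follows. This is a hedged stand-in, not the repo's `word_count.py` itself: it assumes the scraped pages are saved as `.txt` files in one directory, and the `find_short_files` name and 10-word threshold are illustrative:

```python
from pathlib import Path
import tempfile

def find_short_files(directory, min_words=10):
    """Return the .txt files in `directory` containing fewer than
    `min_words` words -- these are likely failed scrapes."""
    short = []
    for path in sorted(Path(directory).glob("*.txt")):
        if len(path.read_text(encoding="utf-8").split()) < min_words:
            short.append(path)
    return short

# Demo on a throwaway directory
tmp = Path(tempfile.mkdtemp())
(tmp / "denver.txt").write_text("word " * 20)   # 20 words: fine
(tmp / "stub.txt").write_text("stub only")      # 2 words: flagged
print([p.name for p in find_short_files(tmp)])  # ['stub.txt']
```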
- Follow the on-screen prompts to start crawling cities or counties.
- Have an ArcGIS API and want to scrape all of the parameters without doing it manually? Use `arcgis_finder.py` and specify your Chrome driver path to run this Selenium bot.
- Don't know where your chromedriver is? See https://www.browserstack.com/guide/run-selenium-tests-using-selenium-chromedriver
- Windows: run this in PowerShell to find it:

  ```powershell
  Get-ChildItem -Path "C:\" -Recurse -Filter "chromedriver.exe" -ErrorAction SilentlyContinue
  ```
- Run `arcgis_finder.py` and follow the on-screen prompts to start crawling cities or counties.
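If chromedriver is already on your PATH, you can locate it programmatically instead of searching the disk. A small sketch; the `locate_chromedriver` helper is an assumption for illustration, not part of this repo:

```python
import shutil

def locate_chromedriver():
    """Return the first chromedriver binary found on PATH, or None."""
    return shutil.which("chromedriver") or shutil.which("chromedriver.exe")

path = locate_chromedriver()
print(path)  # None if chromedriver is not on PATH

# The result can then be handed to the Selenium bot, e.g. with Selenium 4:
#   from selenium import webdriver
#   from selenium.webdriver.chrome.service import Service
#   driver = webdriver.Chrome(service=Service(path))
```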