This project is a web scraper that fetches data from multiple websites concurrently, processes the data, and stores it in a structured format.
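The concurrent-fetch pattern described above can be sketched with Python's standard library. This is only illustrative: `fetch` and `scrape_all` are hypothetical names, and the real download/parsing logic lives in the backend code.

```python
# Illustrative sketch of concurrent fetching; not the project's actual code.
from concurrent.futures import ThreadPoolExecutor

def fetch(url: str) -> dict:
    # Placeholder: a real scraper would download and parse the page here
    # (e.g. with urllib.request or an HTTP client library).
    return {"url": url, "status": "fetched"}

def scrape_all(urls: list[str]) -> list[dict]:
    # Threads suit I/O-bound work such as waiting on network responses;
    # results come back in the same order as the input URLs.
    with ThreadPoolExecutor(max_workers=8) as pool:
        return list(pool.map(fetch, urls))

results = scrape_all(["https://example.com/a", "https://example.com/b"])
```

Storing each result as a dict keeps the output in a structured, easily serialized form, matching the project's goal of structured storage.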
- Python 3.x
- Node.js
- npm
- Navigate to the `backend` directory.
- Install the Python dependencies by running `pip install -r requirements.txt`.
- Run the backend API by executing `uvicorn app.api:app --reload`.
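For orientation, `uvicorn app.api:app` tells uvicorn to import the module `app/api.py` and serve the ASGI callable named `app` from it. A minimal stand-alone sketch of such a callable is below; the actual project presumably builds `app` with a web framework, so this raw ASGI version is only illustrative.

```python
# Minimal ASGI application, illustrating what `uvicorn app.api:app` expects:
# a module exposing an async callable named `app`.
import json

async def app(scope, receive, send):
    # Handle only HTTP requests; uvicorn also sends lifespan events,
    # which this sketch simply ignores.
    if scope["type"] != "http":
        return
    body = json.dumps({"status": "ok"}).encode()
    await send({
        "type": "http.response.start",
        "status": 200,
        "headers": [(b"content-type", b"application/json")],
    })
    await send({"type": "http.response.body", "body": body})
```

With `--reload`, uvicorn watches the source files and restarts the server on changes, which is convenient during development.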
- Navigate to the `frontend` directory.
- Install the Node.js dependencies by running `npm install`.
- Start the app in development mode by executing `quasar dev`.
After setting up both the backend and frontend, you can start using the web scraper. The frontend will be accessible at `http://localhost:9000` (or the port specified in your `.env` file).
More information can be found in the respective `README.md` files in the `backend` and `frontend` directories.