This project is a simple Django web application that allows users to input a website URL, scrape its content for links, and display them. Users can also delete all collected links.
To run this project locally, follow these steps:

- Clone the repository:

  ```
  git clone https://github.com/your_username/link-collector.git
  ```

- Install the required dependencies. It's recommended to use a virtual environment:

  ```
  cd link-collector
  pip install -r requirements.txt
  ```

- Run migrations to set up the database:

  ```
  python manage.py migrate
  ```

- Start the Django development server:

  ```
  python manage.py runserver
  ```

- Open your web browser and go to http://127.0.0.1:8000/ to access the application.
- Enter a website URL in the input field and click the "Scrape" button to scrape that site for links.
- The collected links are displayed in a table below the input field.
- To delete all collected links, click the "Delete" button.
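The scraping step above boils down to fetching a page and pulling out every `href`. The app itself uses Requests and Beautiful Soup for this; the sketch below shows the same idea with only Python's standard library, and the names `LinkCollector` and `extract_links` are illustrative assumptions, not the project's actual code:

```python
# Illustrative sketch of link extraction. The real app uses
# Requests + Beautiful Soup; this stdlib-only version shows the idea.
from html.parser import HTMLParser


class LinkCollector(HTMLParser):
    """Collects href values from <a> tags as the parser walks the HTML."""

    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)


def extract_links(html):
    """Return all hyperlink targets found in an HTML string."""
    parser = LinkCollector()
    parser.feed(html)
    return parser.links


page = '<a href="https://example.com">Example</a> <a href="/about">About</a>'
print(extract_links(page))  # ['https://example.com', '/about']
```

In the real app the HTML string would come from `requests.get(url).text` before being handed to the parser.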
This project is built with:

- Django: The web framework used to build the application.
- Bootstrap: Frontend framework for styling the user interface.
- Beautiful Soup: Python library for web scraping.
- Requests: Python library for making HTTP requests.
The project is organized as follows:

- `myapp/`: Contains the Django application files.
- `models.py`: Defines the database models, including the `Link` model for storing scraped links.
- `views.py`: Contains the view functions for rendering HTML templates and handling form submissions.
- `result.html`: HTML template for the main page with the input form and the list of collected links.
- `README.md`: Documentation file for the project.
- `requirements.txt`: Lists all Python dependencies required to run the project.
Contributions are welcome! If you'd like to contribute to this project, please follow these steps:

- Fork the repository.
- Create a new branch (`git checkout -b feature/your-feature`).
- Make your changes.
- Commit your changes (`git commit -am 'Add some feature'`).
- Push to the branch (`git push origin feature/your-feature`).
- Create a new Pull Request.