Hello everyone, we have all heard about web scraping. So what actually is web scraping? It sounds cool, right? Web scraping can be defined as code used to visit a website and collect the data available on it. Usually this data consists of links, reports, etc. We can collect the data and store it in whatever form we need (for example: a table, CSV, plain text, etc.).

WARNING: Be careful about which websites you run your web crawlers on, as some websites treat web crawling or web scraping as an illegal activity. Don't use web crawlers unless you have permission. Please read the terms and conditions as well as the policies of the websites. You can search for more about legal web crawling on www.google.com .

Before you create a crawler it is important to understand the structure of the website. Each website is different in its own way; some websites have multiple pages and some don't, so you will need to modify the code accordingly.

In "spidy_with_title.py" I have tried to take it slow, viz. starting from the easiest level: fetching the title of the website, since it is unique.

In "spidy_with_page.py" I have tried to create a basic, generalized web crawler, or spider as most people usually say. This code lets the user visit a website that has a number of pages and crawl each page, gathering all the links and their titles.

In "spidy_with_page_automate.py" I have tried to automate the process, so the crawler keeps surfing continuously from one link to the links found on that page, and from those to further links, until there are no links left.
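To give an idea of what these scripts do, here is a minimal sketch of the same two ideas (grabbing a page title, and following links while collecting titles). It assumes the third-party requests and BeautifulSoup libraries; the actual scripts in this repository may use different libraries, function names, and limits, so treat this purely as an illustration.

```python
# Illustrative sketch only -- not the exact code of the repository scripts.
import requests
from bs4 import BeautifulSoup
from urllib.parse import urljoin


def get_title(url):
    """Fetch a single page and return its <title> text (the spidy_with_title.py idea)."""
    response = requests.get(url, timeout=10)
    soup = BeautifulSoup(response.text, "html.parser")
    if soup.title and soup.title.string:
        return soup.title.string.strip()
    return ""


def crawl(start_url, max_pages=20):
    """Keep following links from start_url, collecting (url, title) pairs,
    until there are no new links or max_pages is reached
    (the spidy_with_page_automate.py idea).  max_pages is an assumed safety
    limit so the sketch does not run forever."""
    to_visit = [start_url]
    visited = set()
    results = []
    while to_visit and len(visited) < max_pages:
        url = to_visit.pop(0)
        if url in visited:
            continue
        visited.add(url)
        try:
            response = requests.get(url, timeout=10)
        except requests.RequestException:
            continue  # skip pages that cannot be fetched
        soup = BeautifulSoup(response.text, "html.parser")
        title = soup.title.string.strip() if soup.title and soup.title.string else ""
        results.append((url, title))
        # Queue every absolute link found on this page.
        for anchor in soup.find_all("a", href=True):
            link = urljoin(url, anchor["href"])
            if link.startswith("http") and link not in visited:
                to_visit.append(link)
    return results


if __name__ == "__main__":
    print(get_title("https://example.com"))
    for url, title in crawl("https://example.com", max_pages=5):
        print(url, "->", title)
```

The crawl here is breadth-first: every page's links go to the back of a queue, and a visited set prevents fetching the same page twice, which is what stops the crawler once no new links remain.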