This project aims to develop a simple crawler-based search engine demonstrating key features such as web crawling, indexing, ranking, and search functionality. The backend is implemented in Java with the Spring Boot framework.
- Java: Main programming language for the backend modules (crawler, indexer, etc.).
- Spring Boot: Framework used for building and running the backend services.
- React: JavaScript library for building the user interface and interactive components.
- MongoDB: Database used for storing and managing the indexed data.
The web crawler is responsible for collecting documents from the web. It is multithreaded, with the number of threads controlled by the user; it carefully normalizes URLs to prevent duplicate downloads, respects robots.txt exclusions, and maintains its state so that an interrupted crawl can be resumed.
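As an illustration of the URL handling, a minimal sketch of normalization and duplicate prevention is shown below; the class and method names (`UrlNormalizer`, `markVisited`) are hypothetical and not taken from the project's code.

```java
import java.net.URI;
import java.util.Locale;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical sketch: canonicalizes URLs so that trivially different
// spellings of the same address are crawled only once.
public class UrlNormalizer {

    // Thread-safe set of already-seen canonical URLs.
    private final Set<String> visited = ConcurrentHashMap.newKeySet();

    // Lower-case the scheme and host, drop default ports, and strip a
    // trailing slash so duplicate spellings collapse to one key.
    public String normalize(String url) {
        URI uri = URI.create(url.trim());
        String scheme = uri.getScheme() == null ? "http" : uri.getScheme().toLowerCase(Locale.ROOT);
        String host = uri.getHost() == null ? "" : uri.getHost().toLowerCase(Locale.ROOT);
        int port = uri.getPort();
        String path = uri.getPath() == null || uri.getPath().isEmpty() ? "/" : uri.getPath();
        if (path.endsWith("/") && path.length() > 1) {
            path = path.substring(0, path.length() - 1);
        }
        String portPart = (port == -1 || port == 80 || port == 443) ? "" : ":" + port;
        String query = uri.getQuery() == null ? "" : "?" + uri.getQuery();
        return scheme + "://" + host + portPart + path + query;
    }

    // Returns true only the first time a canonical URL is seen,
    // so callers can skip URLs that are already queued or crawled.
    public boolean markVisited(String url) {
        return visited.add(normalize(url));
    }
}
```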
The indexer converts downloaded HTML documents into an indexed data structure that is persisted in secondary storage. The index is optimized for fast retrieval of documents containing specific words and supports incremental updates with newly crawled documents.
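For illustration only, an in-memory inverted index sketch is shown below; the project stores its indexed data in MongoDB, and the `InvertedIndex` class here is a hypothetical stand-in.

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Hypothetical sketch of an inverted index: maps each term to the set of
// document IDs that contain it, which makes word lookups fast.
public class InvertedIndex {

    private final Map<String, Set<String>> postings = new HashMap<>();

    // Incremental update: tokenize one document and add its ID to the
    // posting list of every term it contains.
    public void addDocument(String docId, String text) {
        for (String token : text.toLowerCase().split("\\W+")) {
            if (token.isEmpty()) {
                continue;
            }
            postings.computeIfAbsent(token, t -> new HashSet<>()).add(docId);
        }
    }

    // Fast retrieval: return the documents containing a given word.
    public Set<String> lookup(String term) {
        return postings.getOrDefault(term.toLowerCase(), Set.of());
    }
}
```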
The query processor handles search queries and performs the necessary preprocessing. It retrieves documents containing words that share a stem with the words in the query, so that, for example, a search for "run" also matches documents containing "running" or "runs".
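A rough sketch of this preprocessing step follows; the suffix-stripping `stem` method is a deliberately crude stand-in for a real stemmer (e.g. a Porter-style stemmer), and the class name is hypothetical.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of query preprocessing: lower-case, tokenize, and
// reduce each token to a crude "stem" so that related word forms match.
public class QueryProcessor {

    // Very rough stand-in for a real stemmer: it only strips a few common
    // English suffixes, which is enough to show the idea.
    static String stem(String word) {
        if (word.endsWith("ing") && word.length() > 5) {
            return word.substring(0, word.length() - 3);
        }
        if (word.endsWith("es") && word.length() > 4) {
            return word.substring(0, word.length() - 2);
        }
        if (word.endsWith("s") && word.length() > 3) {
            return word.substring(0, word.length() - 1);
        }
        return word;
    }

    // Turns a raw query into a list of stemmed terms to look up in the index.
    public List<String> preprocess(String query) {
        List<String> stems = new ArrayList<>();
        for (String token : query.toLowerCase().split("\\W+")) {
            if (!token.isEmpty()) {
                stems.add(stem(token));
            }
        }
        return stems;
    }
}
```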
Phrase searching is supported by enclosing a phrase in quotation marks; a quoted query returns a subset of the results that the same query would return without the quotation marks.
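One way such a phrase check might work is sketched below, assuming the candidate documents from the unquoted search and their full text are available; `PhraseFilter` and `Doc` are hypothetical names.

```java
import java.util.List;
import java.util.stream.Collectors;

// Hypothetical sketch: given the candidate documents returned by the
// ordinary (unquoted) word search, keep only those whose text contains
// the quoted phrase verbatim, so quoted results are a subset of unquoted ones.
public class PhraseFilter {

    public record Doc(String id, String text) {}

    public List<Doc> filterByPhrase(List<Doc> candidates, String quotedPhrase) {
        String phrase = quotedPhrase.replace("\"", "").toLowerCase();
        return candidates.stream()
                .filter(doc -> doc.text().toLowerCase().contains(phrase))
                .collect(Collectors.toList());
    }
}
```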
The ranker sorts documents by relevance and popularity. Relevance is calculated with measures such as tf-idf and aggregated into a per-document score, while page popularity is determined by a separate ranking algorithm.
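A sketch of the tf-idf part of the score is shown below; the 0.7/0.3 blend of relevance and popularity is an assumed example, not the project's actual weighting.

```java
import java.util.Map;

// Hypothetical sketch of a tf-idf relevance score. Weights and the way
// popularity is combined with relevance are illustrative assumptions.
public class Ranker {

    // tf-idf for a single term in a single document:
    //   tf  = occurrences of the term in the document / document length
    //   idf = log(total documents / documents containing the term)
    static double tfIdf(int termCountInDoc, int docLength,
                        int totalDocs, int docsContainingTerm) {
        if (docLength == 0 || docsContainingTerm == 0) {
            return 0.0;
        }
        double tf = (double) termCountInDoc / docLength;
        double idf = Math.log((double) totalDocs / docsContainingTerm);
        return tf * idf;
    }

    // Aggregates per-term relevance and blends in a popularity score;
    // the 0.7 / 0.3 split is purely an assumed example.
    static double score(Map<String, Double> perTermTfIdf, double popularity) {
        double relevance = perTermTfIdf.values().stream().mapToDouble(Double::doubleValue).sum();
        return 0.7 * relevance + 0.3 * popularity;
    }
}
```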
The web interface lets users submit queries and displays results with snippets containing the query words. Results are paginated, and a suggestion mechanism offers popular query completions.
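Given the Spring Boot backend, the search endpoint might be exposed roughly as follows; the controller, service, and result types are hypothetical sketches rather than the project's actual classes.

```java
import java.util.List;

import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RequestParam;
import org.springframework.web.bind.annotation.RestController;

// Hypothetical sketch of a paginated search endpoint serving the React frontend.
@RestController
public class SearchController {

    // Placeholder result type: title, URL, and a snippet containing the query words.
    public record SearchResult(String title, String url, String snippet) {}

    // Minimal interface for the assumed service layer; not the project's actual class.
    public interface SearchService {
        List<SearchResult> search(String query, int page, int size);
    }

    private final SearchService searchService;

    public SearchController(SearchService searchService) {
        this.searchService = searchService;
    }

    // GET /search?q=...&page=0&size=10 returns one page of ranked results.
    @GetMapping("/search")
    public List<SearchResult> search(@RequestParam String q,
                                     @RequestParam(defaultValue = "0") int page,
                                     @RequestParam(defaultValue = "10") int size) {
        return searchService.search(q, page, size);
    }
}
```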
The phrase searching module also supports the AND, OR, and NOT operators, with a maximum of two operations per search.

Implementation Details