This project tackles the challenges of data extraction and processing using OCR and LLMs. It is inspired by JP Morgan's DocLLM but is fully open source and offers a larger context window. The project is divided into two parts: the OCR layer and the LLM layer.
The OCR layer is responsible for reading all the content from a document. It involves the following steps:
- Convert pages to images: every input file is converted to images so that all of the document's content can be read.
- Preprocess the images for OCR: each image is adjusted to improve its quality and readability.
- Run Tesseract OCR: Tesseract, the most widely used open-source OCR engine, reads the content from the images.
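A minimal sketch of these steps, assuming PDF input and pdf2image, OpenCV, and pytesseract as the supporting libraries (the repo's actual choices may differ):

```python
# Sketch of the OCR layer: PDF -> page images -> preprocessing -> Tesseract.
# pdf2image, OpenCV, and pytesseract are assumptions for illustration.
import cv2
import numpy as np
import pytesseract
from pdf2image import convert_from_path

def ocr_document(pdf_path: str) -> str:
    """Convert each page to an image, clean it up, and run Tesseract."""
    pages = convert_from_path(pdf_path, dpi=300)  # one PIL image per page
    text_parts = []
    for page in pages:
        img = cv2.cvtColor(np.array(page), cv2.COLOR_RGB2GRAY)  # grayscale
        # Binarize with Otsu's threshold to improve readability for OCR.
        _, img = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
        text_parts.append(pytesseract.image_to_string(img))
    return "\n".join(text_parts)
```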
The LLM layer is responsible for extracting specific content from the document in a structured way. It involves defining an extraction contract and extracting JSON data that conforms to it.
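As a sketch, the extraction contract could be expressed as a Pydantic model; the field names below are hypothetical and depend on the documents you process:

```python
# Hypothetical extraction contract (Pydantic v2); field names are illustrative.
from pydantic import BaseModel

class InvoiceContract(BaseModel):
    invoice_number: str
    issue_date: str
    vendor_name: str
    total_amount: float
```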
You can run the models on-premises using LM Studio or Ollama. This project uses LlamaIndex and Ollama.
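The following is a minimal sketch of the extraction step with LlamaIndex's Ollama wrapper, reusing the hypothetical InvoiceContract above. The import path can differ between llama-index versions, and "mistral" is just an example model pulled into a locally running Ollama:

```python
# Sketch only: prompts the local model for JSON and validates it against the
# contract. Assumes Ollama is running locally with the "mistral" model pulled.
from llama_index.llms.ollama import Ollama  # import path varies by version

llm = Ollama(model="mistral", request_timeout=120.0)

def extract_invoice(document_text: str) -> InvoiceContract:
    prompt = (
        "Return only JSON matching this schema:\n"
        f"{InvoiceContract.model_json_schema()}\n\n"
        f"Document:\n{document_text}"
    )
    response = llm.complete(prompt)
    # Validate, and fail loudly, if the model strays from the contract.
    return InvoiceContract.model_validate_json(response.text)
```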
The repo includes a FastAPI app with one endpoint for testing. Make sure to point to the correct Tesseract executable and change the key in the config.py file.
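Below is an illustrative sketch of what such an endpoint might look like, not the repo's actual main.py; the route name, the TESSERACT_CMD value, and the response shape are assumptions:

```python
# Illustrative only: the real route, config keys, and pipeline live in the repo.
import pytesseract
from fastapi import FastAPI, UploadFile

TESSERACT_CMD = "/usr/bin/tesseract"  # adjust to your install (set in config.py)
pytesseract.pytesseract.tesseract_cmd = TESSERACT_CMD  # point pytesseract at the binary

app = FastAPI()

@app.post("/extract")  # hypothetical route name
async def extract(file: UploadFile) -> dict:
    pdf_bytes = await file.read()
    # The OCR and LLM layers sketched above would run here.
    return {"filename": file.filename, "received_bytes": len(pdf_bytes)}
```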
- Install Tesseract: https://github.com/tesseract-ocr/tesseract
- Install the required Python packages: `pip install -r requirements.txt`
- Run the FastAPI app: `uvicorn main:app --reload`
- Go to the Swagger page: http://localhost:8000/docs
- Build the Docker image: `docker build -t your-image-name .`
- Run the Docker container: `docker run -p 8000:8000 your-image-name`
- Go to the Swagger page: http://localhost:8000/docs
The project also explores advanced cases, such as approaching a 1-million-token effective context by combining LLMLingua prompt compression with a Mistral YaRN model offering a 128k context window.
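A hedged sketch of prompt compression with LLMLingua (API details may vary by version); compressing the OCR output lets far more document content fit in the model's window:

```python
# Sketch only: LLMLingua compresses the prompt before it reaches the LLM.
from llmlingua import PromptCompressor

compressor = PromptCompressor()  # downloads a compression model by default

ocr_text = "..."  # text produced by the OCR layer
compressed = compressor.compress_prompt(
    context=[ocr_text],
    instruction="Extract the contract fields as JSON.",
    target_token=2000,  # shrink the prompt to roughly this token budget
)
print(compressed["compressed_prompt"])
```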
The integration of OCR and LLM technologies in this project offers a practical approach to analyzing unstructured data. Combining open-source projects such as Tesseract and Mistral yields an implementation well suited to on-premises use cases.