
Historical Document Analysis


Code to go along with my "Solving Real World Data Science Problems with LLMs (Historical Doc Analysis)" video.

You can watch the video by clicking this link

Setup

Most of the setup details are covered in the video itself, but here are some specific links that may be helpful.

User secrets in Kaggle: https://www.kaggle.com/discussions/product-feedback/114053
Set up OpenAI API: https://platform.openai.com/docs/quickstart?context=python
Set up Ollama: https://github.com/ollama/ollama
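As a minimal sketch of how the pieces above fit together on Kaggle, the helper below fetches an API key from Kaggle user secrets when running in a Kaggle notebook, and falls back to an environment variable elsewhere. The secret name `OPENAI_API_KEY` is an assumption; use whatever label you gave your secret in the Kaggle add-on.

```python
import os

def get_api_key(secret_name="OPENAI_API_KEY"):
    """Fetch an API key from Kaggle user secrets if available,
    otherwise fall back to an environment variable of the same name."""
    try:
        # kaggle_secrets is only importable inside Kaggle notebooks
        from kaggle_secrets import UserSecretsClient
        return UserSecretsClient().get_secret(secret_name)
    except ImportError:
        return os.environ.get(secret_name)

# The key can then be passed to the OpenAI client, e.g.:
# from openai import OpenAI
# client = OpenAI(api_key=get_api_key())
```

This keeps the key out of the notebook source, so the notebook can be shared publicly without leaking credentials.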

Data

The data comes from the Bureau of Refugees, Freedmen, and Abandoned Lands (known simply as the Freedmen's Bureau), which the United States government established in 1865, after the Civil War, to help formerly enslaved individuals gain access to jobs, education, and other resources.

Hundreds of thousands of written documents have been digitally transcribed by volunteers working with the Smithsonian Institution. A subset of these documents can be found on Kaggle for use in NLP analysis.

Here is a link to the dataset: kaggle.com/datasets/keithgalli/freedmens-bureau-historical-documents.
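To get started with the Kaggle dataset, a loader along these lines works; the file path and the `transcription` column name are assumptions here, so check the dataset page for the actual file and column names.

```python
import pandas as pd

def load_documents(path="freedmens_bureau.csv", text_col="transcription"):
    """Load transcribed documents from the dataset CSV, dropping rows
    with no transcription text and stripping stray whitespace."""
    df = pd.read_csv(path)
    df = df.dropna(subset=[text_col])
    df[text_col] = df[text_col].str.strip()
    return df
```

Dropping empty transcriptions up front avoids sending blank documents to the LLM during analysis.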

If you enjoy working on this project, it would mean a lot if you could upvote the dataset on Kaggle to help more people find the project!