Snorkle is a 100% local, private document search tool. It enables you to run deep, private searches across hundreds of pages per minute to get relevant context for your queries. Snorkle can run on any backend LLM server, using text-gen-webui by default.
Snorkle is a fork of Patense.local, a document analysis tool for patent attorneys, with a modified system prompt for general searching.
Snorkle splits each reference into pages, passes every page to an LLM along with your query, and asks whether the content is relevant. If a page is relevant, it displays a short quote with a link to the full page.
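The page-by-page loop described above can be sketched as follows. This is a minimal illustration, not Snorkle's actual code: `splitIntoPages`, `checkPage`, and the prompt wording are assumptions, and `askLLM` is a placeholder for a call to whatever backend LLM server you run.

```typescript
// Hypothetical sketch of Snorkle's page-by-page relevance check.
// None of these names come from the real codebase.

// Split a document into fixed-size "pages". Snorkle works with real
// document pages; a simple character window stands in here.
function splitIntoPages(text: string, pageSize = 2000): string[] {
  const pages: string[] = [];
  for (let i = 0; i < text.length; i += pageSize) {
    pages.push(text.slice(i, i + pageSize));
  }
  return pages;
}

// Ask the LLM whether one page is relevant to the query.
// `askLLM` is a placeholder for a request to the backend server.
async function checkPage(
  askLLM: (prompt: string) => Promise<string>,
  page: string,
  query: string,
): Promise<boolean> {
  const prompt =
    `Is the following page relevant to the query "${query}"? ` +
    `Answer YES or NO.\n\n${page}`;
  const answer = await askLLM(prompt);
  return answer.trim().toUpperCase().startsWith("YES");
}
```

Each page is checked independently, which is what lets the search fan out across hundreds of pages per minute when the backend can batch or parallelize requests.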
- Privacy First: Run the tool entirely on your local machine, ensuring full control over your data.
- High Performance: Search and analyze large documents quickly and efficiently.
- Flexible Backend: While text-gen-webui is the default, Snorkle can work with any backend LLM server.
- text-gen-webui (installation is outside the scope of this guide).
- Node.js and npm (required to run the application; if you're unfamiliar with installing them, the hosted Patense.ai may be easier).
- Clone the Repository

```shell
git clone https://github.com/JohnZolton/snorkle.git
cd snorkle
```
- Install Dependencies

```shell
npm install
```

Then rename `.env.example` to `.env`.
- Configure the Backend

Start your backend LLM server in API mode. In your text-gen-webui folder (or other backend), run:

```shell
# Linux
./start_linux.sh --listen --api

# Windows
./start_windows.bat --listen --api

# Mac
./start_macos.sh --listen --api
```

In text-gen-webui, select and load your model (an 8B-class model is quite fast, at about 0.5-1 second per page on a 3090).
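Before moving on, you can sanity-check that the backend API is reachable. The command below assumes text-gen-webui's OpenAI-compatible endpoint on its default port 5000; adjust the host, port, and path if your backend differs.

```shell
# Assumes text-gen-webui's default OpenAI-compatible API on port 5000;
# other backends may use a different port or route.
curl http://127.0.0.1:5000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"messages": [{"role": "user", "content": "Reply with OK"}], "max_tokens": 8}'
```

If the server and model are loaded correctly, this returns a JSON completion rather than a connection error.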
- Initialize the Database

In the /snorkle folder, run:

```shell
npm run db:push
```
- Run the Application

In the /snorkle folder, run:

```shell
npm run dev
```
- Navigate to http://localhost:3000
Once the application is running, you can begin uploading documents and performing searches.