⚠️ WARNING: Prototype with unstable API. 🚧
This is a half-baked prototype that "helps" you extract structured data from text using LLMs 🧩.
Specify the schema of what should be extracted and provide some examples.
Kor will generate a prompt, send it to the specified LLM and parse out the output. You might even get results back.
```python
from kor.extraction import Extractor
from kor.llms import OpenAIChatCompletion
from kor.nodes import Object, Text

llm = OpenAIChatCompletion(model="gpt-3.5-turbo")
model = Extractor(llm)

schema = Object(
    id="player",
    description=(
        "User is controlling a music player to select songs, pause or start them, or play"
        " music by a particular artist."
    ),
    attributes=[
        Text(id="song", description="User wants to play this song", examples=[]),
        Text(id="album", description="User wants to play this album", examples=[]),
        Text(
            id="artist",
            description="Music by the given artist",
            examples=[("Songs by paul simon", "paul simon")],
        ),
        Text(
            id="action",
            description="Action to take, one of: `play`, `stop`, `next`, `previous`.",
            examples=[
                ("Please stop the music", "stop"),
                ("play something", "play"),
                ("next song", "next"),
            ],
        ),
    ],
)

model("can you play all the songs from paul simon and led zepplin", schema)
```

```python
{'player': [{'artist': ['paul simon', 'led zepplin']}]}
```
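The output shown above is a plain Python dict keyed by the schema's object `id`. A small hypothetical helper (not part of Kor) for collecting every value of one attribute across the extracted records:

```python
def collect_values(result, object_id, attribute_id):
    """Gather every value extracted for one attribute across all records."""
    values = []
    for record in result.get(object_id, []):
        values.extend(record.get(attribute_id, []))
    return values

result = {'player': [{'artist': ['paul simon', 'led zepplin']}]}
collect_values(result, "player", "artist")  # ['paul simon', 'led zepplin']
```

Because missing keys fall back to empty lists, the helper returns `[]` when an attribute was not extracted rather than raising.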
See the documentation.
At the moment, Kor only works with Python 3.10+.

```shell
pip install kor
```
Some ideas of things that could be done with Kor:
- Extract data from text: define what information should be extracted from a segment of text
- Convert an HTML form into a Kor form and allow the user to fill it out using natural language. (Convert HTML forms -> API? Or not.)
- Add some skills to an AI assistant
This is a prototype and the API is not expected to be stable, as it hasn't been tested against real-world examples. Known limitations:
- Making mistakes! Plenty of them. Quality varies with the underlying language model, the quality of the prompt, and the number of bugs in the adapter code.
- Slow! It uses large prompts with examples, and works best with the larger slower LLMs.
- Crashing on long pieces of text! The context window can become limiting when working with large forms or long text inputs.
- Incorrectly grouping results (see documentation section on objects).
Planned improvements:
- Adding validators
- Built-in components to quickly assemble a schema with examples
- A routing layer to select the appropriate extraction schema when many schemas exist
Why the name Kor? It's fast to type and sufficiently unique.
Probabilistically speaking, this package is unlikely to work for your use case. So here are some great alternatives: