🇨🇴 Spanish version of this document
Small/specialized AI models are an oft-necessary complement—or alternative—to "big AI" offerings. However, infrastructure for small AI tends to be underwhelming, so building with specialized AI can be difficult, time-consuming, and even expensive. Iterating with different models, and particularly with different combinations of these models, can thus become infeasible.
That's why we're here. Welcome to Krixik, where you can easily and swiftly experiment, prototype, and build with sequenced or single-standing small/specialized AI models through secure APIs. The models you leverage through Krixik can be either open source or trained/fine-tuned by you.
Krixik is currently in beta, so access to the Krixik Python client is by request only.
If you'd like to participate as a beta tester, please complete this brief Google form.
Run the following command to install the Krixik Python client:
pip install krixik
Note: Python version 3.8 or higher is required.
To initialize your Krixik client session you will need your unique api_key and api_url secrets. Beta testers will receive their secrets from Krixik admin.
Instead of handling your secrets directly, we strongly recommend storing them in a .env file and loading them via python-dotenv.
Once you have your secrets, initialize your session as follows:
from krixik import krixik
krixik.init(api_key=MY_API_KEY,
api_url=MY_API_URL)
...where MY_API_KEY and MY_API_URL are your account secrets.
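For example, here's a minimal sketch that loads the secrets from a .env file with python-dotenv (the environment variable names used below are illustrative, not required):

import os
from dotenv import load_dotenv
from krixik import krixik

# load secrets from a local .env file into environment variables
load_dotenv()

# initialize your Krixik session with those secrets
krixik.init(api_key=os.getenv("KRIXIK_API_KEY"),
            api_url=os.getenv("KRIXIK_API_URL"))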
If you've misplaced your secrets, please reach out to us directly.
Let's build a simple transcription pipeline consisting of a single transcribe module. We can create the pipeline with a single line of code:
# create a simple transcription pipeline
pipeline = krixik.create_pipeline(name='my_transcribe-pipeline-1',
module_chain=["transcribe"])
The pipeline is ready! Now you can process audio files through it to generate transcripts of them.
pipeline.process(local_file_path='./path/to/my/mp3')
The outputs of this pipeline will be a timestamped transcript of your input audio file, a file_id for the processed file, and a request_id for the process itself.
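A quick sketch of capturing those outputs in code (this assumes the process output behaves like a dictionary keyed by the names above; treat the exact shape as illustrative):

# process an audio file and keep the returned metadata
process_output = pipeline.process(local_file_path='./path/to/my/mp3')

print(process_output["file_id"])     # identifier of the processed file
print(process_output["request_id"])  # identifier of this process request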
Suppose you wanted to perform semantic (a.k.a. vector) search on transcribe module output. You would need to do the following after transcription:
- Transform the transcript into a text file
- Parse the text using a sliding window, chunking it into (possibly overlapping) snippets (see the sketch after this list)
- Embed each snippet using an appropriate text embedder
- Store the resulting vectors in a vector database
- Index said database
- Enable vector (semantic) search on the database
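To make the parsing step concrete, here is a rough sketch of the kind of sliding-window chunking you would otherwise have to write and maintain yourself (plain illustrative Python, not part of Krixik; window and overlap sizes are arbitrary):

# naive sliding-window chunker: split a transcript into overlapping word snippets
def chunk_text(text, window_size=10, overlap=2):
    words = text.split()
    step = window_size - overlap
    chunks = []
    for start in range(0, len(words), step):
        chunks.append(" ".join(words[start:start + window_size]))
        if start + window_size >= len(words):
            break
    return chunks

transcript_text = "your transcript text goes here"  # placeholder transcript
snippets = chunk_text(transcript_text, window_size=3, overlap=1)
# snippets -> ['your transcript text', 'text goes here']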
Locally creating and testing this sequence of steps would be time-consuming—orchestrating them in a secure production service even more so. And that's without trying to make the entire process serverless.
With Krixik, however, you can rapidly incorporate this functionality into your earlier pipeline by just adding a few modules. The syntax remains as above, so making the new pipeline still takes one line:
# create a pipeline with the modules described above
pipeline = krixik.create_pipeline(name='transcribe_vsearch',
module_chain=["transcribe",
"json-to-txt",
"parser",
"text-embedder",
"vector-db"])
Let's process a file through your new pipeline.
pipeline.process(local_file_path='./path/to/my/mp3')
Now that there is at least one file in the pipeline, you can use the file's file_id (which was returned at the end of the above process) to perform semantic search on the associated transcript with the semantic_search method:
pipeline.semantic_search(query="The text you wish to semantically search for goes here",
file_ids=['the_file_id_from_above'])
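Putting the last two calls together, a minimal end-to-end sketch (again assuming, as above, that the process output is a dict-like object keyed by file_id):

# process a file, then semantically search its transcript
process_output = pipeline.process(local_file_path='./path/to/my/mp3')

search_output = pipeline.semantic_search(
    query="The text you wish to semantically search for goes here",
    file_ids=[process_output["file_id"]])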
That's it! You have now transcribed a file, processed the transcript, performed semantic (vector) search on it, and can reuse the pipeline for as many files and queries as you like... all of it in a couple of minutes and with a few lines of code.
Optional: Pull the Krixik Docs Repo
If you wish to follow along with the above example, or with any of the score of other examples we lay out in the documentation, then simply pull the entire Krixik Docs repo.
Doing so will provide you with every file you need, and code will already be configured to run in that directory structure.
The range of examples we've documented for you includes pipelines to:
- ...generate an image caption for a set of images and then perform keyword search on the caption set.
- ...transcribe a trove of audio files, translate them into English, and then run sentiment analysis on each one (sketched just after this list).
- ...easily and serverlessly consume your open-source OCR model of choice.
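As an illustration, the second of those pipelines might be declared along these lines (the "translate" and "sentiment" module names here are assumptions; check the module library in the documentation for the exact identifiers):

# hypothetical sketch: transcribe audio, translate the transcript into English,
# then run sentiment analysis on the result
pipeline = krixik.create_pipeline(name='transcribe_translate_sentiment',
                                  module_chain=["transcribe",
                                                "translate",
                                                "sentiment"])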
This is only the tip of the iceberg. Many more pipelines are currently possible (see here for more examples), and the Krixik module/model library will constantly be expanding—perhaps even to include modules and models of your own submission.
The above is just a peek at the power of Krixik. In addition to all possible parameterization (which we didn't even touch on), the Krixik toolbox is an ever-growing collection of modules and models for you to build with.
If you'd like to learn more, please visit the Krixik Documentation, where we go into further detail.
Excited about Krixik graduating from beta? So are we! We're confident that this product is going to kick a monumental amount of ass, and we'd love to have you on board when it does.
If you wish to be in the loop about launch and other matters (we promise not to spam), please subscribe to occasional correspondence from us HERE.
Thanks for reading, and welcome to Krixik!