OmniAI is a flexible AI library that standardizes the APIs of many different AI providers:
All libraries are community maintained.
gem install omniai
gem install omniai-anthropic
gem install omniai-mistral
gem install omniai-google
gem install omniai-openai
OmniAI implements APIs for a number of popular providers by default. A client can be initialized using the specific gem (e.g. omniai-openai for OmniAI::OpenAI). Vendor-specific docs can be found within each repo.
require 'omniai/anthropic'
client = OmniAI::Anthropic::Client.new
require 'omniai/google'
client = OmniAI::Google::Client.new
require 'omniai/mistral'
client = OmniAI::Mistral::Client.new
require 'omniai/openai'
client = OmniAI::OpenAI::Client.new
LocalAI support is offered through OmniAI::OpenAI:
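For example, a sketch assuming a LocalAI instance on its default port 8080 (adjust the host to your deployment):
require 'omniai/openai'
client = OmniAI::OpenAI::Client.new(
  host: 'http://localhost:8080', # assumption: default LocalAI port
  api_key: nil # LocalAI does not require an API key
)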
Ollama support is offered through OmniAI::OpenAI:
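For example, a sketch assuming an Ollama instance on its default port 11434 (adjust the host to your deployment):
require 'omniai/openai'
client = OmniAI::OpenAI::Client.new(
  host: 'http://localhost:11434', # assumption: default Ollama port
  api_key: nil # Ollama does not require an API key
)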
Logging the request / response is configurable by passing a logger into any client:
require 'omniai/openai'
require 'logger'
logger = Logger.new(STDOUT)
client = OmniAI::OpenAI::Client.new(logger:)
[INFO]: POST https://...
[INFO]: 200 OK
...
Timeouts are configurable by passing a timeout option with an integer duration (in seconds) covering the request / response of any API call:
require 'omniai/openai'
client = OmniAI::OpenAI::Client.new(timeout: 8) # i.e. 8 seconds
Timeouts are also configurable by passing a timeout hash with read / write / connect keys:
require 'omniai/openai'
client = OmniAI::OpenAI::Client.new(timeout: {
read: 2, # i.e. 2 seconds
write: 3, # i.e. 3 seconds
connect: 4, # i.e. 4 seconds
})
Clients that support chat (e.g. Anthropic w/ "Claude", Google w/ "Gemini", Mistral w/ "Le Chat", OpenAI w/ "ChatGPT", etc.) generate completions using the following calls:
Generating a completion is as simple as sending in the text:
completion = client.chat('Tell me a joke.')
completion.text # 'Why don't scientists trust atoms? They make up everything!'
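Optional parameters such as the model and temperature may also be passed alongside the prompt (a sketch; the model name is only an example and support varies by provider):
completion = client.chat('Tell me a joke.', model: 'gpt-4o-mini', temperature: 0.7)
completion.text # '...'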
More complex completions are generated using a block w/ various system / user messages:
completion = client.chat do |prompt|
prompt.system 'You are a helpful assistant with an expertise in animals.'
prompt.user do |message|
message.text 'What animals are in the attached photos?'
message.url('https://.../cat.jpeg', "image/jpeg")
message.url('https://.../dog.jpeg', "image/jpeg")
message.file('./hamster.jpeg', "image/jpeg")
end
end
completion.text # 'They are photos of a cat, a dog, and a hamster.'
A real-time stream of messages can be generated by passing in a proc:
stream = proc do |chunk|
print(chunk.text) # '...'
end
client.chat('Tell me a joke.', stream:)
The above code can also be supplied any IO (e.g. a File, $stdout, etc.):
client.chat('Tell me a story', stream: $stdout)
A chat can also be initialized with tools:
tool = OmniAI::Tool.new(
proc { |location:, unit: 'celsius'| "#{rand(20..50)}° #{unit} in #{location}" },
name: 'Weather',
description: 'Lookup the weather in a location',
parameters: OmniAI::Tool::Parameters.new(
properties: {
location: OmniAI::Tool::Property.string(description: 'e.g. Toronto'),
unit: OmniAI::Tool::Property.string(enum: %w[celsius fahrenheit]),
},
required: %i[location]
)
)
completion = client.chat('What is the weather in "London" in celsius and "Paris" in fahrenheit?', tools: [tool])
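The client runs the supplied proc for each tool call requested by the model and folds the results back into the reply, so the final text can be read as usual (the exact wording will vary since the weather above is randomized):
completion.text # e.g. '24° celsius in London and 42° fahrenheit in Paris.'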
Clients that support transcribe (e.g. OpenAI w/ "Whisper") convert recordings to text via the following calls:
transcription = client.transcribe("example.ogg")
transcription.text # '...'
File.open("example.ogg", "rb") do |file|
transcription = client.transcribe(file)
transcription.text # '...'
end
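Depending on the provider, an alternate output format can also be requested. This is a sketch assuming the gem's transcription format constants (e.g. OmniAI::Transcribe::Format::VTT); check the provider docs for supported formats:
transcription = client.transcribe('example.ogg', format: OmniAI::Transcribe::Format::VTT)
transcription.text # 'WEBVTT ...'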
Clients that support speak (e.g. OpenAI w/ "TTS") convert text to recordings via the following calls:
File.open('example.ogg', 'wb') do |file|
client.speak('The quick brown fox jumps over a lazy dog.', voice: 'HAL') do |chunk|
file << chunk
end
end
tempfile = client.speak('The quick brown fox jumps over a lazy dog.', voice: 'HAL')
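# the audio is written to a temporary file; read or copy it before closing / unlinking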
tempfile.close
tempfile.unlink
Clients that support generating embeddings (e.g. OpenAI, Mistral, etc.) convert text to embeddings via the following:
response = client.embed('The quick brown fox jumps over a lazy dog')
response.usage # <OmniAI::Embed::Usage prompt_tokens=5 total_tokens=5>
response.embedding # [0.1, 0.2, ...]
Batches of text can also be converted to embeddings via the following:
response = client.embed([
'The quick brown fox jumps over a lazy dog.',
'A lazy dog sleeps under the warm summer sun.',
])
response.usage # <OmniAI::Embed::Usage prompt_tokens=5 total_tokens=5>
response.embeddings.each do |embedding|
embedding # [0.1, 0.2, ...]
end
OmniAI packages a basic command line interface (CLI) to allow for exploration of the various APIs. Detailed CLI documentation can be found via help:
omniai --help
omniai chat "What is the coldest place on earth?"
The coldest place on earth is Antarctica.
omniai chat --provider="openai" --model="gpt-4" --temperature="0.5"
Type 'exit' or 'quit' to abort.
# What is the warmest place on earth?
The warmest place on earth is Africa.
omniai embed "The quick brown fox jumps over a lazy dog."
0.0
...
omniai embed --provider="openai" --model="text-embedding-ada-002"
Type 'exit' or 'quit' to abort.
# The quick brown fox jumps over a lazy dog.
0.0
...