An unofficial Rust client for the OpenAI API.
Run the following Cargo command in your project directory:

```shell
cargo add oaapi
```
or add the following lines to your `Cargo.toml`:

```toml
[dependencies]
oaapi = "0.2.0"
```
Note

You need to enable feature flags to use the corresponding APIs.

Beta version APIs:

Usage:

1. Enable the API feature flags that you want to use, e.g. `chat`.
2. Create a [`crate::Client`] with the API key and other optional settings.
3. Use the client to call the APIs, e.g. [`crate::Client::chat_complete`].
An example of calling the chat completions API with the `chat` feature enabled:

```toml
[dependencies]
oaapi = { version = "0.2.0", features = ["chat"] }
```

and the API key set in the `OPENAI_API_KEY` environment variable:

```shell
OPENAI_API_KEY={your-openai-api-key}
```

is as follows:
```rust
use oaapi::Client;
use oaapi::chat::CompletionsRequestBody;
use oaapi::chat::SystemMessage;
use oaapi::chat::UserMessage;
use oaapi::chat::ChatModel;

#[tokio::main]
async fn main() -> anyhow::Result<()> {
    // 1. Create a client with the API key from the environment variable: "OPENAI_API_KEY",
    let client = Client::from_env()?;
    // or specify the API key directly:
    // let client = Client::new(oaapi::ApiKey::new("OPENAI_API_KEY"), None, None);

    // 2. Create the request body parameters.
    let request_body = CompletionsRequestBody {
        messages: vec![
            SystemMessage::new("Prompt.", None).into(),
            UserMessage::new("Chat message from user.".into(), None).into(),
        ],
        model: ChatModel::Gpt35Turbo,
        ..Default::default()
    };

    // 3. Call the API.
    let response = client
        .chat_complete(request_body)
        .await?;

    // 4. Use the response.
    println!("Result:\n{}", response);

    Ok(())
}
```
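The example above also uses `tokio` and `anyhow` (for the async runtime and the error type). If they are not already in your project, a dependency section along these lines is assumed; the versions shown are illustrative, not pinned by this crate:

```toml
[dependencies]
oaapi = { version = "0.2.0", features = ["chat"] }
# Needed by the example: #[tokio::main] requires the runtime and macros.
tokio = { version = "1", features = ["macros", "rt-multi-thread"] }
# Needed by the example: main returns anyhow::Result<()>.
anyhow = "1"
```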
For more details, see the examples in each feature module's documentation and in the ./examples directory.
See CHANGELOG.
Licensed under either of the Apache License, Version 2.0 or the MIT license at your option.