PolyAI

iOS 15+ · MIT license · Swift Package Manager

An open-source Swift package that simplifies LLM message completions, inspired by liteLLM and adapted for Swift developers, following Swift conventions.

Description

Call different LLM APIs using the OpenAI format; OpenAI, Anthropic, and Gemini are currently supported.

You can also call any local model through Ollama's OpenAI compatibility endpoints, using models such as llama3 or Mistral.

Installation

Swift Package Manager

  1. Open your Swift project in Xcode.
  2. Go to File -> Add Package Dependency.
  3. In the search bar, enter the URL of this repository.
  4. Choose the version you'd like to install.
  5. Click Add Package.
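
If you are adding PolyAI to another Swift package rather than an Xcode project, you can instead declare the dependency in your Package.swift. A minimal sketch; the URL and version below are placeholders, so substitute the repository URL and the release you chose above:

// swift-tools-version:5.9
import PackageDescription

let package = Package(
    name: "MyApp",
    platforms: [.iOS(.v15)],
    dependencies: [
        // Placeholder URL and version; point these at the PolyAI repository and the release you want.
        .package(url: "https://github.com/<owner>/PolyAI.git", from: "1.0.0")
    ],
    targets: [
        .target(name: "MyApp", dependencies: ["PolyAI"])
    ]
)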

Important

⚠️ Please take precautions to keep your API keys secure.

Remember that your API keys are a secret! Do not share them with others or expose them in any client-side code (browsers, apps). Production requests must be routed through your backend server, where your API keys can be securely loaded from an environment variable or key management service.

Functionalities

  • Chat completions
  • Chat completions with stream
  • Tool use
  • Image as input

Usage

To interface with different LLMs, you need only supply the corresponding LLM configuration and adjust the parameters accordingly.

First, import the PolyAI package:

import PolyAI

Then, define the LLM configurations. OpenAI, Anthropic, and Gemini are currently supported. You can also use Ollama, or any provider that serves local models through OpenAI-compatible endpoints, to run models such as llama3 or Mistral.

let openAIConfiguration: LLMConfiguration = .openAI(.api(key: "your_openai_api_key_here"))
let anthropicConfiguration: LLMConfiguration = .anthropic(apiKey: "your_anthropic_api_key_here")
let geminiConfiguration: LLMConfiguration = .gemini(apiKey: "your_gemini_api_key_here")
let ollamaConfiguration: LLMConfiguration = .ollama(url: "http://localhost:11434")

let configurations = [openAIConfiguration, anthropicConfiguration, geminiConfiguration, ollamaConfiguration]

With the configurations set, initialize the service:

let service = PolyAIServiceFactory.serviceWith(configurations)

Now you have access to OpenAI, Anthropic, Gemini, llama3, and Mistral models in a single package. 🚀

Message

To send a message using OpenAI:

let prompt = "How are you today?"
let parameters: LLMParameter = .openAI(model: .gpt4turbo, messages: [.init(role: .user, content: prompt)])
let stream = try await service.streamMessage(parameters)
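
The call returns an asynchronous stream that you can iterate with for try await. A minimal sketch of consuming it, assuming each chunk exposes an optional content string carrying the newest text delta (the exact property name may differ in your version of PolyAI):

var message = ""
for try await result in stream {
   // Append each streamed delta as it arrives; chunks without new text are skipped.
   if let content = result.content {
      message += content
   }
}
print(message)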

To interact with Anthropic instead, all you need to do is change just one line of code! 🔥

let prompt = "How are you today?"
let parameters: LLMParameter = .anthropic(model: .claude3Sonnet, messages: [.init(role: .user, content: prompt)], maxTokens: 1024)
let stream = try await service.streamMessage(parameters)

To interact with Gemini instead, all you need to do (again) is change just one line of code! 🔥

let prompt = "How are you today?"
let parameters: LLMParameter = .gemini(model: "gemini-1.5-pro-latest", messages: [.init(role: .user, content: prompt)], maxTokens: 2000)
let stream = try await service.streamMessage(parameters)

To interact with local models using Ollama, all you need to do (again) is change just one line of code! 🔥

let prompt = "How are you today?"
let parameters: LLMParameter = .ollama(model: "llama3", messages: [.init(role: .user, content: prompt)], maxTokens: 2000)
let stream = try await service.streamMessage(parameters)
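
The examples above stream the response. For a single, non-streaming chat completion (the first item under Functionalities), the same parameters can be reused. A minimal sketch, assuming the service exposes a non-streaming createMessage counterpart to streamMessage; check the package source for the exact method and response type:

let prompt = "How are you today?"
let parameters: LLMParameter = .openAI(model: .gpt4turbo, messages: [.init(role: .user, content: prompt)])
// One-shot request; the returned message contains the generated text.
let message = try await service.createMessage(parameters)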

OpenAI Azure

To access the OpenAI API via Azure, you can use the following configuration setup.

let azureConfiguration: LLMConfiguration = .openAI(.azure(configuration: .init(resourceName: "YOUR_RESOURCE_NAME", openAIAPIKey: .apiKey("YOUR_API_KEY"), apiVersion: "THE_API_VERSION")))

More information can be found here.

OpenAI AIProxy

To access the OpenAI API via AIProxy, use the following configuration setup.

let aiProxyConfiguration: LLMConfiguration = .openAI(.aiProxy(aiproxyPartialKey: "hardcode_partial_key_here", aiproxyDeviceCheckBypass: "hardcode_device_check_bypass_here"))

More information can be found here.

Ollama

To interact with local models using Ollama OpenAI compatibility endpoints, use the following configuration setup.

1 - Download Ollama if you don't have it installed already.
2 - Download the model you need; e.g., for llama3, type in the terminal:

ollama pull llama3

Once you have the model installed locally, you are ready to use PolyAI!

let ollamaConfiguration: LLMConfiguration = .ollama(url: "http://localhost:11434")

More information can be found here.

Collaboration

Open a PR for any proposed change, pointing it to the main branch.