dustinblackman/oatmeal

Support perplexity

aemonge opened this issue · 5 comments

oatmeal --open-ai-token=$(cat ~/.ssh/perplexity-token ) --open-ai-url="https://api.perplexity.ai" --backend=openai

╭Oatmeal───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│ Hey, it looks like backend openai isn't running, I can't connect to it. You should double check that before we start talking, otherwise I may crash. │
│                                                                                                                                                      │
│ Error: OpenAI health check failed                                                                                                                    │
╰──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

First time I'm hearing about Perplexity! Do they claim to offer an OpenAI-compatible API? Their docs look very empty, and it doesn't look like they support response streaming.

Yes, it's very new to the market. But so far it has been a better experience than GPT-4, since you can easily switch from one LLM backend to another.

I'm unaware of any such claim; as you mentioned, the docs are pretty sparse.

This is all the documentation I could find. It looks like it is compatible with the OpenAI API, including streaming (I haven't tested it yet):
https://docs.perplexity.ai/reference/post_chat_completions

It looks like it fails for two reasons:

  • The error you are getting comes from the health check. As with OpenAI, there is no health-check endpoint on the root URL, so the current code simply returns true when the URL equals https://api.openai.com. Since this is a different URL, the health check fails.
  • Even if the health check passed, requests would still fail, because the OpenAI backend appends /v1/ between the base URL and /chat/completions, which Perplexity does not expect.
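A minimal sketch of the two failure points described above (function names and the exact string comparison are illustrative, not oatmeal's actual code):

```rust
// Hypothetical reconstruction of the two failure modes; not oatmeal's real source.
fn health_check_passes(base_url: &str) -> bool {
    // Assumed behavior: only the official endpoint is treated as healthy, so
    // any other base URL, including https://api.perplexity.ai, fails the check.
    base_url == "https://api.openai.com"
}

fn completions_url(base_url: &str) -> String {
    // The backend unconditionally inserts /v1/ before /chat/completions,
    // but Perplexity serves /chat/completions directly on its base URL.
    format!("{base_url}/v1/chat/completions")
}

fn main() {
    // The Perplexity base URL fails the health check...
    assert!(!health_check_passes("https://api.perplexity.ai"));
    // ...and even if it passed, the constructed path would carry a spurious /v1/.
    println!("{}", completions_url("https://api.perplexity.ai"));
}
```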

The draft PR I have open for GitHub Copilot Chat shares pretty much the same code as OpenAI (and Perplexity, by the looks of it) in the get_completions method; only the headers change (which are very important). It could be appropriate to refactor this code into a single openai_compatible_completions method of sorts and have each backend call it with its respective URL, headers, and data.
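The proposed shape of that shared helper could look roughly like this. Everything here is hypothetical (the RequestConfig struct, field names, and the helper signature are invented for illustration); a real implementation would issue a streaming HTTP request rather than build a string:

```rust
// Hypothetical sketch of the proposed openai_compatible_completions refactor.
// Each backend would only supply its own URL, headers, and body.
struct RequestConfig {
    url: String,
    headers: Vec<(String, String)>,
}

fn openai_compatible_request(cfg: &RequestConfig, body: &str) -> String {
    // Stand-in for the actual HTTP call: render the request so the per-backend
    // differences (URL and headers) are visible.
    let mut out = format!("POST {}\n", cfg.url);
    for (key, value) in &cfg.headers {
        out.push_str(&format!("{key}: {value}\n"));
    }
    out.push('\n');
    out.push_str(body);
    out
}

fn main() {
    // A Perplexity backend would pass its own base URL (no /v1/) and auth header.
    let perplexity = RequestConfig {
        url: "https://api.perplexity.ai/chat/completions".to_string(),
        headers: vec![("Authorization".to_string(), "Bearer <token>".to_string())],
    };
    let req = openai_compatible_request(&perplexity, r#"{"model":"<model>","stream":true}"#);
    println!("{req}");
}
```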

It could be appropriate to refactor this code into a single openai_compatible_completions method of sorts and have each backend call it with its respective URL, headers, and data.

This has been my problem as I've looked at more OpenAI proxies and compatible APIs. They claim to be compatible, but there are little nuances that are enough of a pain that attempting to create a helper function will eventually just blow up into a set of complex conditions to set up the request. I'm more open to this: if they are similar, copy/paste the OpenAI backend file, rename functions appropriately, and make the changes needed to work with the new API.

If you know of anyone at Perplexity who'd be open to providing a free account for a short period of time, I'd be happy to implement this. Otherwise I love PRs! :D

Ohh, I see.

Claiming compatibility and actually delivering it, that gap is common in the industry, hehe.

I'll keep my ears open in case I can find anyone from Perplexity, but I'll focus on a PR when I get some extra time. Thanks!