enricoros/big-AGI

Anthropic API


It is similar to the OpenAI API:

        // append the new user turn in Claude's "Human: ... Assistant:" prompt format
        conversations = conversations.concat(`Human: ${text}\n\nAssistant: `);
        const response: ClaudeResponse = await fetcher(URL.Completion, {
            method: 'POST',
            headers: {
                'content-type': 'application/json',
                'x-api-key': this.key,
            },
            body: {
                prompt: conversations,
                model: options?.model ?? 'claude-v1',
                max_tokens_to_sample: options?.max_tokens_to_sample ?? 512,
                stream: options?.stream,
                stop_sequences: options?.stop_sequences,  // note: the API expects "stop_sequences" (plural)
                temperature: options?.temperature,
                top_k: options?.top_k,
                top_p: options?.top_p,
            },
        });
        if (!response) throw new Error('Error parsing response body');
        // append the model's reply and persist the conversation to disk
        conversations = conversations + response.completion + '\n\n';
        await fsAsync.writeFile(`data/${conversationId}.txt`, conversations, { encoding: 'utf-8' });
        return {
            response: response.completion,
            conversationId: conversationId,
        };
    }
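
For reference, the snippet above relies on a few pieces that are not shown (`ClaudeResponse`, `fetcher`, `URL.Completion`). Here is a minimal sketch of what they could look like against the `/v1/complete` endpoint described in the docs; the names come from the snippet, everything else is an assumption:

    // Hypothetical helpers assumed by the snippet above (not from the repo).
    interface ClaudeResponse {
        completion: string;          // the generated text
        stop_reason: string | null;  // why generation stopped
        model: string;
    }

    const URL = {
        // legacy text-completions endpoint
        Completion: 'https://api.anthropic.com/v1/complete',
    };

    // Thin wrapper around fetch that posts JSON and parses the JSON reply.
    async function fetcher(
        url: string,
        init: { method: string; headers: Record<string, string>; body: object },
    ): Promise<ClaudeResponse> {
        const res = await fetch(url, {
            method: init.method,
            headers: init.headers,
            body: JSON.stringify(init.body),
        });
        if (!res.ok) throw new Error(`Anthropic API error: ${res.status}`);
        return await res.json() as ClaudeResponse;
    }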

I have access to anthropic.com and I have code we can test with. Please let me know what you think.

https://console.anthropic.com/docs

We should absolutely support multiple models from multiple vendors in the UI.
Thanks for the request, this is definitely a high priority feature.
I don't have a way to test it, though, so writing the fetcher (especially the streaming part, to show the text as it arrives) will take a while. I've just applied for access.
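
For the streaming part, here is a rough sketch of how the text could be shown as it arrives, assuming `stream: true` on the legacy `/v1/complete` endpoint and a server-sent-events response (untested; the exact event shape can differ between API versions):

    // Rough, untested sketch of streaming Claude completions over SSE.
    async function streamCompletion(apiKey: string, prompt: string, onText: (text: string) => void): Promise<void> {
        const res = await fetch('https://api.anthropic.com/v1/complete', {
            method: 'POST',
            headers: {
                'content-type': 'application/json',
                'x-api-key': apiKey,
            },
            body: JSON.stringify({
                prompt,
                model: 'claude-v1',
                max_tokens_to_sample: 512,
                stream: true,
            }),
        });
        if (!res.ok || !res.body) throw new Error(`Anthropic API error: ${res.status}`);

        const reader = res.body.getReader();
        const decoder = new TextDecoder();
        let buffer = '';
        while (true) {
            const { done, value } = await reader.read();
            if (done) break;
            buffer += decoder.decode(value, { stream: true });
            // SSE events are separated by blank lines; payload lines start with "data:".
            const events = buffer.split('\n\n');
            buffer = events.pop() ?? '';
            for (const event of events) {
                for (const line of event.split('\n')) {
                    if (!line.startsWith('data:')) continue;
                    const data = line.slice(5).trim();
                    if (data === '[DONE]') return;
                    // Depending on the API version, `completion` may be the full text so
                    // far or just the latest delta; forward it and let the caller decide.
                    const parsed = JSON.parse(data) as { completion?: string };
                    if (parsed.completion) onText(parsed.completion);
                }
            }
        }
    }

The UI would then call something like `streamCompletion(key, '\n\nHuman: Hello\n\nAssistant:', chunk => updateMessage(chunk))`, where `updateMessage` stands in for whatever re-renders the chat bubble.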

Closed in 9666e58