Inquiry about support for custom HTTP Client of Hugging Face API
kv-chiu opened this issue · 7 comments
Background:
Currently, the doRequest function of the HuggingFace LLM makes HTTP requests using http.DefaultClient. While this works for most scenarios, there is a need for more flexibility when it comes to customizing the behavior of the HTTP client.
Request:
I would like to request an enhancement that allows users to specify their own HTTP client when making requests through the library. This feature would provide users with the ability to configure custom settings for the HTTP client, such as timeouts, custom transport options, or any other client-specific configurations.
Proposed Implementation:
One possible implementation approach could involve modifying the doRequest function to accept an http.Client as an argument. This change would allow users to pass their own pre-configured HTTP client when making requests, as follows:
func (h *HuggingFace) doRequest(ctx context.Context, jsonBody []byte, model string, httpClient *http.Client) ([]byte, error) {
	// Use the provided httpClient for making the request.
	// ...
}
By making this modification, users would have the flexibility to tailor the HTTP client to their specific requirements.
I'm trying to use the huggingface library for conversational completion, but I'm encountering an issue when running the following code:
package main

import (
	"context"
	"fmt"

	"hf_api_experiment/proxy"

	"github.com/henomis/lingoose/llm/huggingface"
)

func main() {
	proxy.InitHttpClient() // my custom proxy HTTP client
	llmAI := huggingface.New("meta-llama/Llama-2-70b-chat-hf", 1, false).
		WithToken("***").
		WithHTTPClient(proxy.ProxyHttpClient)

	response, err := llmAI.Completion(context.Background(), "Hello AI. How are you?")
	if err != nil {
		panic(err)
	}
	fmt.Println(response)
}
When running the above code, I encounter the following error message:
panic: huggingface completion error: invalid character 'F' looking for beginning of value
My Environment Information:
- Go Version: 1.20
- Library Version: github.com/henomis/lingoose@main
- Operating System: windows11
This model is a text-generation task, so you should add WithMode(huggingface.HuggingFaceModeTextGeneration) to your code.
Thank you very much, but I am still experiencing the original problem after adding WithMode(huggingface.HuggingFaceModeTextGeneration). However, after I changed the model to gpt2, I got the response I should have gotten, but the verbose variable was not working.
My code:
func main() {
	proxy.InitHttpClient() // my custom proxy HTTP client
	llmAI := huggingface.New("gpt2", 1, false)
	llmAI = llmAI.WithToken("***")
	llmAI = llmAI.WithHTTPClient(proxy.ProxyHttpClient)
	llmAI = llmAI.WithVerbose(false)
	llmAI = llmAI.WithMode(huggingface.HuggingFaceModeTextGeneration)

	response, err := llmAI.Completion(context.Background(), "Hello AI. How are you?")
	if err != nil {
		panic(err)
	}
	fmt.Println(response)
}
Response:
---USER---
Hello AI. How are you?
---AI---
What's your name? When did you first become famous? Where is your place? What do you do? We'll know when you get here.
You'll know more about this conversation in the next episode
What's your name? When did you first become famous? Where is your place? What do you do? We'll know when you get here.
You'll know more about this conversation in the next episode
I'm already on a pro account and I'm not sure if there's a limit to the models I can use. Still, thank you very much. :)