How can I use multiple OpenAI models in the same application?
Camelxxl opened this issue · 10 comments
I want to have one simple AI service with gpt-4o and another with gpt-4, but I can't get it running.
I tried something like:
quarkus.langchain4j.m1.chat-model.provider=openai
quarkus.langchain4j.openai.m1.chat-model.model-name=gpt-4o
quarkus.langchain4j.openai.m1.chat-model.temperature=0.5
quarkus.langchain4j.openai.m1.chat-model.max-tokens=2300
quarkus.langchain4j.openai.m1.timeout=60s
quarkus.langchain4j.openai.m1.log-requests=true
quarkus.langchain4j.openai.m1.log-responses=true
quarkus.langchain4j.m2.chat-model.provider=openai
quarkus.langchain4j.openai.m2.chat-model.model-name=gpt-4
quarkus.langchain4j.openai.m2.chat-model.temperature=0.5
quarkus.langchain4j.openai.m2.chat-model.max-tokens=2300
quarkus.langchain4j.openai.timeout=60s
quarkus.langchain4j.openai.log-requests=true
quarkus.langchain4j.openai.log-responses=true
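[Editor's note: the last three lines above use the default (unnamed) scope rather than the m2 scope, which would leave m2 on the default timeout. A variant that scopes everything to m2, following the same per-model property layout used for m1, might look like this (a sketch only, not verified against the extension's config reference):]

```properties
# Hypothetical m2-scoped variants of the unnamed-scope lines above
quarkus.langchain4j.openai.m2.timeout=60s
quarkus.langchain4j.openai.m2.log-requests=true
quarkus.langchain4j.openai.m2.log-responses=true
```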
but m2 never works (and yes, I configured the API key).
I also tried changing the model via configuration changes at runtime, but that doesn't work either.
How can I accomplish that?
but m2 is never working
What do you mean exactly? How are you using it?
I am using it the same way I use m1 and get an error:
I have two services with
@RegisterAiService(modelName="m1")
@RegisterAiService(modelName="m2")
with the exact same methods.
The POST request to OpenAI is logged, and then I get the following error:
2024-06-21 15:54:40,401 ERROR [io.qua.lan.run.ais.AiServiceMethodImplementationSupport] (ForkJoinPool.commonPool-worker-1) Execution method failed: java.lang.RuntimeException: jakarta.ws.rs.ProcessingException: The timeout period of 10000ms has been exceeded while executing POST /v1/chat/completions for server null
Can you please attach a sample application I can try next week?
application.properties:
quarkus.langchain4j.openai.m1.api-key=sk-....
quarkus.langchain4j.openai.m2.api-key=sk-....
quarkus.langchain4j.m1.chat-model.provider=openai
quarkus.langchain4j.openai.m1.chat-model.model-name=gpt-4o
quarkus.langchain4j.openai.m1.chat-model.temperature=0.5
quarkus.langchain4j.openai.m1.chat-model.max-tokens=2300
quarkus.langchain4j.openai.m1.timeout=60s
quarkus.langchain4j.openai.m1.log-requests=true
quarkus.langchain4j.openai.m1.log-responses=true
quarkus.langchain4j.m2.chat-model.provider=openai
quarkus.langchain4j.openai.m2.chat-model.model-name=gpt-4
quarkus.langchain4j.openai.m2.chat-model.temperature=0.5
quarkus.langchain4j.openai.m2.chat-model.max-tokens=2300
quarkus.langchain4j.openai.timeout=60s
quarkus.langchain4j.openai.log-requests=true
quarkus.langchain4j.openai.log-responses=true
@RegisterAiService(modelName="m1")
public interface AiServiceM1 {
@SystemMessage("you are a chatbot")
@UserMessage("""
Data:
{data}
""")
String createAnswer(String prompt);
}
@RegisterAiService(modelName="m2")
public interface AiServiceM2 {
@SystemMessage("you are a chatbot")
@UserMessage("""
Data:
{data}
""")
String createAnswer(String prompt);
}
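[Editor's note: for completeness, a minimal sketch of using both services side by side. The resource class, path, and prompt strings here are hypothetical illustrations, not from the original report; only AiServiceM1, AiServiceM2, and createAnswer come from the snippets above.]

```java
import jakarta.inject.Inject;
import jakarta.ws.rs.GET;
import jakarta.ws.rs.Path;

@Path("/compare")
public class CompareResource {

    // Each injected service is backed by the model configured for the
    // modelName given in its @RegisterAiService annotation ("m1" / "m2").
    @Inject
    AiServiceM1 m1;

    @Inject
    AiServiceM2 m2;

    @GET
    public String compare() {
        // Hypothetical prompts; each call should go out with its own
        // configured model-name (gpt-4o for m1, gpt-4 for m2).
        String a = m1.createAnswer("Hello from m1");
        String b = m2.createAnswer("Hello from m2");
        return a + "\n---\n" + b;
    }
}
```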
and then just try to use both; one will work, the other will not.
Thanks, I'll give it a shot on Tuesday.
I can't reproduce the problem.
Can you use both models without a problem?
Yes
Can you provide your example? That would be very helpful for me.
I deleted it, but it essentially did what you are doing