opea-project/GenAIExamples

[Bug] Wrong TGI endpoint for ChatQnA testing

Closed · 1 comment

Priority

P3-Medium

OS type

Ubuntu

Hardware type

Xeon-SPR

Installation method

  • Pull docker images from hub.docker.com
  • Build docker images from source

Deploy method

  • Docker compose
  • Docker
  • Kubernetes
  • Helm

Running nodes

Single Node

What's the version?

1.0

Description

The TGI testing instruction in the ChatQnA README below is not aligned with our LLM text-generation component.
https://github.com/opea-project/GenAIExamples/tree/main/ChatQnA/docker_compose/intel/hpu/gaudi

LLM GenAI component implementation:
https://github.com/opea-project/GenAIComps/blob/main/comps/llms/text-generation/tgi/llm.py#L41

Reproduce steps

Just follow the README. Testing still appears to work because TGI also exposes a generate endpoint, but our microservice uses v1/completion instead.
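To make the mismatch concrete, here is a minimal sketch of the two request shapes involved. The host, port, prompt, and parameter values are hypothetical placeholders, not taken from the README; the point is only the difference between TGI's native generate route (what the README's curl example exercises) and the OpenAI-style completion route (what the issue says the microservice actually calls).

```python
# Hypothetical TGI service address -- adjust to your deployment.
TGI_HOST = "http://localhost:8008"

# Shape of a request to TGI's native generate endpoint, the kind of
# call the README's testing instruction demonstrates.
native_request = {
    "url": f"{TGI_HOST}/generate",
    "json": {
        "inputs": "What is Deep Learning?",          # placeholder prompt
        "parameters": {"max_new_tokens": 17},        # placeholder params
    },
}

# Shape of a request to the OpenAI-compatible completion route, which
# (per this issue) is what the OPEA LLM microservice targets instead.
openai_style_request = {
    "url": f"{TGI_HOST}/v1/completions",
    "json": {
        "model": "tgi",                              # placeholder model name
        "prompt": "What is Deep Learning?",
        "max_tokens": 17,
    },
}

# The README test hits one path while the microservice depends on the
# other, so a passing README curl does not validate the real code path.
print(native_request["url"])
print(openai_style_request["url"])
```

Both requests can succeed against the same TGI container, which is why the README test "works" while still exercising the wrong endpoint.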

Raw log

No response

Fixed with commit: intel-ai-tce@aa314f6