openchatai/OpenChat

Error sending the message (when trying to chat)

gururise opened this issue · 3 comments

DJANGO BACKEND PROBLEM

I am running the latest Django backend locally (VS Code debug). I can log in, ingest a website, and do everything else; however, when I go to chat with the chatbot, I get "Error sending the message".

(screenshot: "Error sending the message" dialog)

In the terminal where the Django server is running, I get this error:

/home/gene/Downloads/OpenChat/dj_backend_server/venv/lib/python3.11/site-packages/langchain/llms/openai.py:801: UserWarning: You are trying to use a chat model. This way of initializing it is no longer supported. Instead, please use: `from langchain.chat_models import ChatOpenAI`
  warnings.warn(
memory=None callbacks=None callback_manager=None verbose=False tags=None metadata=None combine_docs_chain=StuffDocumentsChain(memory=None, callbacks=None, callback_manager=None, verbose=True, tags=None, metadata=None, input_key='input_documents', output_key='output_text', llm_chain=LLMChain(memory=None, callbacks=None, callback_manager=None, verbose=True, tags=None, metadata=None, prompt=PromptTemplate(input_variables=['context', 'question'], output_parser=None, partial_variables={}, template="Use the following pieces of context to answer the question at the end. \n    If you don't know the answer, just say that you don't know, don't try to make up an answer. \n    Use three sentences maximum and keep the answer as concise as possible. \n    {context}\n    Question: {question}\n    Helpful Answer:", template_format='f-string', validate_template=True), llm=OpenAIChat(cache=None, verbose=False, callbacks=None, callback_manager=None, tags=None, metadata=None, client=<class 'openai.api_resources.chat_completion.ChatCompletion'>, model_name='gpt-3.5-turbo', model_kwargs={'temperature': 0.2}, openai_api_key='sk-***REDACTED***', openai_api_base=None, openai_proxy=None, max_retries=6, prefix_messages=[], streaming=False, allowed_special=set(), disallowed_special='all'), output_key='text', output_parser=StrOutputParser(), return_final_only=True, llm_kwargs={}), document_prompt=PromptTemplate(input_variables=['page_content'], output_parser=None, partial_variables={}, template='{page_content}', template_format='f-string', validate_template=True), document_variable_name='context', document_separator='\n\n') question_generator=LLMChain(memory=None, callbacks=None, callback_manager=None, verbose=True, tags=None, metadata=None, prompt=PromptTemplate(input_variables=['chat_history', 'question'], output_parser=None, partial_variables={}, template='Given the following conversation and a follow up question, rephrase the follow up question to be a standalone question, in its original language.\n\nChat History:\n{chat_history}\nFollow Up Input: {question}\nStandalone question:', template_format='f-string', validate_template=True), llm=OpenAIChat(cache=None, verbose=False, callbacks=None, callback_manager=None, tags=None, metadata=None, client=<class 'openai.api_resources.chat_completion.ChatCompletion'>, model_name='gpt-3.5-turbo', model_kwargs={'temperature': 0.2}, openai_api_key='sk-***REDACTED***', openai_api_base=None, openai_proxy=None, max_retries=6, prefix_messages=[], streaming=False, allowed_special=set(), disallowed_special='all'), output_key='text', output_parser=StrOutputParser(), return_final_only=True, llm_kwargs={}) output_key='answer' rephrase_question=True return_source_documents=False return_generated_question=False get_chat_history=None retriever=VectorStoreRetriever(tags=['Qdrant', 'OpenAIEmbeddings'], metadata=None, vectorstore=<langchain.vectorstores.qdrant.Qdrant object at 0x7f979ea2ced0>, search_type='similarity', search_kwargs={}) max_tokens_limit=None


> Entering new StuffDocumentsChain chain...


> Entering new LLMChain chain...
Prompt after formatting:
Use the following pieces of context to answer the question at the end. 
    If you don't know the answer, just say that you don't know, don't try to make up an answer. 
    Use three sentences maximum and keep the answer as concise as possible. 
    to everyone. We're here to provide you with the products and services you need to build your business, and also help redefine what you know about DTF. Information Distributor Sign Up Terms of Service Privacy Policy Printhead Warranty & Return Policy Instagram YouTube © 2023, DTF Station Choosing a selection results in a full page refresh. Opens in a new window.

Distributor Sign Up Terms of Service Privacy Policy Printhead Warranty & Return Policy Instagram YouTube © 2023, DTF Station Choosing a selection results in a full page refresh. Opens in a new window.

components, we strongly advise seeking the expertise of a trained technician to execute the installation accurately. Ensuring proper installation not only safeguards the warranty coverage but also contributes to the optimal performance of the printhead and equipment. Our mission Welcome to DTF Station.DTF Station was created in the hopes to make Direct to Film Printing more accessible to everyone. We're here to provide you with the products and services you need to build your business, and also help redefine what you know about DTF. Information Distributor Sign Up Terms of Service Privacy Policy Printhead Warranty & Return Policy Instagram YouTube © 2023, DTF Station Choosing a selection results in a full page refresh. Opens in a new window.

build your business, and also help redefine what you know about DTF. Information Distributor Sign Up Terms of Service Privacy Policy Printhead Warranty & Return Policy Instagram YouTube © 2023, DTF Station Choosing a selection results in a full page refresh. Opens in a new window.
    Question: hello
    Helpful Answer:
[11/Nov/2023 19:18:06] "GET /widget/data-sources-updates/9d90f84d-6238-4a73-aafb-97f098a79865/ HTTP/1.1" 200 14

> Finished chain.

> Finished chain.
Internal Server Error: /api/chat/
'text'
[11/Nov/2023 19:18:06] "POST /api/chat/ HTTP/1.1" 500 41
Traceback (most recent call last):
  File "/home/gene/Downloads/OpenChat/dj_backend_server/api/views/views_message.py", line 153, in send_chat
    "text": bot_response.get_bot_reply()
            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/gene/Downloads/OpenChat/dj_backend_server/api/views/views_message.py", line 17, in get_bot_reply
    return self.response['text']
           ~~~~~~~~~~~~~^^^^^^^^
KeyError: 'text'
Internal Server Error: /api/chat/send/
[11/Nov/2023 19:18:06] "POST /api/chat/send/ HTTP/1.1" 500 119
[11/Nov/2023 19:18:08] "GET /widget/data-sources-updates/9d90f84d-6238-4a73

When I inspect the response received at line 17 of views_message.py, I see:
response: {'error': 'Unexpected response from API'}
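For what it's worth, the chain dump above shows `output_key='answer'` for the conversational chain, while `get_bot_reply` indexes `self.response['text']`, so any response that lacks a `'text'` key (including the `{'error': ...}` dict) raises `KeyError`. A standalone sketch of more defensive key handling (the function signature here is illustrative, not the repo's actual method):

```python
# Hypothetical defensive reply extraction: tolerate either output key and
# surface backend errors instead of raising a bare KeyError.

def get_bot_reply(response: dict) -> str:
    # retrieval_qa chains emit 'text'; conversation_retrieval emits 'answer'.
    for key in ("text", "answer"):
        if key in response:
            return response[key]
    # Propagate an upstream error payload with a readable message.
    if "error" in response:
        raise RuntimeError(f"Chat backend error: {response['error']}")
    raise KeyError(f"No 'text' or 'answer' key in response: {response!r}")
```

With this, `get_bot_reply({'answer': 'hi'})` returns `'hi'`, and the `{'error': 'Unexpected response from API'}` case produces a meaningful 500 message instead of `KeyError: 'text'`.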

Here is my .env file:

# "azure" | "openai" | llama2
OPENAI_API_TYPE=openai
OPENAI_API_MODEL=gpt-3.5-turbo
OPENAI_API_TEMPERATURE=0.2

# If using azure
# AZURE_OPENAI_API_BASE=
# AZURE_OPENAI_API_KEY=
# AZURE_OPENAI_API_VERSION=2023-03-15-preview
# AZURE_OPENAI_EMBEDDING_MODEL_NAME=
# AZURE_OPENAI_DEPLOYMENT_NAME=
# AZURE_OPENAI_COMPLETION_MODEL=gpt-35-turbo

# For openai
OPENAI_API_KEY=xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx

# "azure" | "openai" | llama2
EMBEDDING_PROVIDER=openai

# Vector Store, PINECONE|QDRANT
STORE=QDRANT

# if using pinecone
# PINECONE_API_KEY=
# PINECONE_ENV=
# VECTOR_STORE_INDEX_NAME=

# if using qdrant
QDRANT_URL=http://localhost:6333

# optional, defaults to 15
MAX_PAGES_CRAWL=15

# --- these will change if you decide to start testing the software
CELERY_BROKER_URL=redis://localhost:6379/0
CELERY_RESULT_BACKEND=redis://localhost:6379/0
DATABASE_NAME=openchat
DATABASE_USER=dbuser
DATABASE_PASSWORD=dbpass
DATABASE_HOST=localhost
DATABASE_PORT=3307

# retrieval_qa | conversation_retrieval, retrieval_qa works better with azure openai

# Add Allowed Hosts here, no quote, just IP or domain, separated by a comma
ALLOWED_HOSTS=localhost,0.0.0.0,127.0.0.1
APP_URL=http://localhost:8000

# use 'external' if you want to use below services.
PDF_LIBRARY = 'internal'

#PDF API - OCRWebService.com (REST API). https://www.ocrwebservice.com/api/restguide
#Extract text from scanned images and PDF documents and convert into editable formats.
#Please create new account with ocrwebservice.com via http://www.ocrwebservice.com/account/signup and get license code
OCR_LICCODE = 'XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXX'
OCR_USERNAME =  'username'
OCR_LANGUAGE = 'english'
# Advantage to clean up the OCR text which can be messy and full with garbage, but will generate a cost with LLM if is paid. Use carefully.
# Use 1 to enable, 0 to disable.
OCR_LLM = 0

# Replace in Chat JS and Search JS english language strings with these (use your own language)
LNG_BOT='Bot is Thinking...'
LNG_ERROR='Error sending the message.'
LNG_WRITE='Ask a question...'
LNG_ASK='Write a reply...'

Can you try the following, @gururise? Save this to a file called testing_api.py and run it with python3 (note the quotes around the key):

import openai

openai.api_key = 'your-key-goes-here'

def is_api_key_valid():
    """Return True if a minimal completion request succeeds with this key."""
    try:
        openai.Completion.create(
            engine="davinci",
            prompt="This is a test.",
            max_tokens=5
        )
    except openai.error.OpenAIError:
        return False
    return True

# Check the validity of the API key
api_key_valid = is_api_key_valid()
print("API key is valid:", api_key_valid)

If this prints the expected output:

python3 testing_api.py 
API key is valid: True

then make sure your API config uses quotes around the key, i.e. your .env.docker file should quote the key:

OPENAI_API_KEY= 'your-key-goes-here'
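One caveat with quoting: depending on how the .env file is parsed, surrounding quotes and whitespace can end up inside the value itself. A small self-contained illustration of normalizing the value before use (the environment-variable handling here is illustrative, not the project's actual loader):

```python
import os

# Simulate a value loaded from a .env file with stray quotes and whitespace.
os.environ["OPENAI_API_KEY"] = " 'sk-your-key-goes-here' "

# Strip whitespace and any surrounding single/double quotes, so the key
# actually sent to the API matches the raw key exactly.
raw = os.environ["OPENAI_API_KEY"]
api_key = raw.strip().strip("'\"")

print(api_key)  # sk-your-key-goes-here
```

If the loader keeps the quotes in the value, the request is sent with a malformed key and fails authentication even though the key itself is valid.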

The issue is not the key. It seems something has changed on OpenAI's side; I have the same error. I'm working to figure out how it can be fixed.

Please get the latest version and try it.