rlancemartin/auto-evaluator

Is there a limit on the PDF size? A 200 KB PDF reported an error

codehelen opened this issue · 1 comment

```
Traceback (most recent call last):
  File "/Users/helen/code/xiaomi/auto-evaluator/venv/lib/python3.9/site-packages/streamlit/runtime/scriptrunner/script_runner.py", line 565, in _run_script
    exec(code, module.__dict__)
  File "/Users/helen/code/xiaomi/auto-evaluator/auto-evaluator.py", line 430, in <module>
    graded_answers, graded_retrieval, latency, predictions = run_evaluation(qa_chain, retriever, eval_set, grade_prompt,
  File "/Users/helen/code/xiaomi/auto-evaluator/auto-evaluator.py", line 342, in run_evaluation
    retrieval_grade = grade_model_retrieval(gt_dataset, retrieved_docs, grade_prompt)
  File "/Users/helen/code/xiaomi/auto-evaluator/auto-evaluator.py", line 277, in grade_model_retrieval
    graded_outputs = eval_chain.evaluate(
  File "/Users/helen/code/xiaomi/auto-evaluator/venv/lib/python3.9/site-packages/langchain/evaluation/qa/eval_chain.py", line 60, in evaluate
    return self.apply(inputs)
  File "/Users/helen/code/xiaomi/auto-evaluator/venv/lib/python3.9/site-packages/langchain/chains/llm.py", line 118, in apply
    response = self.generate(input_list)
  File "/Users/helen/code/xiaomi/auto-evaluator/venv/lib/python3.9/site-packages/langchain/chains/llm.py", line 62, in generate
    return self.llm.generate_prompt(prompts, stop)
  File "/Users/helen/code/xiaomi/auto-evaluator/venv/lib/python3.9/site-packages/langchain/chat_models/base.py", line 82, in generate_prompt
    raise e
  File "/Users/helen/code/xiaomi/auto-evaluator/venv/lib/python3.9/site-packages/langchain/chat_models/base.py", line 79, in generate_prompt
    output = self.generate(prompt_messages, stop=stop)
  File "/Users/helen/code/xiaomi/auto-evaluator/venv/lib/python3.9/site-packages/langchain/chat_models/base.py", line 54, in generate
    results = [self._generate(m, stop=stop) for m in messages]
  File "/Users/helen/code/xiaomi/auto-evaluator/venv/lib/python3.9/site-packages/langchain/chat_models/base.py", line 54, in <listcomp>
    results = [self._generate(m, stop=stop) for m in messages]
  File "/Users/helen/code/xiaomi/auto-evaluator/venv/lib/python3.9/site-packages/langchain/chat_models/openai.py", line 266, in _generate
    response = self.completion_with_retry(messages=message_dicts, **params)
  File "/Users/helen/code/xiaomi/auto-evaluator/venv/lib/python3.9/site-packages/langchain/chat_models/openai.py", line 228, in completion_with_retry
    return _completion_with_retry(**kwargs)
  File "/Users/helen/code/xiaomi/auto-evaluator/venv/lib/python3.9/site-packages/tenacity/__init__.py", line 289, in wrapped_f
    return self(f, *args, **kw)
  File "/Users/helen/code/xiaomi/auto-evaluator/venv/lib/python3.9/site-packages/tenacity/__init__.py", line 379, in __call__
    do = self.iter(retry_state=retry_state)
  File "/Users/helen/code/xiaomi/auto-evaluator/venv/lib/python3.9/site-packages/tenacity/__init__.py", line 314, in iter
    return fut.result()
  File "/usr/local/Cellar/python@3.9/3.9.12/Frameworks/Python.framework/Versions/3.9/lib/python3.9/concurrent/futures/_base.py", line 439, in result
    return self.__get_result()
  File "/usr/local/Cellar/python@3.9/3.9.12/Frameworks/Python.framework/Versions/3.9/lib/python3.9/concurrent/futures/_base.py", line 391, in __get_result
    raise self._exception
  File "/Users/helen/code/xiaomi/auto-evaluator/venv/lib/python3.9/site-packages/tenacity/__init__.py", line 382, in __call__
    result = fn(*args, **kwargs)
  File "/Users/helen/code/xiaomi/auto-evaluator/venv/lib/python3.9/site-packages/langchain/chat_models/openai.py", line 226, in _completion_with_retry
    return self.client.create(**kwargs)
  File "/Users/helen/code/xiaomi/auto-evaluator/venv/lib/python3.9/site-packages/openai/api_resources/chat_completion.py", line 25, in create
    return super().create(*args, **kwargs)
  File "/Users/helen/code/xiaomi/auto-evaluator/venv/lib/python3.9/site-packages/openai/api_resources/abstract/engine_api_resource.py", line 153, in create
    response, _, api_key = requestor.request(
  File "/Users/helen/code/xiaomi/auto-evaluator/venv/lib/python3.9/site-packages/openai/api_requestor.py", line 226, in request
    resp, got_stream = self._interpret_response(result, stream)
  File "/Users/helen/code/xiaomi/auto-evaluator/venv/lib/python3.9/site-packages/openai/api_requestor.py", line 619, in _interpret_response
    self._interpret_response_line(
  File "/Users/helen/code/xiaomi/auto-evaluator/venv/lib/python3.9/site-packages/openai/api_requestor.py", line 679, in _interpret_response_line
    raise self.handle_error_response(
openai.error.InvalidRequestError: This model's maximum context length is 4097 tokens. However, your messages resulted in 4280 tokens. Please reduce the length of the messages.
```

This isn't a PDF size limit — it's a problem with the model's context window.

GPT-3.5 currently allows a context window of only 4,097 tokens, so the grading prompt plus the retrieved chunks exceeded the limit:

```
openai.error.InvalidRequestError: This model's maximum context length is 4097 tokens. However, your messages resulted in 4280 tokens. Please reduce the length of the messages.
```
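One workaround is to cap how much retrieved context gets stuffed into the grading prompt. A minimal sketch below (the function name and the ~4 characters/token heuristic are my own, not part of auto-evaluator; for exact counts you'd use a tokenizer such as tiktoken):

```python
def truncate_docs(docs, max_tokens=3000, chars_per_token=4):
    """Keep whole documents, in order, until an approximate token budget is hit.

    Leaves headroom below the 4,097-token limit for the prompt template
    and the model's answer. Uses the rough ~4 chars/token heuristic.
    """
    budget = max_tokens * chars_per_token  # budget expressed in characters
    kept, used = [], 0
    for doc in docs:
        if used + len(doc) > budget:
            break  # next doc would overflow the budget; stop here
        kept.append(doc)
        used += len(doc)
    return kept

# Three 5,000-character docs against a ~12,000-character budget:
docs = ["a" * 5000, "b" * 5000, "c" * 5000]
print(len(truncate_docs(docs, max_tokens=3000)))  # → 2
```

Reducing the chunk size or the number of retrieved chunks in the UI has the same effect, since fewer/smaller chunks means fewer tokens in the grading prompt.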