Link-AGI/AutoAgents

TypeError: 'async for' requires an object with __aiter__ method, got generator

Opened this issue · 9 comments

Not able to run AutoAgents on a fresh install.

python main.py --mode commandline --llm_api_key sk-xxxxxxxxxxxxxxxxxxxxxxxxxxxxx --idea "Is LK-99 really a room temperature superconducting material?"
2023-10-15 13:37:44.757 | INFO | autoagents.system.config:init:43 - Config loading done.
SerpAPI key:
xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
2023-10-15 13:37:51.314 | INFO | autoagents.explorer:invest:33 - Investment: $10.0.
Traceback (most recent call last):
File "/home/aitoofaan/llms/autoagents/AutoAgents/main.py", line 56, in <module>
asyncio.run(commanline(proxy=proxy, llm_api_key=args.llm_api_key, serpapi_key=args.serpapi_key, idea=args.idea))
File "/usr/lib/python3.10/asyncio/runners.py", line 44, in run
return loop.run_until_complete(main)
File "/usr/lib/python3.10/asyncio/base_events.py", line 649, in run_until_complete
return future.result()
File "/home/aitoofaan/llms/autoagents/AutoAgents/main.py", line 30, in commanline
await startup.startup(idea, investment, n_round, llm_api_key=llm_api_key, serpapi_key=serpapi_key, proxy=proxy)
File "/home/aitoofaan/llms/autoagents/AutoAgents/startup.py", line 14, in startup
await explorer.run(n_round=n_round)
File "/home/aitoofaan/llms/autoagents/AutoAgents/autoagents/explorer.py", line 57, in run
await self.environment.run()
File "/home/aitoofaan/llms/autoagents/AutoAgents/autoagents/environment.py", line 192, in run
await asyncio.gather(*futures)
File "/home/aitoofaan/llms/autoagents/AutoAgents/autoagents/roles/role.py", line 239, in run
rsp = await self._react()
File "/home/aitoofaan/llms/autoagents/AutoAgents/autoagents/roles/role.py", line 207, in _react
await self._think()
File "/home/aitoofaan/llms/autoagents/AutoAgents/autoagents/roles/role.py", line 155, in _think
next_state = await self._llm.aask(prompt)
File "/home/aitoofaan/llms/autoagents/AutoAgents/autoagents/system/provider/base_gpt_api.py", line 42, in aask
rsp = await self.acompletion_text(message, stream=True)
File "/home/aitoofaan/llms/autoagents/AutoAgents/autoagents/system/provider/openai_api.py", line 33, in wrapper
return await f(*args, **kwargs)
File "/home/aitoofaan/llms/autoagents/AutoAgents/autoagents/system/provider/openai_api.py", line 230, in acompletion_text
return await self._achat_completion_stream(messages)
File "/home/aitoofaan/llms/autoagents/AutoAgents/autoagents/system/provider/openai_api.py", line 173, in _achat_completion_stream
async for chunk in response:
TypeError: 'async for' requires an object with __aiter__ method, got generator
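For reference, this TypeError is exactly what Python raises when `async for` is applied to a plain (synchronous) generator, which is presumably what the streaming client returned here after a dependency change. A minimal, self-contained sketch of the failure and a plain-`for` workaround (the names `sync_chunks`, `consume_wrong`, and `consume_right` are illustrative, not from the AutoAgents codebase):

```python
import asyncio

def sync_chunks():
    # A plain synchronous generator -- standing in for what the streaming call returned.
    yield "Hello"
    yield " world"

async def consume_wrong():
    # Reproduces the traceback above: 'async for' needs an object with __aiter__,
    # but a plain generator only has __iter__.
    async for chunk in sync_chunks():
        print(chunk)

async def consume_right():
    # Workaround sketch: iterate the synchronous stream with a plain for loop.
    collected = []
    for chunk in sync_chunks():
        collected.append(chunk)
    return "".join(collected)

try:
    asyncio.run(consume_wrong())
except TypeError as exc:
    print(exc)  # 'async for' requires an object with __aiter__ method, got generator

print(asyncio.run(consume_right()))  # -> Hello world
```

In other words, the fix belongs in `_achat_completion_stream`: either the provider must return a true async iterator, or the loop must iterate synchronously.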

iCGY96 commented

We have just made some updates to the requirements.txt file and you can give it another try. The updated file includes some additional packages that are not essential, and we plan to streamline the requirements.txt file in the future.

OK, so I did a `git pull` and it grabbed the new requirements.txt. When I run the command now, I get the following error...

python main.py --mode commandline --llm_api_key sk-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx --serpapi_key xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx --idea "write a trending and engagning newsletter about ai for b2b enterprises"
2023-10-15 15:41:40.695 | INFO     | autoagents.system.config:__init__:43 - Config loading done.
2023-10-15 15:41:42.193 | INFO     | autoagents.explorer:invest:33 - Investment: $10.0.
Traceback (most recent call last):
  File "/home/aitoofaan/llms/autoagents/lib/python3.10/site-packages/litellm/utils.py", line 3379, in __next__
    chunk = next(self.completion_stream)
  File "/home/aitoofaan/llms/autoagents/lib/python3.10/site-packages/openai/api_resources/abstract/engine_api_resource.py", line 166, in <genexpr>
    return (
  File "/home/aitoofaan/llms/autoagents/lib/python3.10/site-packages/openai/api_requestor.py", line 612, in <genexpr>
    self._interpret_response_line(
  File "/home/aitoofaan/llms/autoagents/lib/python3.10/site-packages/openai/api_requestor.py", line 682, in _interpret_response_line
    raise self.handle_error_response(
openai.error.APIError: The server had an error while processing your request. Sorry about that! (Error occurred while streaming.)

Give Feedback / Get Help: https://github.com/BerriAI/litellm/issues/new
LiteLLM.Info: If you need to debug this error, use `litellm.set_verbose=True'.

2023-10-15 15:41:48.823 | INFO     | autoagents.system.provider.openai_api:update_cost:95 - Total running cost: $0.001 | Max budget: $10.000 | Current cost: $0.001, prompt_tokens=250, completion_tokens=1
2023-10-15 15:41:48.825 | INFO     | autoagents.roles.manager:_act:25 - Ethan(Manager): ready to CreateRoles
## Thought
Based on the given task, we need to create a trending and engaging newsletter about AI for B2B enterprises. To accomplish this, we will need to select existing expert roles and create new expert roles as necessary. We will also need to develop an execution plan to guide the process.

## Question or Task
Write a trending and engaging newsletter about AI for B2B enterprises.

## Selected Roles List:
[]

## Created Roles List:
[]

## Execution Plan:
1. Language Expert: Based on the previous steps, please provide a helpful, relevant, accurate, and detailed response to the user's original question: "Write a trending and engaging newsletter about AI for B2B enterprises."

## RoleFeedback
None

## PlanFeedback
2023-10-15 15:42:13.302 | INFO     | autoagents.system.provider.openai_api:update_cost:95 - Total running cost: $0.007 | Max budget: $10.000 | Current cost: $0.006, prompt_tokens=1854, completion_tokens=153
None

Traceback (most recent call last):
  File "/home/aitoofaan/llms/autoagents/AutoAgents/main.py", line 56, in <module>
    asyncio.run(commanline(proxy=proxy, llm_api_key=args.llm_api_key, serpapi_key=args.serpapi_key, idea=args.idea))
  File "/usr/lib/python3.10/asyncio/runners.py", line 44, in run
    return loop.run_until_complete(main)
  File "/usr/lib/python3.10/asyncio/base_events.py", line 649, in run_until_complete
    return future.result()
  File "/home/aitoofaan/llms/autoagents/AutoAgents/main.py", line 30, in commanline
    await startup.startup(idea, investment, n_round, llm_api_key=llm_api_key, serpapi_key=serpapi_key, proxy=proxy)
  File "/home/aitoofaan/llms/autoagents/AutoAgents/startup.py", line 14, in startup
    await explorer.run(n_round=n_round)
  File "/home/aitoofaan/llms/autoagents/AutoAgents/autoagents/explorer.py", line 57, in run
    await self.environment.run()
  File "/home/aitoofaan/llms/autoagents/AutoAgents/autoagents/environment.py", line 192, in run
    await asyncio.gather(*futures)
  File "/home/aitoofaan/llms/autoagents/AutoAgents/autoagents/roles/role.py", line 239, in run
    rsp = await self._react()
  File "/home/aitoofaan/llms/autoagents/AutoAgents/autoagents/roles/role.py", line 209, in _react
    return await self._act()
  File "/home/aitoofaan/llms/autoagents/AutoAgents/autoagents/roles/manager.py", line 38, in _act
    _suggestions_roles = await self._rc.todo.run(response.content, history=history_roles)
  File "/home/aitoofaan/llms/autoagents/AutoAgents/autoagents/actions/check_roles.py", line 101, in run
    question = re.findall('## Question or Task:([\s\S]*?)##', str(context))[0]
IndexError: list index out of range
iCGY96 commented

Currently, our preferred choice is GPT-4, and we are in the process of adjusting to other models.

If you can, please add support for the following models:

gpt-3.5-turbo
gpt-3.5-turbo-16k
gpt-3.5-turbo-instruct
gpt-3.5-turbo-0613
gpt-3.5-turbo-16k-0613

drnic commented
$ pip3 install -r requirements.txt
...
ERROR: Could not find a version that satisfies the requirement mkl-service==2.4.0 (from versions: none)
ERROR: No matching distribution found for mkl-service==2.4.0

$ pip3 install mkl-service
ERROR: Could not find a version that satisfies the requirement mkl-service (from versions: none)
ERROR: No matching distribution found for mkl-service

Looks like this pip package is Intel-only, and I'm on M2 Apple Silicon.

https://pypi.org/project/mkl-service/
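One way to keep the dependency for Intel machines without breaking Apple Silicon installs would be a PEP 508 environment marker in requirements.txt (a sketch, assuming the project does not actually need MKL at runtime on ARM):

```
# requirements.txt -- hypothetical: only install mkl-service on x86_64,
# so `pip install -r requirements.txt` succeeds on Apple Silicon.
mkl-service==2.4.0; platform_machine == "x86_64"
```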

drnic commented

After recreating my conda env, removing mkl-service from requirements.txt, and running the command again, I seem to get a LiteLLM error:

$ python main.py --mode commandline --idea "Write a poem about the last five Australian prime ministers" --llm_api_key=$OPENAI_API_KEY --serpapi_key=$SERPAPI_API_KEY
2023-10-16 07:08:47.349 | INFO     | autoagents.system.config:__init__:43 - Config loading done.
2023-10-16 07:08:47.936 | INFO     | autoagents.explorer:invest:33 - Investment: $10.0.
Warning: gpt-4 may update over time. Returning num tokens assuming gpt-4-0613.
2023-10-16 07:08:48.708 | INFO     | autoagents.system.provider.openai_api:update_cost:95 - Total running cost: $0.007 | Max budget: $10.000 | Current cost: $0.007, prompt_tokens=245, completion_tokens=1
2023-10-16 07:08:48.709 | INFO     | autoagents.roles.manager:_act:25 - Ethan(Manager): ready to CreateRoles

Give Feedback / Get Help: https://github.com/BerriAI/litellm/issues/new
LiteLLM.Info: If you need to debug this error, use `litellm.set_verbose=True'.


Give Feedback / Get Help: https://github.com/BerriAI/litellm/issues/new
LiteLLM.Info: If you need to debug this error, use `litellm.set_verbose=True'.


Give Feedback / Get Help: https://github.com/BerriAI/litellm/issues/new
LiteLLM.Info: If you need to debug this error, use `litellm.set_verbose=True'.

Is this related to the new commit that drops ROLES_LIST to []? 46ae81c (Update: no, I went back to before that commit and I get the same output as above.)

I set `litellm.set_verbose = True` and I can see that litellm is returning an error: `original_response: "This model's maximum context length is 8192 tokens. However, you requested 9595 tokens (2095 in the messages, 7500 in the completion). Please reduce the length of the messages or completion."`

I tried using gpt-4-32k but apparently I don't have access to it https://help.openai.com/en/articles/7102672-how-can-i-access-gpt-4
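The arithmetic behind that context-length error is worth spelling out: the request asked for a fixed 7500-token completion on top of a 2095-token prompt, overshooting gpt-4's 8192-token window. A small sketch of clamping the completion budget to what the window has left (`clamp_max_tokens` is a hypothetical helper, not AutoAgents code; the token counts are taken from the verbose output above):

```python
CONTEXT_WINDOW = 8192        # gpt-4 (non-32k) context length
PROMPT_TOKENS = 2095         # "2095 in the messages", from the verbose output
REQUESTED_COMPLETION = 7500  # "7500 in the completion"

def clamp_max_tokens(prompt_tokens: int, requested: int, window: int = CONTEXT_WINDOW) -> int:
    # Never request more completion tokens than the window has left after the prompt.
    return max(0, min(requested, window - prompt_tokens))

print(PROMPT_TOKENS + REQUESTED_COMPLETION)                   # -> 9595, as in the error
print(clamp_max_tokens(PROMPT_TOKENS, REQUESTED_COMPLETION))  # -> 6097
```

So rather than needing gpt-4-32k, the provider code could compute `max_tokens` dynamically from the prompt length before each call.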

iCGY96 commented

Currently, our preferred choice is GPT-4, and we are in the process of adjusting to other models.

Okay, we will first adapt to GPT-3.5

iCGY96 commented

After recreating my conda env, removing mkl-service from requirements.txt, and running the command again, I seem to get a LiteLLM error:

$ python main.py --mode commandline --idea "Write a poem about the last five Australian prime ministers" --llm_api_key=$OPENAI_API_KEY --serpapi_key=$SERPAPI_API_KEY
2023-10-16 07:08:47.349 | INFO     | autoagents.system.config:__init__:43 - Config loading done.
2023-10-16 07:08:47.936 | INFO     | autoagents.explorer:invest:33 - Investment: $10.0.
Warning: gpt-4 may update over time. Returning num tokens assuming gpt-4-0613.
2023-10-16 07:08:48.708 | INFO     | autoagents.system.provider.openai_api:update_cost:95 - Total running cost: $0.007 | Max budget: $10.000 | Current cost: $0.007, prompt_tokens=245, completion_tokens=1
2023-10-16 07:08:48.709 | INFO     | autoagents.roles.manager:_act:25 - Ethan(Manager): ready to CreateRoles

Give Feedback / Get Help: https://github.com/BerriAI/litellm/issues/new
LiteLLM.Info: If you need to debug this error, use `litellm.set_verbose=True'.


Give Feedback / Get Help: https://github.com/BerriAI/litellm/issues/new
LiteLLM.Info: If you need to debug this error, use `litellm.set_verbose=True'.


Give Feedback / Get Help: https://github.com/BerriAI/litellm/issues/new
LiteLLM.Info: If you need to debug this error, use `litellm.set_verbose=True'.

Is this related to the new commit that drops ROLES_LIST to []? 46ae81c (Update: no, I went back to before that commit and I get the same output as above.)

I set `litellm.set_verbose = True` and I can see that litellm is returning an error: `original_response: "This model's maximum context length is 8192 tokens. However, you requested 9595 tokens (2095 in the messages, 7500 in the completion). Please reduce the length of the messages or completion."`

I tried using gpt-4-32k but apparently I don't have access to it https://help.openai.com/en/articles/7102672-how-can-i-access-gpt-4

As GPT-3.5 is unstable, we only offer GPT-4 support at the moment. We are in the process of adapting our system to other models. You can try using a GPT-4 key.

drnic commented

I am using GPT-4.

Sorry, I accidentally replied to the wrong thread. I started in #27 but accidentally ended up in #29.