INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 403 Forbidden"
avolcoff opened this issue · 4 comments
Expected Behavior
I expect the gpte command to run successfully and generate code based on the provided prompt without any errors.
Current Behavior
Running the gpte command results in an HTTP 403 Forbidden error, indicating that the specified project does not have access to the gpt-4o model. This happens even though my .env sets MODEL_NAME=gpt-3.5-turbo-16k.
Failure Information
The error occurs when trying to invoke the OpenAI API for generating code. The environment is a Windows machine, and the project setup details are as follows:
OS: Windows 10
Python Version: 3.10
GPT Engineer Version: stable
Command Run: gpte .\HelloWorld\
Failure Logs
Running gpt-engineer in C:\Users\avolc\OneDrive\Desktop\GPT-Engineer\HelloWorld
Using prompt from file: prompt
create an hello world app in python
INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 403 Forbidden"
Traceback (most recent call last):
File "C:\Program Files\WindowsApps\PythonSoftwareFoundation.Python.3.10_3.10.3056.0_x64__qbz5n2kfra8p0\lib\runpy.py", line 196, in _run_module_as_main
return _run_code(code, main_globals, None,
File "C:\Program Files\WindowsApps\PythonSoftwareFoundation.Python.3.10_3.10.3056.0_x64__qbz5n2kfra8p0\lib\runpy.py", line 86, in _run_code
exec(code, run_globals)
File "C:\Users\avolc\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\LocalCache\local-packages\Python310\Scripts\gpte.exe_main_.py", line 7, in
sys.exit(app())
File "C:\Users\avolc\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\LocalCache\local-packages\Python310\site-packages\gpt_engineer\applications\cli\main.py", line 480, in main
files_dict = agent.init(prompt)
File "C:\Users\avolc\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\LocalCache\local-packages\Python310\site-packages\gpt_engineer\applications\cli\cli_agent.py", line 166, in init
files_dict = self.code_gen_fn(
File "C:\Users\avolc\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\LocalCache\local-packages\Python310\site-packages\gpt_engineer\core\default\steps.py", line 144, in gen_code
messages = ai.start(
File "C:\Users\avolc\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\LocalCache\local-packages\Python310\site-packages\gpt_engineer\core\ai.py", line 143, in start
return self.next(messages, step_name=step_name)
File "C:\Users\avolc\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\LocalCache\local-packages\Python310\site-packages\gpt_engineer\core\ai.py", line 243, in next
response = self.backoff_inference(messages)
File "C:\Users\avolc\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\LocalCache\local-packages\Python310\site-packages\backoff_sync.py", line 105, in retry
ret = target(*args, **kwargs)
File "C:\Users\avolc\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\LocalCache\local-packages\Python310\site-packages\gpt_engineer\core\ai.py", line 287, in backoff_inference
return self.llm.invoke(messages) # type: ignore
File "C:\Users\avolc\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\LocalCache\local-packages\Python310\site-packages\langchain_core\language_models\chat_models.py", line 248, in invoke
self.generate_prompt(
File "C:\Users\avolc\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\LocalCache\local-packages\Python310\site-packages\langchain_core\language_models\chat_models.py", line 681, in generate_prompt
return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs)
File "C:\Users\avolc\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\LocalCache\local-packages\Python310\site-packages\langchain_core\language_models\chat_models.py", line 538, in generate
raise e
File "C:\Users\avolc\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\LocalCache\local-packages\Python310\site-packages\langchain_core\language_models\chat_models.py", line 528, in generate
self._generate_with_cache(
File "C:\Users\avolc\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\LocalCache\local-packages\Python310\site-packages\langchain_core\language_models\chat_models.py", line 753, in _generate_with_cache
result = self._generate(
File "C:\Users\avolc\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\LocalCache\local-packages\Python310\site-packages\langchain_openai\chat_models\base.py", line 545, in _generate
return generate_from_stream(stream_iter)
File "C:\Users\avolc\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\LocalCache\local-packages\Python310\site-packages\langchain_core\language_models\chat_models.py", line 83, in generate_from_stream
for chunk in stream:
File "C:\Users\avolc\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\LocalCache\local-packages\Python310\site-packages\langchain_openai\chat_models\base.py", line 487, in _stream
with self.client.create(**payload) as response:
File "C:\Users\avolc\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\LocalCache\local-packages\Python310\site-packages\openai_utils_utils.py", line 277, in wrapper
return func(*args, **kwargs)
File "C:\Users\avolc\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\LocalCache\local-packages\Python310\site-packages\openai\resources\chat\completions.py", line 643, in create
return self._post(
File "C:\Users\avolc\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\LocalCache\local-packages\Python310\site-packages\openai_base_client.py", line 1261, in post
return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))
File "C:\Users\avolc\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\LocalCache\local-packages\Python310\site-packages\openai_base_client.py", line 942, in request
return self._request(
File "C:\Users\avolc\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\LocalCache\local-packages\Python310\site-packages\openai_base_client.py", line 1041, in _request
raise self._make_status_error_from_response(err.response) from None
openai.PermissionDeniedError: Error code: 403 - {'error': {'message': 'Project proj_V1gucEnpxxTjItbQ046K2ATJ does not have access to model gpt-4o', 'type': 'invalid_request_error', 'param': None, 'code': 'model_not_found'}}
Note that I tested the API key and it works fine with some OpenAI code samples, so the key itself is valid.
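For anyone hitting the same error, a quick way to check which models your project can actually reach is a minimal probe like the one below (a sketch assuming the openai>=1.0 Python SDK and OPENAI_API_KEY set in the environment; the two model names are just the ones discussed in this issue):

from openai import OpenAI, OpenAIError

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Probe each model; a 403/404 here mirrors the gpte failure above.
for model in ("gpt-4o", "gpt-3.5-turbo-16k"):
    try:
        client.models.retrieve(model)
        print(f"{model}: accessible")
    except OpenAIError as exc:
        print(f"{model}: {exc}")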
Until a fix is provided, you can work around this by adding the --model flag to the command, e.g.:
gpte .\HelloWorld\ --model gpt-3.5-turbo-16k
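If you want to keep driving the model from .env anyway, a thin wrapper script can forward the value through the supported flag. This is a hypothetical sketch (not part of gpt-engineer) that assumes python-dotenv is installed:

import os
import subprocess

from dotenv import load_dotenv

load_dotenv()  # pulls MODEL_NAME (and OPENAI_API_KEY) from .env
model = os.environ.get("MODEL_NAME", "gpt-3.5-turbo-16k")

# Forward the value through the supported --model flag.
subprocess.run(["gpte", r".\HelloWorld", "--model", model], check=True)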
Heya! Using the CLI parameter, as you've described, is the officially supported way to configure the model name.
I recall us discussing enabling model configuration using environment variables in the past, but I don't believe we have implemented that fully yet. @similato87 please correct me if I'm wrong about that.
No, we haven't; for now, the CLI parameter is the only option. Once I finish my PR, most of the settings will move into the configuration/environment file.
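Purely as an illustration of the precedence such a setting would need (explicit CLI flag over environment variable over default), and not the actual PR:

import os

DEFAULT_MODEL = "gpt-4o"

def resolve_model(cli_model: str | None) -> str:
    # Explicit --model wins, then MODEL_NAME from the environment, then the default.
    return cli_model or os.environ.get("MODEL_NAME", DEFAULT_MODEL)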
Thanks for the confirmation @similato87! Closing this issue as resolved.