0xStabby/chatgpt-vim

Accessing OpenAI API fails

Melclic opened this issue · 7 comments

I configured the OpenAI API and it works fine in bash, but not through the plugin:

GptFile returns:

```
Traceback (most recent call last):
  File "/usr/local/bin/openai", line 8, in <module>
    sys.exit(cli())
  File "/usr/local/lib/python3.10/site-packages/click/core.py", line 1130, in __call__
    return self.main(*args, **kwargs)
  File "/usr/local/lib/python3.10/site-packages/click/core.py", line 1055, in main
    rv = self.invoke(ctx)
  File "/usr/local/lib/python3.10/site-packages/click/core.py", line 1657, in invoke
    return _process_result(sub_ctx.command.invoke(sub_ctx))
  File "/usr/local/lib/python3.10/site-packages/click/core.py", line 1404, in invoke
    return ctx.invoke(self.callback, **ctx.params)
  File "/usr/local/lib/python3.10/site-packages/click/core.py", line 760, in invoke
    return __callback(*args, **kwargs)
  File "/usr/local/lib/python3.10/site-packages/openai_cli/cli.py", line 24, in complete
    result = client.generate_response(prompt, model)
  File "/usr/local/lib/python3.10/site-packages/openai_cli/client.py", line 36, in generate_response
    response.raise_for_status()
  File "/usr/local/lib/python3.10/site-packages/requests/models.py", line 1021, in raise_for_status
    raise HTTPError(http_error_msg, response=self)
requests.exceptions.HTTPError: 400 Client Error: Bad Request for url: https://api.openai.com/v1/completions
```

GptRun returns:

```
Run with: Make a doctring for DESeq2
Make: *** No rule to make target `a'.  Stop.
Traceback (most recent call last):
  File "/usr/local/bin/openai", line 8, in <module>
    sys.exit(cli())
  File "/usr/local/lib/python3.10/site-packages/click/core.py", line 1130, in __call__
    return self.main(*args, **kwargs)
  File "/usr/local/lib/python3.10/site-packages/click/core.py", line 1055, in main
    rv = self.invoke(ctx)
  File "/usr/local/lib/python3.10/site-packages/click/core.py", line 1657, in invoke
    return _process_result(sub_ctx.command.invoke(sub_ctx))
  File "/usr/local/lib/python3.10/site-packages/click/core.py", line 1404, in invoke
    return ctx.invoke(self.callback, **ctx.params)
  File "/usr/local/lib/python3.10/site-packages/click/core.py", line 760, in invoke
    return __callback(*args, **kwargs)
  File "/usr/local/lib/python3.10/site-packages/openai_cli/cli.py", line 24, in complete
    result = client.generate_response(prompt, model)
  File "/usr/local/lib/python3.10/site-packages/openai_cli/client.py", line 36, in generate_response
    response.raise_for_status()
  File "/usr/local/lib/python3.10/site-packages/requests/models.py", line 1021, in raise_for_status
    raise HTTPError(http_error_msg, response=self)
requests.exceptions.HTTPError: 400 Client Error: Bad Request for url: https://api.openai.com/v1/completions
```

@Melclic
Does this happen with all files? Also what prompt did you give?

Your token is in ~/.config/openai.token?

Hi @0xStabby,

Does this happen with all files?

It does happen with all files (Python, plain text files, etc.).

Also what prompt did you give?

I'm not sure what you mean by that. The commands I used that cause the problem are :GptRun and :GptFile.

Your token is in ~/.config/openai.token?

It is indeed. I can call the OpenAI API from bash, so I know it's working.

Hmm okay, so from my testing I get this error when the file is there but the token is not correctly set.

What openai-cli command works? Could you show an example?

My ~/.config/openai.token (obfuscated):

[screenshot of the token file]

The command that runs for gpf (:GptFile) is

```vim
let output = system("(echo '" . prompt . "'; cat " . currentFile . ";) | openai complete - -t $(cat ~/.config/openai.token)")
```

Basically `echo "the file" | openai complete - -t $(cat ~/.config/openai.token)`:
the file is echoed and piped into `openai complete`, with the contents of the openai.token file supplying the API token.
See if this command works, and double-check the output of `cat ~/.config/openai.token`.
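One way a 400 can show up even with a valid key is whitespace smuggled in by the token file itself. This is a minimal sketch of that failure mode, not the plugin's code; the file path and token value are made up for illustration:

```shell
#!/bin/sh
# Simulate a token file saved with a Windows-style CRLF line ending
# (hypothetical path and token value).
printf 'sk-EXAMPLE\r\n' > /tmp/openai.token

# Command substitution strips trailing newlines, but NOT the stray \r,
# so the token passed to `-t` would be corrupted.
token=$(cat /tmp/openai.token)
printf 'raw_length=%s\n' "${#token}"      # prints 11 ("sk-EXAMPLE" is 10 chars)

# Stripping all whitespace recovers the clean token.
clean=$(tr -d '[:space:]' < /tmp/openai.token)
printf 'clean_length=%s\n' "${#clean}"    # prints 10
```

If the raw and clean lengths differ on your machine, the token file contains invisible characters that would break the request.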

Just wondering whether you have an environment variable set for openai; that would explain why those commands work even though openai.token may be set incorrectly.

I suppose I should add a check for that env variable and use it instead when it exists.

Okay, I just added support for using the env var if it exists, falling back to openai.token otherwise.
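The fallback order described above could look roughly like this; this is a hypothetical sketch, not the plugin's actual code, and the env var name `OPENAI_API_KEY` is an assumption:

```shell
#!/bin/sh
# Hypothetical token resolution: prefer the environment variable,
# fall back to the token file (names are assumptions, not the plugin's code).
get_token() {
    if [ -n "${OPENAI_API_KEY:-}" ]; then
        printf '%s' "$OPENAI_API_KEY"       # env var wins when set
    else
        cat "$HOME/.config/openai.token"    # otherwise read the file
    fi
}

OPENAI_API_KEY='sk-from-env'
printf 'token=%s\n' "$(get_token)"          # prints token=sk-from-env
```

With this order, users who already export a key in their shell get the same credentials inside vim without maintaining a separate file.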

Perhaps this update will just solve it for you without any extra fiddling.

4e289a3

Yup that did it! Thanks a lot

Awesome! Great to hear!