santiagobasulto/ipython-gpt

Improvements planned/needed

santiagobasulto opened this issue · 7 comments

Code design and architecture

  • Authentication: I can probably subclass requests.Session while keeping the OpenAI API key in some context, to avoid repeating it
  • Subcommand architecture: some methods in the IPythonGPT class are subcommands, while others are utility methods, but not all subcommands behave consistently.
  • Parameter parsing: Related to subcommands; there's currently a single global argument parser shared by all subcommands. Each command should define its own parser while extending the base parameters. I'm not sure argparse is the right tool for this.
  • Displaying and formatting: I need to improve how results are displayed across different interfaces (notebook, shell, etc.)
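The authentication bullet above can be sketched as a small requests.Session subclass that attaches the API key to every request, so individual calls don't repeat the header. This is a hypothetical sketch, not code from the repository; the class name and usage are illustrative.

```python
import requests


class OpenAISession(requests.Session):
    """Hypothetical session that carries the OpenAI API key on every request."""

    def __init__(self, api_key):
        super().__init__()
        # Headers set here apply to all requests made through this session.
        self.headers.update({
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        })


# Usage (illustrative):
# session = OpenAISession(api_key="sk-...")
# resp = session.post("https://api.openai.com/v1/chat/completions", json=payload)
```

With this, the key lives in one place and subcommands can share the session object instead of passing the key around.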

Code quality

  • Testing: Some testing wouldn't be bad 🤡
  • Create its own package: Move ipython_gpt.py from module to a package
  • Code formatting: Should incorporate black
  • CI/CD Pipeline: A minimum pipeline to check formatting and test coverage

Feature improvements

  • Streaming: Test support for stream=True.
  • Debug mode: some way to see/debug what the client is doing behind the scenes.
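The streaming bullet above could look roughly like this with requests: with stream=True the response body arrives as server-sent-event lines that can be parsed incrementally. This is a sketch under assumed details (URL, payload shape, and function names are illustrative).

```python
import json


def parse_sse_line(raw):
    """Parse one server-sent-event line; return the JSON chunk, or None."""
    line = raw.decode("utf-8") if isinstance(raw, bytes) else raw
    if not line.startswith("data: "):
        return None  # keep-alives and blank lines carry no payload
    data = line[len("data: "):]
    if data.strip() == "[DONE]":
        return None  # OpenAI's end-of-stream sentinel
    return json.loads(data)


def stream_chat(session, url, payload):
    """Yield parsed chunks from a streaming chat completion (sketch)."""
    with session.post(url, json={**payload, "stream": True}, stream=True) as resp:
        for raw in resp.iter_lines():
            chunk = parse_sse_line(raw)
            if chunk is not None:
                yield chunk
```

A debug mode could hook in here as well, logging each raw line before it is parsed.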

Can we help? I'm not too bad at packaging.

Note that "ipygpt" is available on both conda and PyPI.

Yes, definitely! I was going to start with the proper breakup of the different commands and some testing, but any of the points above are up for grabs if you have some time.

Do you think ipygpt is a better name than ipython-gpt?

It's easier to type, and lots of interactive libraries have ipy in their name:
ipyleaflet, ipyvuetify, ipywidgets ...

It also avoids the mismatch where pip forces "-" as the separator in package names (sepal-ui and sepal_ui are the same package to pip) while Python forces "_" when importing (you can import sepal_ui but can't import sepal-ui).

I'll send a few packaging PRs in the upcoming days, then. I'm thinking about pre-commit, nox, and a simple Sphinx-based documentation site. What do you think?
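For the nox plus pre-commit combination mentioned above, a minimal noxfile.py could look like the sketch below. Session names and the exact tools installed are assumptions for illustration, not decisions made in this thread.

```python
import nox


@nox.session
def lint(session):
    # Run all configured pre-commit hooks (formatting, etc.).
    session.install("pre-commit")
    session.run("pre-commit", "run", "--all-files")


@nox.session
def tests(session):
    # Install the package itself plus pytest, then run the test suite.
    session.install("pytest", ".")
    session.run("pytest")
```

Run with `nox -s lint` or `nox -s tests`; a CI pipeline can then just invoke nox.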

👍 Perfect. I'll mention you in this issue once I push the new subcommands so you're aware of it.

How would you recommend capturing output into a string?
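One common stdlib approach to the question above is redirecting stdout into a StringIO buffer with contextlib.redirect_stdout. This is a general sketch, not code from ipython-gpt.

```python
import io
from contextlib import redirect_stdout

# Everything printed inside the with-block lands in the buffer
# instead of the terminal.
buffer = io.StringIO()
with redirect_stdout(buffer):
    print("hello from the captured block")

captured = buffer.getvalue()
```

In IPython specifically, the built-in `%%capture` cell magic does the same thing for a whole cell.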

Added streaming support: #12

Major changes are:

  1. I noticed that the code base uses http.client, which does not have event-stream support the way the requests library does (see https://github.com/openai/openai-cookbook/blob/main/examples/How_to_stream_completions.ipynb). I think it would be much easier to just use requests than to implement wrapping logic around http.client to handle this.

  2. To stream, we need layers of generators that can process the streaming response in a Pythonic way; I have made the execute method of ChatCommand and the HTTP client library do that.

  3. Added some unit tests to cover streaming as well.
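The "layers of generators" idea in point 2 can be sketched as follows: a low-level generator yields raw streamed chunks, and a higher-level one extracts just the text deltas. The chunk shape follows OpenAI's documented streaming format; the function name and the fake stream are illustrative, not the actual PR code.

```python
def iter_deltas(chunks):
    """Map raw streamed chunks to their text deltas, skipping empty ones."""
    for chunk in chunks:
        for choice in chunk.get("choices", []):
            delta = choice.get("delta", {}).get("content")
            if delta:
                yield delta


# Usage: joining the deltas reconstructs the full reply incrementally.
fake_stream = [
    {"choices": [{"delta": {"content": "Hel"}}]},
    {"choices": [{"delta": {"content": "lo"}}]},
    {"choices": [{"delta": {}}]},  # final chunk carries no content
]
reply = "".join(iter_deltas(fake_stream))
```

Because each layer is a generator, the display code can print tokens as they arrive instead of waiting for the whole response.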