BerriAI/litellm

[Feature]: Add GitHub Copilot as model provider


The Feature

Hello!

Please add GitHub Copilot as a model provider for the proxy.

It should be possible, similar to this adapter: https://github.com/olimorris/codecompanion.nvim/blob/5c5a5c759b8c925e81f8584a0279eefc8a6c6643/lua/codecompanion/adapters/copilot.lua

Idea taken from: cline/cline#660

Thank you!

Motivation, pitch

Having GitHub Copilot as an option would extend the range of model providers usable with LiteLLM.

Twitter / LinkedIn details

@Jasmin68k

A PR here is welcome - not sure I can see what their API spec is

I haven't looked into it much, just stumbled upon the idea in the Cline repo (issue linked in the OP).

Looking at the code, it seems the user supplies a GitHub token, which is exchanged for a Copilot token that is then used for the actual API requests, rather than a user-supplied API key being used directly.

Since the code reuses much of the OpenAI adapter, I would assume the API is (mostly) OpenAI compatible.

That's all I know for now.

I am also not familiar with the LiteLLM codebase at all, so can't quickly draw up a PR.

I might look into it, but can't make any promises.

Maybe/hopefully someone more familiar with LiteLLM is up to the task!?

Some more details here: Aider-AI/aider#2227 (comment).

A PR here is welcome - not sure I can see what their API spec is

The API is OpenAI compatible, with the endpoint https://api.githubcopilot.com/chat/completions. The /models endpoint works too.

To get a token, you first obtain the user's authorization token via GitHub's OAuth device flow, then use it to fetch the Copilot token. The good news is that the Copilot token seems not to expire.

    import time

    import requests

    # Kick off GitHub's OAuth device flow. The client_id appears to be the one
    # used by the copilot.vim plugin (matching the editor headers below).
    resp = requests.post('https://github.com/login/device/code', headers={
            'accept': 'application/json',
            'editor-version': 'Neovim/0.6.1',
            'editor-plugin-version': 'copilot.vim/1.16.0',
            'content-type': 'application/json',
            'user-agent': 'GithubCopilot/1.155.0',
            'accept-encoding': 'gzip,deflate,br'
        }, data='{"client_id":"Iv1.b507a08c87ecfe98","scope":"read:user"}')

    # Parse the response json, isolating the device_code, user_code, and verification_uri
    resp_json = resp.json()
    device_code = resp_json.get('device_code')
    user_code = resp_json.get('user_code')
    verification_uri = resp_json.get('verification_uri')

    # Print the user code and verification uri
    print(f'Please visit {verification_uri} and enter code {user_code} to authenticate.')

    while True:
        time.sleep(5)

        resp = requests.post('https://github.com/login/oauth/access_token', headers={
            'accept': 'application/json',
            'editor-version': 'Neovim/0.6.1',
            'editor-plugin-version': 'copilot.vim/1.16.0',
            'content-type': 'application/json',
            'user-agent': 'GithubCopilot/1.155.0',
            'accept-encoding': 'gzip,deflate,br'
            }, data=f'{{"client_id":"Iv1.b507a08c87ecfe98","device_code":"{device_code}","grant_type":"urn:ietf:params:oauth:grant-type:device_code"}}')

        # Parse the response json, isolating the access_token
        resp_json = resp.json()
        access_token = resp_json.get('access_token')

        if access_token:
            break
    print('Authentication success!')

    # Get a copilot token with the access token
    resp = requests.get('https://api.github.com/copilot_internal/v2/token', headers={
        'authorization': f'token {access_token}',
        'editor-version': 'Neovim/0.6.1',
        'editor-plugin-version': 'copilot.vim/1.16.0',
        'user-agent': 'GithubCopilot/1.155.0'
    })

    # Parse the response json, isolating the token
    resp_json = resp.json()
    token = resp_json.get('token')
    print('Token:', token)
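
For what it's worth, here is a minimal sketch of how that token might then be used against the OpenAI-compatible endpoints mentioned above. The model id and the exact header set are assumptions based on the plugin traffic, not a documented spec:

    # Reuse the Copilot token from the script above
    headers = {
        'authorization': f'Bearer {token}',
        'editor-version': 'Neovim/0.6.1',
        'editor-plugin-version': 'copilot.vim/1.16.0',
        'user-agent': 'GithubCopilot/1.155.0',
        'content-type': 'application/json',
    }

    # The /models endpoint lists the available model ids
    models = requests.get('https://api.githubcopilot.com/models', headers=headers)
    print(models.json())

    # OpenAI-style chat completion ('gpt-4' is a guess - pick an id from /models)
    resp = requests.post('https://api.githubcopilot.com/chat/completions',
                         headers=headers,
                         json={
                             'model': 'gpt-4',
                             'messages': [{'role': 'user', 'content': 'Hello!'}],
                         })
    print(resp.json()['choices'][0]['message']['content'])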

I'd love to give it a try. Is it currently possible to use 'GH-Copilot' as a provider in LiteLLM, considering its OpenAI compatibility?

If so, could someone guide me on how to configure it?

Alternatively, if additional implementation is required (such as obtaining the authorization token to set the header in the request), I would greatly appreciate any pointers on the best approach!
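
One thing I might try first: since LiteLLM can route to any OpenAI-compatible server through the generic openai/ provider with a custom api_base, maybe something like this already works. This is untested and assumes the endpoint accepts the token as a plain Bearer key; the extra editor headers might also be required:

    import litellm

    copilot_token = '...'  # token obtained via the device-flow script above

    response = litellm.completion(
        model='openai/gpt-4',  # generic OpenAI-compatible route
        api_base='https://api.githubcopilot.com',
        api_key=copilot_token,
        extra_headers={  # assumption: Copilot may require these
            'editor-version': 'Neovim/0.6.1',
            'editor-plugin-version': 'copilot.vim/1.16.0',
            'user-agent': 'GithubCopilot/1.155.0',
        },
        messages=[{'role': 'user', 'content': 'Hello!'}],
    )
    print(response.choices[0].message.content)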

I was considering implementing a custom LLM, like in this example: Custom LLM Example, but I'm unsure whether that's the best approach or how to integrate it.

Or should I try something directly in:

    elif custom_llm_provider == "custom":
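
For the first option, the registration part would look roughly like this, based on LiteLLM's custom provider docs; the gh-copilot name and the handler body are placeholders, not a working implementation:

    import litellm
    from litellm import CustomLLM

    class CopilotLLM(CustomLLM):
        def completion(self, *args, **kwargs) -> litellm.ModelResponse:
            # Placeholder: refresh the Copilot token here if needed, then
            # forward the request to https://api.githubcopilot.com/chat/completions
            # and map the response into a litellm.ModelResponse.
            raise NotImplementedError

    # Register the handler so that model="gh-copilot/<id>" routes to it
    litellm.custom_provider_map = [
        {'provider': 'gh-copilot', 'custom_handler': CopilotLLM()}
    ]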

@RodolfoCastanheira, have you already implemented this or do you have any hints on how to approach it?

gdw2 commented

Is this different from the GitHub support documented here?