langchain-ai/langchainjs

baseURL parameter of ChatOpenAI not working for Nvidia Nemotron

DevDeepakBhattarai opened this issue · 5 comments

Checked other resources

  • I added a very descriptive title to this issue.
  • I searched the LangChain.js documentation with the integrated search.
  • I used the GitHub search to find a similar question and didn't find it.
  • I am sure that this is a bug in LangChain.js rather than my code.
  • The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).

Example Code

This is my code to select the model

async (modelId: string, userId: string, temperature = 0.7) => { // modelId example: nvidia/llama-3.1-nemotron-70b-instruct
    try {
      let apiKey: string;
      const data = await redis.get(`user:${userId}:nvidia`);
      if (data) {
        const decryptedData = decryptObject<DefaultApiData>(
          data as Record<string, unknown>,
        );
        apiKey = decryptedData.apiKey;
      } else {
        apiKey = env.NVIDIA_API_KEY;
      }
      console.log(apiKey);
      return new ChatOpenAI({
        apiKey: apiKey,
        configuration: {
          baseURL: "https://integrate.api.nvidia.com/v1",
        },
        temperature,
        model: modelId,
        maxTokens: 2048,
      });
    } catch (error) {
      throw new Error("Failed to initialize Nvidia model");
    }
  },

P.S.: All the API keys are correct; that is not the issue. I tested the same key in Thunder Client and I get a response back.
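
A standalone check equivalent to the Thunder Client test might look like the sketch below; the request shape is assumed from the endpoint being OpenAI-compatible, and NVIDIA_API_KEY matches the env var used in the code above:

// Minimal direct request to the same endpoint to confirm the key works from Node.
const res = await fetch("https://integrate.api.nvidia.com/v1/chat/completions", {
  method: "POST",
  headers: {
    "Content-Type": "application/json",
    Authorization: `Bearer ${process.env.NVIDIA_API_KEY}`,
  },
  body: JSON.stringify({
    model: "nvidia/llama-3.1-nemotron-70b-instruct",
    messages: [{ role: "user", content: "ping" }],
    max_tokens: 16,
  }),
});
console.log(res.status, await res.text());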

Error Message and Stack Trace (if applicable)

401 status code (no body)

Error: 401 status code (no body)
at APIError.generate (webpack-internal:///(rsc)/./node_modules/.pnpm/openai@4.73.1_encoding@0.1.13_zod@3.23.8/node_modules/openai/error.mjs:63:20)
at OpenAI.makeStatusError (webpack-internal:///(rsc)/./node_modules/.pnpm/openai@4.73.1_encoding@0.1.13_zod@3.23.8/node_modules/openai/core.mjs:323:65)
at OpenAI.makeRequest (webpack-internal:///(rsc)/./node_modules/.pnpm/openai@4.73.1_encoding@0.1.13_zod@3.23.8/node_modules/openai/core.mjs:367:30)
at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
at async eval (webpack-internal:///(rsc)/./node_modules/.pnpm/@langchain+openai@0.1.3_encoding@0.1.13/node_modules/@langchain/openai/dist/chat_models.js:814:29)
at async RetryOperation.eval [as _fn] (webpack-internal:///(rsc)/./node_modules/.pnpm/p-retry@4.6.2/node_modules/p-retry/index.js:50:12)

Description

I am building an AI app where users can choose different AI models for certain tasks. I want to use this Nemotron model, but it seems it is not implemented in LangChain, and the baseURL approach also does not work.

System Info

pnpm 9.6
Windows 11
Node.js 20.x

The issue you're encountering with the baseURL parameter not working for Nvidia Nemotron in the ChatOpenAI class might be due to compatibility or configuration issues specific to the Nvidia API. Here are a few things to consider:

  1. API Compatibility: The ChatOpenAI class in LangChain.js is designed to work with OpenAI's API. If Nvidia Nemotron's API has different authentication mechanisms or endpoint requirements, the ChatOpenAI class might not be directly compatible.

  2. Authentication: Ensure that the authentication method required by Nvidia Nemotron is correctly implemented. The 401 status code indicates an authentication issue. Double-check if Nvidia requires additional headers or a different format for the API key.

  3. Custom Implementation: If the Nvidia API has specific requirements that are not supported by the ChatOpenAI class, you might need to implement a custom client to handle requests to Nvidia's API.

  4. Debugging: Use a tool like Postman or Thunder Client to manually test the API with the same parameters and headers to ensure that the issue is not with the API itself.

  5. Error Handling: Since the error message indicates a 401 status code with no body, it might be helpful to log the full request details (excluding sensitive information) to verify that all required parameters and headers are being sent correctly (see the request-logging sketch after this list).

If these steps do not resolve the issue, you may need to look into creating a custom integration for Nvidia Nemotron that aligns with its specific API requirements.
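
One concrete way to do the logging from point 5: ChatOpenAI appears to forward its configuration object to the underlying OpenAI client, which accepts a custom fetch implementation. A minimal sketch, assuming configuration.fetch is passed through, that prints each outgoing request with the key redacted:

import { ChatOpenAI } from "@langchain/openai";

// Wrap the global fetch so every request the client makes is printed
// (with the Authorization header redacted) before being sent.
const loggingFetch = async (
  url: RequestInfo,
  init?: RequestInit,
): Promise<Response> => {
  const headers = { ...(init?.headers as Record<string, string>) };
  if (headers.Authorization) headers.Authorization = "Bearer <redacted>";
  console.log("outgoing request:", url, headers, init?.body);
  return fetch(url, init);
};

const model = new ChatOpenAI({
  apiKey: process.env.NVIDIA_API_KEY,
  model: "nvidia/llama-3.1-nemotron-70b-instruct",
  configuration: {
    baseURL: "https://integrate.api.nvidia.com/v1",
    fetch: loggingFetch, // assumed to be passed through to the OpenAI SDK
  },
});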


@dosu I am trying to use an API endpoint that is OpenAI-compatible.

import OpenAI from 'openai';

const openai = new OpenAI({
  apiKey: '$API_KEY_REQUIRED_IF_EXECUTING_OUTSIDE_NGC',
  baseURL: 'https://integrate.api.nvidia.com/v1',
})

async function main() {
  const completion = await openai.chat.completions.create({
    model: "nvidia/llama-3.1-nemotron-70b-instruct",
    messages: [{"role":"user","content":"Write a limerick about the wonders of GPU computing."}],
    temperature: 0.5,
    top_p: 1,
    max_tokens: 1024,
    stream: true,
  })
   
  for await (const chunk of completion) {
    process.stdout.write(chunk.choices[0]?.delta?.content || '')
  }
  
}

main();

Here is the doc on how to use the Nemotron model, taken directly from the Nvidia website.

Now my question is: how can I change the baseURL that ChatOpenAI hits and send the request body to that URL?
Is it possible?

Yes, you can change the baseURL in the ChatOpenAI class to send requests to a different endpoint, such as Nvidia's API, while maintaining compatibility with the OpenAI API. You can do this by passing a configuration parameter with the desired baseURL. Here's how you can configure it:

import { ChatOpenAI } from "@langchain/openai";

const llmWithNvidiaAPI = new ChatOpenAI({
  apiKey: '$API_KEY_REQUIRED_IF_EXECUTING_OUTSIDE_NGC',
  temperature: 0.5,
  configuration: {
    baseURL: "https://integrate.api.nvidia.com/v1", // Nvidia API endpoint
  },
  model: "nvidia/llama-3.1-nemotron-70b-instruct",
  maxTokens: 1024,
  streaming: true, // ChatOpenAI uses "streaming", not the SDK's "stream" flag
});

await llmWithNvidiaAPI.invoke("Write a limerick about the wonders of GPU computing.");

This setup allows you to specify a custom base URL, directing requests to the Nvidia API endpoint. Ensure that the API key and other parameters are correctly configured to match Nvidia's requirements [1].
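
As a follow-up, if you want the token-by-token output of the original SDK example, the LangChain equivalent is the model's .stream() method (the streaming constructor flag mainly affects how invoke fetches the response internally). A minimal sketch reusing llmWithNvidiaAPI from above:

// Stream the completion chunk by chunk, mirroring the raw openai SDK loop.
const stream = await llmWithNvidiaAPI.stream(
  "Write a limerick about the wonders of GPU computing.",
);
for await (const chunk of stream) {
  // chunk.content is a MessageContent; for plain text models it is a string
  process.stdout.write(chunk.content as string);
}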


@dosu This is exactly what I am doing, but it does not work.
I get an error:

401 status code (no body)

Error: 401 status code (no body)
at APIError.generate (webpack-internal:///(rsc)/./node_modules/.pnpm/openai@4.73.1_encoding@0.1.13_zod@3.23.8/node_modules/openai/error.mjs:63:20)
at OpenAI.makeStatusError (webpack-internal:///(rsc)/./node_modules/.pnpm/openai@4.73.1_encoding@0.1.13_zod@3.23.8/node_modules/openai/core.mjs:323:65)
at OpenAI.makeRequest (webpack-internal:///(rsc)/./node_modules/.pnpm/openai@4.73.1_encoding@0.1.13_zod@3.23.8/node_modules/openai/core.mjs:367:30)
at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
at async eval (webpack-internal:///(rsc)/./node_modules/.pnpm/@langchain+openai@0.1.3_encoding@0.1.13/node_modules/@langchain/openai/dist/chat_models.js:814:29)
at async RetryOperation.eval [as _fn] (webpack-internal:///(rsc)/./node_modules/.pnpm/p-retry@4.6.2/node_modules/p-retry/index.js:50:12)

To resolve the 401 status code error when using the Nvidia API with the ChatOpenAI class, ensure the following:

  1. API Key Verification: Double-check that your API key is correct. If you're using an environment variable, ensure it's correctly set and accessible (see the key-hygiene sketch after this list).

  2. Explicit API Key Passing: Try explicitly passing the apiKey parameter to rule out issues with environment variables:

    const model = new ChatOpenAI({
      apiKey: "YOUR_KEY_HERE",
      configuration: {
        baseURL: "https://integrate.api.nvidia.com/v1",
      },
      model: "nvidia/llama-3.1-nemotron-70b-instruct",
      temperature: 0.5,
      maxTokens: 1024,
      streaming: true,
    });
  3. Custom Headers: If Nvidia's API requires specific headers, ensure they are included. You can set custom headers using the configuration field:

    const model = new ChatOpenAI({
      apiKey: "YOUR_KEY_HERE",
      configuration: {
        baseURL: "https://integrate.api.nvidia.com/v1",
        defaultHeaders: {
          "Authorization": `Bearer YOUR_KEY_HERE`,
          // Add any other required headers here
        },
      },
      model: "nvidia/llama-3.1-nemotron-70b-instruct",
      temperature: 0.5,
      maxTokens: 1024,
      streaming: true,
    });
  4. Endpoint Compatibility: Ensure that the Nvidia API endpoint is fully compatible with the OpenAI API structure expected by the ChatOpenAI class.

These steps should help address the authentication error [1][2].
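
On point 1 specifically: a 401 with no body from a key that works in Thunder Client is often a key-hygiene problem rather than a wrong key, e.g. a trailing newline introduced by decryption or Redis serialization. A quick hypothetical check against the apiKey variable from the original code:

// JSON.stringify makes hidden characters like "\n" or surrounding quotes
// visible; compare lengths before and after trimming.
console.log(JSON.stringify(apiKey), apiKey.length, apiKey.trim().length);

// If the lengths differ, pass the trimmed key instead (hypothetical fix):
const model = new ChatOpenAI({
  apiKey: apiKey.trim(),
  configuration: { baseURL: "https://integrate.api.nvidia.com/v1" },
  model: "nvidia/llama-3.1-nemotron-70b-instruct",
});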
