1. Importing Dependencies:

const { Configuration, OpenAIApi } = require("openai");
const readlineSync = require("readline-sync");
require("dotenv").config();

The Configuration and OpenAIApi classes are imported from the openai package, which is used to connect to the OpenAI API. The readline-sync package is used to prompt the user for input, and the dotenv package loads environment variables from a .env file.
2. Asynchronous Function:

(async () => {
  // Code here
})();

The entire program is wrapped inside an immediately invoked asynchronous function (an async IIFE), which allows await to be used inside the function body.
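For readers unfamiliar with the pattern, here is a minimal, self-contained sketch of an async IIFE; the sleep helper is purely illustrative and is not part of the original script:

// Minimal async IIFE sketch; the sleep helper is illustrative only.
const sleep = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

(async () => {
  console.log("Start");
  await sleep(500); // await is allowed here because the wrapper function is async
  console.log("Done after 500 ms");
})();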
3. Setting up Configuration and API:

const configuration = new Configuration({
  apiKey: process.env.OPENAI_API_KEY,
});
const openai = new OpenAIApi(configuration);

A new Configuration instance is created with the API key read from the OPENAI_API_KEY environment variable. An OpenAIApi instance is then created using that Configuration.
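Because a missing key only surfaces as an authentication error at request time, it can help to fail fast at startup. The following is an optional sketch, not part of the original script; it assumes the key is stored in a .env file under the OPENAI_API_KEY name:

// Optional guard: exit early if the API key was not loaded from .env.
require("dotenv").config();

if (!process.env.OPENAI_API_KEY) {
  console.error("OPENAI_API_KEY is not set. Add it to your .env file, for example:");
  console.error("OPENAI_API_KEY=sk-...");
  process.exit(1);
}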
4. Chat History:

const history = [];

An empty array is created to store the chat history, which is used to provide context to the AI model when generating later responses.
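As a concrete illustration (the texts below are made up), after two exchanges the history array holds one [input, completion] pair per turn:

// Hypothetical contents of history after two turns:
const history = [
  ["Hello, who are you?", "I am an AI assistant. How can I help you today?"],
  ["What can you do?", "I can answer questions, summarize text, and more."],
];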
5. Chat Loop:

while (true) {
  // Code here
}

A while loop continuously prompts the user for input and generates responses until the user chooses to end the conversation.
6. User Input:

const user_input = readlineSync.question("Your input: ");

readlineSync.question prompts the user for input, and the result is stored in the user_input variable.
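One optional refinement, not in the original script, is to skip empty submissions so a stray Enter key does not trigger an API call. A minimal sketch:

// Optional: re-prompt when the user submits only whitespace.
const readlineSync = require("readline-sync");

let user_input = readlineSync.question("Your input: ");
while (user_input.trim() === "") {
  user_input = readlineSync.question("Please type something: ");
}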
7. Chat History for Context:

const messages = [];
for (const [input_text, completion_text] of history) {
  messages.push({ role: "user", content: input_text });
  messages.push({ role: "assistant", content: completion_text });
}
messages.push({ role: "user", content: user_input });

A for...of loop iterates through the chat history and builds a list of messages in the format expected by the OpenAI Chat Completions API. Each message has a role (either "user" or "assistant") and content (the text of the message). The current user_input is appended to the end of the messages array.
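Continuing the hypothetical history from step 4, and with "Tell me a joke" as the current input, the resulting messages array would look like this:

// Hypothetical messages array built from the example history above:
const messages = [
  { role: "user", content: "Hello, who are you?" },
  { role: "assistant", content: "I am an AI assistant. How can I help you today?" },
  { role: "user", content: "What can you do?" },
  { role: "assistant", content: "I can answer questions, summarize text, and more." },
  { role: "user", content: "Tell me a joke" },
];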
8. AI Model:

const completion = await openai.createChatCompletion({
  model: "gpt-3.5-turbo",
  messages: messages,
});

The openai.createChatCompletion method sends the request to the OpenAI API and returns the generated response. The model parameter specifies which model to use ("gpt-3.5-turbo" here), and the messages parameter supplies the conversation context the model uses to generate its reply.
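The request also accepts optional tuning parameters. The sketch below is an assumption about useful extras rather than part of the original code: it adds a system message to steer the assistant's tone, plus the standard temperature and max_tokens parameters. It is meant to replace the call above inside the async function, with openai and messages defined as in steps 3 and 7:

// Sketch: the same call with a system message and common optional parameters.
const completion = await openai.createChatCompletion({
  model: "gpt-3.5-turbo",
  messages: [
    { role: "system", content: "You are a concise, friendly assistant." },
    ...messages,
  ],
  temperature: 0.7, // higher values produce more varied replies
  max_tokens: 256,  // cap the length of the reply
});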
9. Response from AI Model:

const completion_text = completion.data.choices[0].message.content;
console.log(completion_text);

The reply generated by the AI model is extracted from the completion object, stored in the completion_text variable, and printed to the console.
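To make that extraction path clearer, the response body (exposed on completion.data in this version of the library) roughly follows the Chat Completions shape sketched below; the field values are illustrative only:

// Rough shape of completion.data (values are illustrative):
// {
//   id: "chatcmpl-...",
//   object: "chat.completion",
//   created: 1700000000,
//   model: "gpt-3.5-turbo",
//   choices: [
//     {
//       index: 0,
//       message: { role: "assistant", content: "Here is the reply text." },
//       finish_reason: "stop",
//     },
//   ],
//   usage: { prompt_tokens: 42, completion_tokens: 12, total_tokens: 54 },
// }
const completion_text = completion.data.choices[0].message.content;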
10. Chat History Update:

history.push([user_input, completion_text]);

The current user_input and the model's reply (completion_text) are appended to the end of the history array as a pair.
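Because every stored exchange is resent with the next request, a long conversation will eventually grow the prompt toward the model's context limit and increase token usage. An optional refinement, not in the original script, is to keep only the most recent exchanges; MAX_TURNS below is an illustrative name and value:

// Optional: keep only the last N exchanges to limit prompt size.
const MAX_TURNS = 10; // illustrative limit, tune to your needs
history.push([user_input, completion_text]);
if (history.length > MAX_TURNS) {
  history.splice(0, history.length - MAX_TURNS);
}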
11. Should the Conversation Be Continued?
const user_input_again = readlineSync.question(
  "\nWould you like to continue the conversation? (Y/N)"
);
if (user_input_again.toUpperCase() === "N") {
  return;
} else if (user_input_again.toUpperCase() !== "Y") {
  console.log("Invalid input. Please enter 'Y' or 'N'.");
  return;
}

The user is asked whether to continue or end the conversation, and readlineSync.question reads the answer. If the user enters "N", the return statement exits the enclosing async function, which ends the loop and the program. If the user enters anything other than "Y" or "N", an error message is printed and the program ends as well. Only "Y" lets the loop run another iteration.
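Ending the program on a typo is a little unforgiving. As an optional alternative to the snippet above (not part of the original code), the prompt could loop until it receives a valid answer; this fragment is meant to sit in the same place inside the async function:

// Optional: re-ask until the user types Y or N instead of exiting on bad input.
const readlineSync = require("readline-sync");

let answer;
do {
  answer = readlineSync
    .question("\nWould you like to continue the conversation? (Y/N) ")
    .toUpperCase();
  if (answer !== "Y" && answer !== "N") {
    console.log("Invalid input. Please enter 'Y' or 'N'.");
  }
} while (answer !== "Y" && answer !== "N");
if (answer === "N") {
  return; // exits the enclosing async function, ending the program
}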
12. Error Handling:

catch (error) {
  if (error.response) {
    console.log(error.response.status);
    console.log(error.response.data);
  } else {
    console.log(error.message);
  }
}

If an error occurs during the API request, the catch block is executed. If the error has a response property (meaning the API returned an error response), its HTTP status code and response data are printed to the console; otherwise, the error message itself is printed.
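Since the walkthrough shows the catch block on its own, here is a condensed sketch of how the pieces fit together. It assumes the try/catch wraps the body of each loop iteration, which matches the behavior described above, and uses only the code already introduced in the previous steps:

// Condensed sketch of the overall structure (try/catch placement is assumed).
const { Configuration, OpenAIApi } = require("openai");
const readlineSync = require("readline-sync");
require("dotenv").config();

(async () => {
  const configuration = new Configuration({ apiKey: process.env.OPENAI_API_KEY });
  const openai = new OpenAIApi(configuration);
  const history = [];

  while (true) {
    const user_input = readlineSync.question("Your input: ");
    try {
      // Rebuild the message list from the stored history plus the new input.
      const messages = [];
      for (const [input_text, completion_text] of history) {
        messages.push({ role: "user", content: input_text });
        messages.push({ role: "assistant", content: completion_text });
      }
      messages.push({ role: "user", content: user_input });

      const completion = await openai.createChatCompletion({
        model: "gpt-3.5-turbo",
        messages: messages,
      });

      const completion_text = completion.data.choices[0].message.content;
      console.log(completion_text);
      history.push([user_input, completion_text]);

      const user_input_again = readlineSync.question(
        "\nWould you like to continue the conversation? (Y/N)"
      );
      if (user_input_again.toUpperCase() === "N") {
        return;
      } else if (user_input_again.toUpperCase() !== "Y") {
        console.log("Invalid input. Please enter 'Y' or 'N'.");
        return;
      }
    } catch (error) {
      if (error.response) {
        console.log(error.response.status);
        console.log(error.response.data);
      } else {
        console.log(error.message);
      }
    }
  }
})();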