This is a simple Next.js boilerplate project that demonstrates how to integrate the OpenAI API with a Next.js application. The boilerplate includes a basic user interface for submitting prompts and displaying the response from the OpenAI API.
- Next.js setup with a minimalistic user interface
- Customizable prompts separated in a different folder
- Utilizes OpenAI's GPT-3.5 Turbo model for chat completion
- Built-in error handling and loading state management
- Responsive design that works on both desktop and mobile devices
Before you start, you'll need to have the following installed on your machine:

- Node.js (which includes npm) or Yarn
1. Clone this repository:

   ```bash
   git clone https://github.com/your-username/nextjs-openai-boilerplate.git
   ```

2. Navigate to the project directory:

   ```bash
   cd nextjs-openai-boilerplate
   ```

3. Install dependencies:

   ```bash
   npm install
   # or
   yarn install
   ```

4. Create a `.env.local` file in the project root directory and add your OpenAI API key:

   ```
   OPENAI_API_KEY=your_openai_api_key
   ```

   Replace `your_openai_api_key` with your actual OpenAI API key. You can find your API key in your OpenAI Dashboard.

5. Start the development server:

   ```bash
   npm run dev
   # or
   yarn dev
   ```

6. Open your browser and navigate to http://localhost:3000. You should now see the boilerplate application running.
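On the server side, the key you put in `.env.local` is read from `process.env`. A minimal sketch of guarding against a missing key is shown below; the helper name `getApiKey` is an illustration, not a function from the boilerplate:

```javascript
// Illustrative helper: read the OpenAI API key from the environment and
// fail fast with a clear message if it was never configured.
// (The name `getApiKey` is an assumption, not part of the boilerplate.)
function getApiKey() {
  const key = process.env.OPENAI_API_KEY;
  if (!key) {
    throw new Error("Missing OPENAI_API_KEY — add it to .env.local");
  }
  return key;
}
```

Failing early like this makes a misconfigured deployment obvious, instead of surfacing as an opaque authentication error from the OpenAI API.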
The `openai.createChatCompletion()` function is part of the OpenAI API, which lets developers generate text using machine learning models. Specifically, `createChatCompletion()` generates completions for chat-based models, such as `gpt-3.5-turbo`, which is designed to produce responses in a conversational format.
```javascript
const completion = await openai.createChatCompletion({
  model: "gpt-3.5-turbo",
  messages: messages,
  temperature: 0,
  max_tokens: 510,
  top_p: 0,
});
```
- `model` (string, required): Specifies the name of the model to use for generating completions. In this example, `"gpt-3.5-turbo"` is used, which is a chat-based language model.
- `messages` (array of objects, required): Specifies an array of message objects, where each object has a `role` (`"system"`, `"user"`, or `"assistant"`) and `content` (the actual text of the message). Messages are processed in the order they appear in the array, and the assistant responds accordingly.
- `temperature` (number, optional): Controls the randomness of the generated completions. A higher value (e.g., 0.8) makes the output more random, while a lower value (e.g., 0.2) makes it more deterministic.
- `max_tokens` (number, optional): Specifies the maximum number of tokens (words or word pieces) in the generated completion. Use this to limit the length of the output.
- `top_p` (number, optional): Controls the diversity of the generated completions by limiting the choices to a subset of the most likely tokens. A lower value (e.g., 0.2) makes the output more focused, while a higher value (e.g., 0.8) makes it more diverse.
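As a sketch of how the `messages` argument above might be assembled (the `buildMessages` helper is illustrative, not a function from the boilerplate):

```javascript
// Illustrative sketch: build the `messages` array in the shape the Chat
// Completion API expects — a system message that sets behavior, followed
// by the user's input. (`buildMessages` is a hypothetical helper name.)
function buildMessages(systemPrompt, userInput) {
  return [
    { role: "system", content: systemPrompt },
    { role: "user", content: userInput },
  ];
}

const messages = buildMessages(
  "You are a helpful assistant.",
  "Summarize Next.js in one sentence."
);
```

The array is ordered: the system message comes first so it frames how the model treats the user message that follows.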
For more details, please see the OpenAI documentation.
To customize the prompts or add new message types, update the `defaultPrompts` array in the `prompts/defaultPrompts.js` file, or create new files for different sets of prompts in the `prompts` folder.
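A hypothetical shape for `prompts/defaultPrompts.js` is sketched below; the actual file in the boilerplate may use different field names, but a plausible structure mirrors the Chat Completion `messages` format:

```javascript
// Hypothetical sketch of prompts/defaultPrompts.js — field names here are
// assumptions that mirror the Chat Completion `messages` format; the real
// file in the boilerplate may differ.
const defaultPrompts = [
  { role: "system", content: "You are a helpful assistant." },
  { role: "user", content: "Explain what this boilerplate does." },
];

module.exports = { defaultPrompts };
```

Keeping prompts in their own module like this lets you swap in a different set (e.g. `prompts/summarizerPrompts.js`) without touching the API route code.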
To deploy your application, you can use Vercel, the platform built by the creators of Next.js. Follow the official Next.js deployment documentation for a detailed guide on deploying your Next.js application to Vercel or other hosting providers.
Contributions are welcome! If you'd like to contribute, feel free to submit a pull request or create an issue on the repository.
This project is licensed under the MIT License. See the LICENSE file for more details.