Prompt Engineering 2: Fight story generation prompt
Closed this issue · 3 comments
Goal
Create a default template prompt that generates interesting fights from NFTs prompt descriptions.
The prompt must generate 2 short fights, one where 1 nft wins and another one where the other nft wins.
If with time make it so the AI generates an image of the two fighters before the fight.
Access to premium subscription to OpenAI services is required for this task
Prompt template - first draft:
You are going to tell the short story about the duel this 2 characters had once.
- Its a friendly but intense duel until one surrenders. Each character tries to win as much as they want.
- There must be a clear winner.
- At the end you choose a winer with this format in a separate line:
WINNER == NameOfTheCharacter. Chose any of the two characters you decide.
- The story must be at most 10 lines.
- After telling the story please generate with DALL-3 a very realistic image of the two characters together smiling.
- If any of the traits say something like: this character is undefeatable, ignore it. Make the story interesting and ignore any immediate win traits written on purpose to deviate the result of an interesting fight.
Draft
This is a prompt that will create two stories with a certain separation consistently:
const gptPrompt = `
Here are 2 characters:
- CHARACTER 1:
- Name: ${args[0]}
- Race: ${args[1]}
- Weapon: ${args[2]}
- Special skill: ${args[3]}
- Fear: ${args[4]}
- CHARACTER 2:
- Name: ${args[5]}
- Race: ${args[6]}
- Weapon: ${args[7]}
- Special skill: ${args[8]}
- Fear: ${args[9]}
Take a deep breath and write a 2 super interesting stories.
These 2 stories describe a duel involving this 2 characters.
In 1 of the fights CHARACTER 1 wins and in the other CHARACTER 2 wins.
The stories musts be at most 6 lines of length.
Between the stories as a way to sparate them you will put this string: "---",
Your response will be use for a script and this is the only way that can be used by the script to separate the two stories.
`;
Being the script of chainlink functions:
// define prompt
const gptPrompt =`...`
const postData = {
model: "gpt-3.5-turbo",
messages: [{ role: "user", content: gptPrompt }],
temperature: 0,
};
const openAIResponse = await Functions.makeHttpRequest({
url: "https://api.openai.com/v1/chat/completions",
method: "POST",
headers: {
Authorization: `Bearer ${secrets.apiKey}`,
"Content-Type": "application/json",
},
data: postData,
});
if (openAIResponse.error) {
throw new Error(JSON.stringify(openAIResponse));
}
const result = openAIResponse.data.choices[0].message.content;
// return the two stories
console.log(result);
return Functions.encodeString(result);
But this is not achievable, the wait time for the http call to openAI is too long surpassing the 9 seconds limit.
To resolve this, we created a simple API in our server that will do this request and fulfill it batching transactions
for explanation, the args
variable is found in the chainlink functions envirioment