Zain-ul-din/whatsapp-ai-bot

[Enhancement] - Using different prompts to signify which model to choose is strange.

IIvexII opened this issue · 3 comments

You could use a single prompt and let a model decide which model best suits the input text.

Input

!bot Generate an image of a black cat with light green eyes

Note: the bot should choose DALL-E automatically and generate an image.

Output

[generated image of a black cat with light green eyes]

Input

!bot what is HTTP?

Note: the bot should choose ChatGPT automatically.

Output

HTTP stands for Hypertext Transfer Protocol, which is a protocol used to transfer data over the internet. It is a standard application layer protocol that defines how data is transmitted between web servers and web browsers.

You could create a custom model to achieve this functionality.

Creating a Custom Model

To create a custom model, add a new entry under models.Custom. Each entry has the following fields:

  • modelName: a string that represents the name of your custom model
  • prefix: a string that represents the prefix that messages should have to get a reply from your custom model
  • enable: a boolean that indicates whether your custom model should be enabled or disabled
  • context: a string that represents the context of your custom model. This can be one of the following options:
    • "your_context": a string that represents the context directly
    • "path to file (.md,.txt)": a string that represents the path to a file containing the context
    • "url": a string that represents the URL of a website containing the context
```js
{
    modelName: "your_model_name",
    prefix: "!your_prefix",
    enable: true,
    context: "your_context" | "path to file (.md,.txt)" | "url",
}
```
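
For orientation, here is a minimal sketch of how such an entry might sit under models.Custom; the surrounding config shape, file name, and export are assumptions based on the description above, not the repo's actual code:

```js
// Hypothetical sketch of where the entry above is registered; the
// surrounding config structure is an assumption, not verified code.
const config = {
  models: {
    Custom: [
      {
        modelName: "your_model_name",
        prefix: "!your_prefix",
        enable: true,
        context: "./static/context.md", // or an inline string, or a URL
      },
    ],
  },
};

module.exports = config;
```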

Test your model

  • run the server with yarn dev
  • type a message starting with !your_prefix.

Demo

  • Name your model; let's call it bot.
  • Add the prefix !bot.
  • Create a bot.md file in the static folder.
  • Add the context: context: "./static/bot.md".
  • Set enable to true.
  • Add the following content to bot.md (the complete config entry is sketched after the example below):
```
Hey GPT, if the provided question seems to be asking for an image, return the question with the prefix !dalle; otherwise return the question with the prefix !chatgpt.

Examples:

question:
  Generate an image of a black cat with light green eyes
you should return => !dalle Generate an image of a black cat with light green eyes

question:
  what is HTTP?
you should return => !chatgpt what is HTTP?

Example End.
```
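Putting the steps together, the demo's entry under models.Custom would look roughly like this (a sketch assembled from the steps above, not copied from the repo):

```js
{
    modelName: "bot",
    prefix: "!bot",
    enable: true,
    context: "./static/bot.md",
}
```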

Results

[screenshots: the image request is routed to !dalle and the HTTP question to !chatgpt]

Note: although this approach works, it adds overhead. The diagram below shows how the process works.

[diagram of the custom-model request flow]

See more about how the custom model works under the hood
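
To make the overhead concrete, here is a rough sketch of the two-pass flow; askGpt, dispatchByPrefix, and routerContext are hypothetical stand-ins for the bot's real internals:

```js
// Rough sketch of the two-pass flow (helper names are hypothetical).
async function handleMessage(text) {
  // Pass 1: the custom "bot" model asks GPT which prefix fits the input,
  // e.g. "!dalle Generate an image of a black cat with light green eyes".
  const routed = await askGpt(routerContext + "\n\nquestion:\n" + text);

  // Pass 2: the prefixed message is dispatched again to DALL-E or ChatGPT,
  // so every request costs two model calls instead of one.
  return dispatchByPrefix(routed);
}
```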


Apart from that, achieving this kind of functionality requires an NLP model on the server side, which may slow down responses.
Note: none of the models used in this bot are free, so any mistake may cost money.

In the future, I may use a custom TensorFlow NLP model.

Another option: Google is releasing a new AI model, Bard, which can respond to messages with a latency of seconds.

Bard Overview

```
error /root/WhatsApp-Ai-bot/node_modules/whatsapp-web.js/node_modules/puppeteer: Command failed.
Exit code: 1
Command: node install.js
Arguments:
Directory: /root/WhatsApp-Ai-bot/node_modules/whatsapp-web.js/node_modules/puppeteer
Output:
The chromium binary is not available for arm64.
If you are on Ubuntu, you can install with:

sudo apt install chromium

sudo apt install chromium-browser

/root/WhatsApp-Ai-bot/node_modules/whatsapp-web.js/node_modules/puppeteer/lib/cjs/puppeteer/node/BrowserFetcher.js:119
throw new Error();
^

Error
at /root/WhatsApp-Ai-bot/node_modules/whatsapp-web.js/node_modules/puppeteer/lib/cjs/puppeteer/node/BrowserFetcher.js:119:27
at FSReqCallback.oncomplete (node:fs:198:21)

Node.js v22.3.
```

How to solve it?
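
A common workaround (not from this thread, so treat it as an untested suggestion): install Chromium through apt as the log suggests, set PUPPETEER_SKIP_CHROMIUM_DOWNLOAD=true before installing dependencies so the failing download step is skipped, and point whatsapp-web.js at the system binary. The executable path below is an assumption; verify it with which chromium-browser:

```js
// Untested sketch: use the system Chromium on arm64 instead of the
// bundled binary. The executablePath is an assumption for Ubuntu.
const { Client } = require("whatsapp-web.js");

const client = new Client({
  puppeteer: {
    executablePath: "/usr/bin/chromium-browser",
  },
});
```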

Moved this discussion here.