Ever wished you could unleash the power of large language models with just a few lines of code? Look no further! With our new lollms_client_js
library, you can easily generate text using various parameters and configurations. Intrigued? Let's dive in!
Get started by installing the library with npm:
npm install lollms_client_js
The library provides functions to interact with the lollms server. On the server side, you need to install lollms and mount a number of personalities; then you can use this library to generate text either with your own conditioning or by summoning one of the personalities mounted on the server side.
For the moment, two functions are exposed:
generateText: Generates text from a prompt. If you specify a personality id (between 0 and the number of mounted personalities), you can send the user prompt directly. If you use -1 as the personality id, you need to format the prompt yourself. You can use simple completion, such as sending "Once upon a time " and letting the AI do the completion, or you can use one of our advised prompt formats. The advised prompt format is:
!@>system: here you place your conditioning
!@>user: here you put the user prompt
!@>ai:
You can also use instruct mode:
!@>instruct: here you put the instruction
!@>process:
Lollms uses the !@> marker to detect role changes, and it uses the model's own EOS token to detect the end of a sentence.
listMountedPersonalities: Lists all mounted personalities. The index of each personality can be used in generateText.
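For instance, here is a minimal sketch of a manually formatted prompt used with personality -1; it assumes the generateText call signature (prompt plus an options object) shown in the examples below:
import { generateText } from 'lollms_client_js';
// Build a prompt with the advised !@> role markers; personality -1 means no server-side conditioning
const prompt =
  "!@>system: You are a concise storytelling assistant.\n" +
  "!@>user: Tell me a one-sentence story about a lighthouse.\n" +
  "!@>ai:";
const reply = await generateText(prompt, { personality: -1 });
console.log(reply);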
To use generation without a personality, import the generateText
function from the library and start generating text with the desired parameters:
import { generateText } from 'lollms_client_js';
// Prompt
const prompt = "Once upon a time";
const generatedText = await generateText(prompt);
console.log(generatedText);
The generateText
function supports the following parameters:
prompt (string, required): Initial input text prompt for text generation
host (string, optional, default: "http://localhost:9600"): Host URL of the lollms server
model_name (string, optional): Name of the model to use for text generation
personality (number, optional, default: -1): Personality index for the model
n_predict (number, optional, default: 1024): Maximum number of tokens to generate
stream (boolean, optional, default: true): Stream the generated text or not
temperature (number, optional, default: 1.0): Sampling temperature for text generation
top_k (number, optional, default: 50): Top-k sampling parameter for text generation
top_p (number, optional, default: 0.95): Top-p sampling parameter for text generation
repeat_penalty (number, optional, default: 0.8): Repeat penalty for text generation
repeat_last_n (number, optional, default: 40): Number of tokens to consider for the repeat penalty
seed (number, optional): Random seed for text generation
n_threads (number, optional, default: 8): Number of threads to use for text generation
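As an illustration, the sketch below (assuming the options are passed as a second object argument, as in the examples in this README) disables streaming and pins a seed for reproducible output:
import { generateText } from 'lollms_client_js';
// Wait for the full completion instead of streaming, and fix the seed for reproducibility
const text = await generateText("Write a haiku about the sea", {
  host: "http://localhost:9600", // default lollms server address
  stream: false,                 // return the whole completion at once
  n_predict: 64,
  temperature: 0.7,
  seed: 42,                      // fixed seed so repeated calls sample the same tokens
});
console.log(text);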
The listMountedPersonalities function retrieves a list of all the personalities currently mounted on the lollms server. It is particularly useful when you want to explore the different personas available for text generation. With it, you can select a specific personality index to generate text that aligns with a particular tone, style, or domain knowledge.
To use this function, simply call it from the lollms_client_js
library. The function will return an array of personalities, each represented by an object containing at least an ID and a name. This allows you to easily identify and select the appropriate personality for your text generation needs.
import { listMountedPersonalities } from 'lollms_client_js';
async function showPersonalities() {
const personalities = await listMountedPersonalities();
console.log(personalities);
}
showPersonalities();
The function returns a promise that resolves to an array of objects. Each object represents a personality, containing the following properties:
id (number): The unique identifier of the personality.
name (string): The name of the personality.
Example output:
[
{ "id": 0, "name": "Creative Writer" },
{ "id": 1, "name": "Tech Enthusiast" },
{ "id": 2, "name": "Science Fiction Guru" }
]
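Putting the two functions together, here is a sketch (assuming the return shape shown above) that looks up a personality by name and uses its index for generation:
import { generateText, listMountedPersonalities } from 'lollms_client_js';
async function generateWithPersonality(name, prompt) {
  const personalities = await listMountedPersonalities();
  // Find the mounted personality whose name matches; fall back to -1 (no personality)
  const match = personalities.find(p => p.name === name);
  const personalityId = match ? match.id : -1;
  return generateText(prompt, { personality: personalityId });
}
const story = await generateWithPersonality("Creative Writer", "Write a short story about a robot.");
console.log(story);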
Check out and adapt the example below for a seamless start:
import { generateText } from 'lollms_client_js';
// Prompt
const prompt = "Once upon a time";
// Custom Configuration
const configurations = {
model_name: 'myModel',
personality: 1,
n_predict: 256,
temperature: 0.85,
top_k: 20,
repeat_penalty: 0.7,
};
// Generate Text
const generatedText = await generateText(prompt, configurations);
console.log(generatedText);
We welcome and appreciate your contributions! If you run into a problem or have an idea, please open a new issue.
This project is licensed under the Apache-2.0 License.
Please make sure the lollms server is configured to accept CORS requests from the server serving the lollms_client_js
client. Add the host address to the allowed_origins
list in the configs/local_configs.yaml
file.
For example, if the origin is https://mydomain.com:95620
, add it there. This step ensures smooth lollms operation.
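As an illustration, the corresponding entry in configs/local_configs.yaml might look like this (a sketch assuming allowed_origins is a plain YAML list):
allowed_origins:
  - https://mydomain.com:95620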
Hey there! I am ParisNeo, a seasoned research engineer with a deep passion for Artificial Intelligence, Robotics, and Space. I enjoy creating extraordinary AI models, so feel free to join me on my Twitter @ParisNeo_AI, Discord, Sub-Reddit, and Instagram @spacenerduino