This repository hosts a collection of nodes developed for ComfyUI. It aims to share useful components that enhance the functionality of ComfyUI projects.
You may need to manually install the requirements; they should be listed in `requirements.txt`. You may also need to install the following libraries with `pip install`:
- configparser
- groq
- transformers
- torch
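For example, in the Python environment that ComfyUI runs in:

```
pip install configparser groq transformers torch
```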
This node is adapted and enhanced from the Save Text File node found in the YMC GitHub ymc-node-suite-comfyui pack. It was modified to make its output easier to use: the output pin now includes the input text along with a delimiter and a padded number, offering a versatile solution for file naming and automatic text file generation for captions.
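As an illustration of that naming scheme (a sketch only, not the node's actual code; the delimiter and padding values here are made up):

```python
# Sketch of the delimiter + padded-number naming scheme only;
# the node's real logic lives in the Save Text File node itself.
text, delimiter, counter, padding = "caption", "_", 7, 4
file_name = f"{text}{delimiter}{str(counter).zfill(padding)}.txt"
print(file_name)  # caption_0007.txt
```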
This node downloads an image from a URL and lets you use it. It also outputs the width and height of the image.
- By default, it will save the image to the `/input` directory.
- Clear the `save_path` line to prevent saving the image (it will still be saved in the temp folder).
- If you enter a name in the `save_file_name_override` field, the file will be saved with this name.
  - You can enter or omit the file extension.
  - If you enter one, it will rename the file to the chosen extension without converting the image.
- Supported image formats: JPG, JPEG, PNG, WEBP.
- Does not support saving with transparency.
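Conceptually, the node behaves along these lines (a sketch assuming `requests` and `Pillow`, not the node's actual implementation; the URL is a placeholder):

```python
import io
import requests
from PIL import Image

# Conceptual sketch only; the node's real implementation may differ.
url = "https://example.com/image.png"  # placeholder URL
data = requests.get(url, timeout=30).content

image = Image.open(io.BytesIO(data)).convert("RGB")  # RGB conversion drops transparency
width, height = image.size  # the node also outputs these

image.save("input/fetched_image.png")  # by default the node saves under /input
```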
This node makes an API call to Groq and returns the response in text format. You need to manually enter your Groq API key into the `GroqConfig.ini` file.
Currently, the Groq API can be used for free, with very friendly and generous rate limits.
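For illustration, the key can be read from the config file with `configparser` along these lines (the section and option names here are assumptions; check the bundled `GroqConfig.ini` for the actual layout):

```python
import configparser

# The section/option names below are assumptions for illustration;
# the bundled GroqConfig.ini defines the actual layout.
config = configparser.ConfigParser()
config.read("GroqConfig.ini")
api_key = config["API"]["key"]
```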
model: Choose one of the available models from the dropdown. The list needs to be manually updated when additional models are added. Currently supports `mixtral-8x7b-32768`, `llama2-70b-4096`, and `gemma-7b-it`.
preset: This is a dropdown with a few preset prompts, the user's own presets, or the option to use a fully custom prompt. See examples and presets below.
system_message: The system message to send to the API. This is only used with the `Use [system_message] and [user_input]` option in the preset list. The other presets provide their own system message.
user_input: This is used with the `Use [system_message] and [user_input]` preset, but can also be used with the other presets. In the system message, just mention the USER to refer to this input field. See the presets for examples.
temperature: Controls the randomness of the response. A higher temperature leads to more varied responses.
max_tokens: The maximum number of tokens that the model can process in a single response. Limits can be found here.
top_p: The cumulative probability threshold for token sampling (nucleus sampling). Lower values restrict sampling to the most probable tokens and give more predictable results.
seed: Random seed. Change the `control_after_generate` option below if you want to re-use the seed or get a new generation each time.
control_after_generate: Standard ComfyUI seed controls. Set it to `fixed` or `randomize` based on your needs.
stop: Enter a word or stopping sequence which will terminate the AI's output. The string itself will not be returned.
- Note: `stop` is not compatible with `json_mode`.
json_mode: If enabled, the model will output the result in JSON format.
- Note: You must include a description of the desired JSON format in the system message. See the examples below.
- Note: `json_mode` is not compatible with `stop`.
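For illustration, these parameters map onto a Groq chat-completion request roughly as in the sketch below (using the `groq` Python library; this is not the node's actual code, and the model, messages, and values are placeholders):

```python
from groq import Groq

client = Groq(api_key="your-api-key")  # in this node, read from GroqConfig.ini

# Rough sketch of how the node's inputs map onto the request;
# the node's actual request construction may differ.
response = client.chat.completions.create(
    model="mixtral-8x7b-32768",
    messages=[
        {"role": "system", "content": "You are a prompt generator."},  # system_message
        {"role": "user", "content": "A castle at sunset"},             # user_input
    ],
    temperature=0.7,
    max_tokens=1024,
    top_p=1.0,
    seed=42,
    stop=None,  # a stop string is mutually exclusive with JSON mode
    # response_format={"type": "json_object"},  # json_mode; requires stop=None
)
print(response.choices[0].message.content)
```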
The following presets can be found in the `\nodes\groq\DefaultPrompts.json` file. They can be edited, but it's better to copy the presets to the `UserPrompts.json` file.
This preset (default) means that the next two fields are fully utilized. Manually enter the instruction to the AI in the `system_message` field, and any specific requests in the `user_input` field. Combined, they make up the complete instruction to the LLM. Sometimes a system message is enough, and inside the system message you can even refer to the contents of the user input.
This is a tailored instruction that will return a randomized Stable Diffusion-like prompt. If you enter some text in the `user_input` area, you should get a prompt about that subject. You can also leave it empty and it will create its own examples based on the underlying prompt. You should get better results by providing it with a short sentence to start it off.
This will return a negative prompt intended to be used together with the `user_input` string to complement it and enhance the resulting image.
This will return a list of 10 subjects for an image, described in a simple and short style. These work well as `user_input` for the `Generate a prompt about [user_input]` preset.
You should also manually turn on `json_mode` when using this prompt. You should get a stable JSON-formatted output from it in a similar style to `Generate a prompt about [user_input]` above.
Note: You can actually use the entire result (JSON and all) as your prompt. Stable Diffusion seems to handle it just fine.
Edit the `\nodes\groq\UserPrompts.json` file to create your own presets. Follow the existing structure and look at `DefaultPrompts.json` for examples.
Caution
This node is highly experimental and does not produce any useful results right now. It also requires you to download a specially trained model for it. It's just not worth the effort; it's mostly here to share a work-in-progress project.
This node utilizes a GPT-2 text inference model to generate a negative prompt that is supposed to enhance the aspects of the positive prompt.
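For context, generic GPT-2 text inference with the `transformers` library looks roughly like the sketch below; this node loads its own specially trained `weights.pt` instead of the stock model, so this is not its actual code:

```python
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

# Generic GPT-2 inference sketch; this node actually loads its own
# specially trained weights.pt rather than the stock "gpt2" model.
tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

inputs = tokenizer("a beautiful portrait of a woman", return_tensors="pt")
with torch.no_grad():
    output = model.generate(
        **inputs,
        max_new_tokens=50,
        do_sample=True,
        pad_token_id=tokenizer.eos_token_id,
    )
print(tokenizer.decode(output[0], skip_special_tokens=True))
```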
Important
Installation Step: Download the `weights.pt` file from the project's Hugging Face repository.
Place the `weights.pt` file in the following directory of your ComfyUI setup without renaming it:
`\ComfyUI\custom_nodes\ComfyUI-mnemic-nodes\nodes\negativeprompt`
The directory should resemble the following structure (other files omitted; only the placement of `weights.pt` matters):
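```
ComfyUI/
└── custom_nodes/
    └── ComfyUI-mnemic-nodes/
        └── nodes/
            └── negativeprompt/
                └── weights.pt
```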
For additional information, please visit the project's GitHub page.