Atome-FE/llama-node
Believe in AI democratization. Llama for Node.js, backed by llama-rs, llama.cpp, and rwkv.cpp; runs locally on your laptop CPU. Supports llama/alpaca/gpt4all/vicuna/rwkv models.
Rust · Apache-2.0
Issues
- What files are compatible? (#124, opened by bedcoding, 1 comment)
- Alternative? (#123, opened by linonetwo, 5 comments)
- napi: not found on debian bullseye (#118, opened by loretoparisi, 5 comments)
- GGUF support? (#122, opened by nildotdev, 16 comments)
- cannot use RWKV models (#121, opened by rozek, 0 comments)
- Instructions for use with Electron (#120, opened by bishwenduk029, 2 comments)
- Can't run example on llama-2-13b-chat q4_0 (#116, opened by gioragutt, 11 comments)
- GPU version build not using GPU (#114, opened by dspasyuk, 3 comments)
- Time printings are gone (#119, opened by CodeJjang, 0 comments)
- The requested module 'llama-node/dist/llm/llama-cpp.js' does not provide an export named 'LoadConfig' (#117, opened by heaversm, 0 comments)
- Support for passing grammars (#115, opened by arthurwolf, 1 comment)
- Discord Access (#107, opened by HolmesDomain, 1 comment)
- Segmentation fault (#113, opened by ZGltYQ, 0 comments)
- Code only using 4 CPU, when I have 16 CPU (#69, opened by gaurav-cointab, 12 comments)
- foreign exception error (#76, opened by ralyodio, 1 comment)
- Llama2 quantized q5_1 (#108, opened by HolmesDomain, 3 comments)
- Unable to load latest GGML models using llama.cpp after latest quantisation changes (#60, opened by dev-bre, 0 comments)
- Use custom tokenizer (#105, opened by linonetwo, 0 comments)
- Should bring details in error message (#102, opened by linonetwo, 0 comments)
- Unsupported file version 101 when loading rwkv (#101, opened by linonetwo, 1 comment)
- Issue to build GPU version (#99, opened by dspasyuk, 0 comments)
- Llama.cpp Typescript: Cannot find name 'LoadModel' (#91, opened by synw, 0 comments)
- Support for the new k-quant methods in Llama.cpp (#95, opened by synw, 0 comments)
- the software has no reaction with no errors (#94, opened by adambnn, 0 comments)
- Can't run the example on MacOS M1 pro (#92, opened by greenido, 0 comments)
- llama-node/llama-cpp uses more memory than standalone llama.cpp with the same parameters (#85, opened by fardjad, 4 comments)
- Ggml v3 support in Llama.cpp (#84, opened by synw, 6 comments)
- Error: Missing field `nGpuLayers` (#80, opened by bakiwebdev, 1 comment)
- Error: Too many tokens predicted (#81, opened by dhd5076, 0 comments)
- Segmentation fault local cuda build (#78, opened by chrgeor, 3 comments)
- Embeddings.js file does not work correctly (#77, opened by skirodev, 2 comments)
- langchain integration (#56, opened by luca-saggese, 3 comments)
- basic unified API for all backends (#55, opened by end-me-please, 6 comments)
- app crashes when input is too long (#74, opened by ralyodio, 0 comments)
- getting error after second run (#71, opened by ralyodio, 1 comment)
- Illegal instruction (core dumped) (#72, opened by itz-coffee, 8 comments)
- req: support async/await (#68, opened by ralyodio, 5 comments)
- is this supposed to have a gpu to run? (#64, opened by ralyodio, 8 comments)