huggingface/transformers.js
State-of-the-art Machine Learning for the web. Run 🤗 Transformers directly in your browser, with no need for a server!
JavaScript · Apache-2.0
Issues
- Will these mistakes have an impact? (#994, opened by aidscooler, 0 comments)
- Transformer not working in js (#1021, opened by djaffer, 8 comments)
- Failed when using custom model (onnx) (#1018, opened by limseung1, 1 comment)
- Module not found: Can't resolve './' in Next.js (#980, opened by weekenchen, 4 comments)
- Build from source instructions (#1013, opened by pdufour, 2 comments)
- Whisper Error: no available backend found (#1006, opened by stinoga, 1 comment)
- Point in file import (#1008, opened by manalejandro, 0 comments)
- `num_return_sequences` not working (#1007, opened by songkeys, 2 comments)
- [WebGPU] zero-shot-classification model Xenova/nli-deberta-v3-xsmall not accelerated by WebGPU (#955, opened by martin-ada-adam, 0 comments)
- Error while converting LLama-3.1:8b to ONNX (#1000, opened by charlesbvll, 3 comments)
- Add support for moonshine ASR models (#990, opened by bil-ash, 0 comments)
- Download ort-wasm-simd-threaded.jsep.wasm slow when running phi-3.5-webgpu sample in browser (#999, opened by crackleeeessyp, 1 comment)
- Cannot import PretrainedModelOptions (or quantization data types) in typescript (#998, opened by jens-ghc, 3 comments)
- Create a lightweight Tokenizers.js package (#982, opened by r4ghu, 4 comments)
- [v3.x] Cannot load whisper-v3-large-turbo (#989, opened by liuhuapiaoyuan, 3 comments)
- Aw, Snap! Crash in Chrome Using Whisper (#988, opened by stinoga, 3 comments)
- Feature Request: on transformerJS model download, check if previous download is up-to-date, and allow offline use. (#992, opened by hpssjellis, 0 comments)
- Support for model2vec (#970, opened by do-me, 0 comments)
- Supporting Multiple Pipelines? (#975, opened by kelayamatoz, 11 comments)
- Can't create a session (local model) (#979, opened by djannot, 0 comments)
- Zombies in memory - something is blocking (re)loading of Whisper after a page is closed and re-opened (#958, opened by flatsiedatsie, 4 comments)
- It's ready (#968, opened by flatsiedatsie, 0 comments)
- nvidia/canary-1b (#977, opened by pdufour, 1 comment)
- I would like to help (#973, opened by cyberluke, 1 comment)
- Exported onnx flan-t5-small behavior is incorrect in tensorflow.js but ok in python (#969, opened by rolf-moz, 1 comment)
- Llama 3.2 conversion error - onnx.ModelProto exceeds maximum protobuf size of 2GB: 4943971428 (#967, opened by jonas-elias, 1 comment)
- Confusing Error Message When Calling `apply_chat_template` Without `chat_template` Configuration (#964, opened by zhzLuke96, 4 comments)
- Failed to encode text with T5's tokenizer (#959, opened by zcbenz, 11 comments)
- TypeError: e.split is not a function (#950, opened by flatsiedatsie, 0 comments)
- RangeError: Array buffer allocation failed (#952, opened by flatsiedatsie, 0 comments)
- `length_penalty` not implemented (#951, opened by tarekziade, 10 comments)
- Error: Could not locate file (500 error) (#944, opened by iamhenry, 1 comment)