TypeError: Failed to fetch dynamically imported module
System Info
@xenova/transformers 3.0.0-alpha.0
Chrome: Version 124.0.6367.93 (Official Build) (arm64)
OS: macOS 14.4.1 (23E224)
Environment/Platform
- Website/web-app
- Browser extension
- Server-side (e.g., Node.js, Deno, Bun)
- Desktop app (e.g., Electron)
- Other (e.g., VSCode extension)
Description
I ran pnpm run dev in the example webgpu-chat. I can download the model on http://localhost:5173, but the chat is not ready because of the error reported in the console:
@xenova_transformers.js?v=9e5deabe:1386 Uncaught (in promise) Error: no available backend found. ERR: [webgpu] TypeError: Failed to fetch dynamically imported module: http://localhost:5173/ort-wasm-simd-threaded.jsep.mjs
at pt (http://localhost:5173/node_modules/.vite/deps/@xenova_transformers.js?v=9e5deabe:1386:13)
at async e.create (http://localhost:5173/node_modules/.vite/deps/@xenova_transformers.js?v=9e5deabe:1906:20)
at async createInferenceSession (http://localhost:5173/node_modules/.vite/deps/@xenova_transformers.js?v=9e5deabe:9952:10)
at async constructSessions (http://localhost:5173/node_modules/.vite/deps/@xenova_transformers.js?v=9e5deabe:17531:21)
at async Promise.all (index 0)
at async Phi3ForCausalLM.from_pretrained (http://localhost:5173/node_modules/.vite/deps/@xenova_transformers.js?v=9e5deabe:17782:14)
at async AutoModelForCausalLM.from_pretrained (http://localhost:5173/node_modules/.vite/deps/@xenova_transformers.js?v=9e5deabe:20678:14)
at async Promise.all (index 1)
at async load (http://localhost:5173/src/worker.js?worker_file&type=module:137:32)
Is there any setting required to make it work?
Btw, I can chat with the model on https://huggingface.co/spaces/Xenova/experimental-phi3-webgpu
Reproduction
- Git clone https://github.com/xenova/transformers.js.git
- Go to transformers.js/examples/webgpu-chat
- pnpm install
- pnpm run dev
- Visit http://localhost:5173 in Chrome and click the "Load Model" button
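One thing worth checking in Vite projects (a sketch, not verified against this demo): the stack trace shows the module being served from .vite/deps, so excluding the library from Vite's dependency pre-bundling may let onnxruntime-web's dynamically imported .mjs resolve from node_modules instead:

```javascript
// vite.config.js -- hypothetical workaround sketch (not verified for this demo):
// keep @xenova/transformers out of Vite's dependency pre-bundling so that
// onnxruntime-web's dynamic import of ort-wasm-simd-threaded.jsep.mjs is
// resolved from node_modules rather than from the pre-bundled .vite/deps cache.
import { defineConfig } from 'vite'

export default defineConfig({
  optimizeDeps: {
    exclude: ['@xenova/transformers'],
  },
})
```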
I get the same issue, even when trying to pull in from a local checkout of onnxruntime-web:
import { env, AutoModelForCausalLM, AutoTokenizer } from '@xenova/transformers'

// Serve the onnxruntime-web runtime files from the local public/ directory
env.backends.onnx.wasm.wasmPaths = '/onnxruntime-web/'
// Load the model from disk instead of the Hugging Face Hub
env.allowRemoteModels = false
env.allowLocalModels = true

const model_id = '../model';
const tokenizer = await AutoTokenizer.from_pretrained(model_id, {
  legacy: true
})
I have copied the contents of node_modules/onnxruntime-web/dist/ to public, and it's still trying to access an ort-wasm-simd-threaded.jsep.mjs file which does not exist in onnxruntime-web.
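For reference, the wasmPaths override can also point at a CDN dist/ directory instead of a local copy. This is only a sketch: it assumes a published onnxruntime-web build whose dist/ folder actually ships the ort-wasm-simd-threaded.jsep.mjs artifact, and the pinned version below is an assumption, not a confirmed fix:

```javascript
// Hypothetical sketch: serve the onnxruntime-web runtime from a CDN whose
// dist/ folder contains ort-wasm-simd-threaded.jsep.mjs. The pinned version
// is an assumption -- it must match a build that ships that file.
import { env } from '@xenova/transformers'

env.backends.onnx.wasm.wasmPaths =
  'https://cdn.jsdelivr.net/npm/onnxruntime-web@1.18.0/dist/'
```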
This is because the demo uses an unreleased version of onnxruntime-web v1.18.0, which I have mentioned a few times when I've linked to the source code. When it is released, I will update the source code so that it works correctly. Thanks for understanding!
Thanks for the feedback. Looking forward to the release.