avoid expensive initialization
Opened this issue · 5 comments
Hello,
I'm using the following:
```ts
import { encode, isWithinTokenLimit } from 'gpt-tokenizer/model/text-davinci-003';
```
which seems to slow down the initialization, enough that I can't deploy to cloudflare workers with this library. Is there a way to lazily initialize things?
We're experiencing this as well -- requiring this package takes ~600ms on my M1 MBP:
```
❯ time node -r gpt-tokenizer -e "1"

________________________________________________________
Executed in  548.82 millis    fish         external
   usr time  616.81 millis   4.71 millis  612.10 millis
   sys time   99.25 millis   9.21 millis   90.04 millis
```
Would it be hard to lazily require the encodings only once the first `encode` call is made?
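The lazy pattern being asked for could look roughly like this — a minimal sketch where the expensive step is simulated with a stub (in practice it would be the `require('gpt-tokenizer')` call or the base64 parse of the encodings):

```javascript
// Sketch of lazy initialization: the expensive setup runs on the first
// encode() call instead of at module load / process startup.
let encoder = null;

function expensiveInit() {
  // Stand-in for loading and parsing the encoding tables.
  return { encode: (text) => Array.from(text, (c) => c.charCodeAt(0)) };
}

function encode(text) {
  if (encoder === null) {
    encoder = expensiveInit(); // runs once, on first use
  }
  return encoder.encode(text);
}

module.exports = { encode };
```

The trade-off is that the first `encode` call absorbs the full initialization cost, which matters if that call sits on a latency-sensitive path.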
Same issue on my end.
For Cloudflare Workers I suggest you look at this:
https://github.com/dqbd/tiktoken#cloudflare-workers
To get around the 400ms startup time limit of Cloudflare Workers, I just import the library within fetch.
```typescript
export default {
  async fetch(request: Request, env: Env, ctx: ExecutionContext): Promise<Response> {
    const { encode } = await import('gpt-tokenizer');
    // ....
  }
}
```
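A small refinement on that idea is memoizing the import promise at module scope, so the heavy load runs at most once per isolate and concurrent requests share it (the module system caches imports anyway, but this makes the single-load guarantee explicit). A generic sketch of the pattern, with a counting loader standing in for `() => import('gpt-tokenizer')`:

```javascript
// Memoize an async loader: the loader runs at most once; concurrent and
// repeated callers all get the same promise.
function memoizeAsync(loadFn) {
  let promise = null;
  return () => {
    if (!promise) promise = loadFn();
    return promise;
  };
}

// Demo with a counting loader to show the single execution.
let calls = 0;
const getTokenizer = memoizeAsync(async () => {
  calls += 1;
  return { encode: (text) => text.split(' ') };
});

async function demo() {
  const a = await getTokenizer();
  const b = await getTokenizer();
  return { same: a === b, calls };
}
```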
Regarding the library suggested by @thdoan: I couldn't get tiktoken or js-tiktoken to work within the limits of Cloudflare Workers. js-tiktoken bundles all the encoders, which pushes the bundle past the 1 MB limit of the Cloudflare Worker (see here). And tiktoken/lite, which lets you import only the encoder you need and so fits under the 1 MB limit, has a bug that has not yet been fixed.
When designing the library, the decision was made to keep the tokenizer loadable synchronously.
The large startup time is likely because of the large file containing the encodings and the base64 parsing that needs to happen after the load.
You could experiment with enabling V8's code cache, introduced in Node 22.1.0. Startup should be much faster with it enabled. Here's more info about this.
We could also experiment with an alternative way of storing the encodings so that parsing is much lighter on resources. We'd need to profile first to see what is actually causing the bulk of the startup time.
Suggestions and PRs welcome, as I'm constrained on time right now.