A small example instrumenting a Next.js LLM app. Current issue: I can't seem to get the OpenAI telemetry to reach the Phoenix instance I have running remotely.
You need to set up a `.env` based on `.env.template`.
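For reference, a minimal sketch of what such a template typically holds. The variable names here are assumptions from my setup, not a spec; match them to whatever `.env.template` actually lists:

```
# Hypothetical variable names; check .env.template for the real ones
OPENAI_API_KEY=sk-...
# Base URL of the remote Phoenix collector (no trailing slash)
PHOENIX_COLLECTOR_ENDPOINT=https://your-phoenix-host
# Only needed if your Phoenix deployment requires auth
PHOENIX_API_KEY=...
```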
Once those are set, run `yarn dev`.
Then you can exercise it with an HTTP call: `POST /api/completions`.
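For example (a sketch; I'm assuming the route takes a JSON body with a `prompt` field, so adjust to whatever the handler actually expects):

```ts
// Hypothetical request shape; check the route handler for the real contract.
const res = await fetch("http://localhost:3000/api/completions", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({ prompt: "Say hello" }),
});
console.log(await res.json());
```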
I'm trying to use auto-instrumentation for the OpenAI Node SDK.
I'm not using the Vercel AI SDK, just the plain `openai` npm package.
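For context, this is roughly how I register it (a minimal sketch assuming OpenInference's `@arizeai/openinference-instrumentation-openai` package and an OTLP/HTTP exporter pointed at Phoenix; the `/v1/traces` path is Phoenix's OTLP ingest route, but the Bearer auth header is an assumption that depends on your deployment):

```ts
// instrumentation.node.ts — loaded once at server startup
import { registerInstrumentations } from "@opentelemetry/instrumentation";
import { OTLPTraceExporter } from "@opentelemetry/exporter-trace-otlp-proto";
import { NodeTracerProvider } from "@opentelemetry/sdk-trace-node";
import { SimpleSpanProcessor } from "@opentelemetry/sdk-trace-base";
import { OpenAIInstrumentation } from "@arizeai/openinference-instrumentation-openai";

const provider = new NodeTracerProvider({
  // Note: on older OTel JS SDKs, use provider.addSpanProcessor(...) instead.
  spanProcessors: [
    new SimpleSpanProcessor(
      new OTLPTraceExporter({
        // Phoenix ingests OTLP traces at /v1/traces
        url: `${process.env.PHOENIX_COLLECTOR_ENDPOINT}/v1/traces`,
        // Assumed Bearer auth; your Phoenix instance may differ or need none.
        headers: { Authorization: `Bearer ${process.env.PHOENIX_API_KEY}` },
      })
    ),
  ],
});
provider.register();

registerInstrumentations({
  instrumentations: [new OpenAIInstrumentation()],
});
```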
I can get a trace (or a span? I'm not sure of the terminology) to appear, but it doesn't seem to include the LLM inputs and outputs.
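One possibility I'm still checking (an assumption, not a confirmed diagnosis): Next.js bundles server-side dependencies, which can prevent the auto-instrumentation hook from ever patching the `openai` module, leaving you with route-level spans that carry no LLM attributes. OpenInference exposes a manual hook for this case:

```ts
import OpenAI from "openai";
import { OpenAIInstrumentation } from "@arizeai/openinference-instrumentation-openai";

// If bundling bypasses the require/import hook, patch the module explicitly.
const instrumentation = new OpenAIInstrumentation();
instrumentation.manuallyInstrument(OpenAI);
```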