About the 'logprobs' field in the response object
Opened this issue · 1 comment
HamLaertes commented
Hi,
I find that the simulator postprocesses the response of 'text-davinci-003' using the response field 'logprobs'.
However, as I read OpenAI's documentation, the 'logprobs' field is going to be deprecated, since the completion response object will be replaced by the chat completion object, and the model 'text-davinci-003' is also going to be deprecated.
I now have access to gpt-4 and gpt-3.5-turbo, which return the chat completion response object. Is there any way to run the neuron-explainer using these two models, i.e. without the 'logprobs' field?
Or is it necessary to call 'text-davinci-003' and other models with the completion response object that returns 'logprobs'?
Thanks a lot!
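For reference, a minimal sketch of what the difference looks like in practice. The field names below reflect the documented shapes of the two response objects (the legacy completion response nests per-token data directly under `choices[0]["logprobs"]`, while the chat completion response nests it under `choices[0]["logprobs"]["content"]`); the mock responses themselves are illustrative, not real API output.

```python
def completion_top_logprobs(response: dict) -> list[dict]:
    """Extract per-token top_logprobs from a legacy completion-style response."""
    return response["choices"][0]["logprobs"]["top_logprobs"]


def chat_top_logprobs(response: dict) -> list[list[dict]]:
    """Extract per-token top_logprobs from a chat-completion-style response."""
    content = response["choices"][0]["logprobs"]["content"]
    return [token_info["top_logprobs"] for token_info in content]


# Mock responses illustrating the two shapes.
legacy = {
    "choices": [{
        "text": " yes",
        "logprobs": {
            "tokens": [" yes"],
            "token_logprobs": [-0.12],
            # one dict of {token: logprob} per generated token
            "top_logprobs": [{" yes": -0.12, " no": -2.3}],
        },
    }]
}

chat = {
    "choices": [{
        "message": {"content": "yes"},
        "logprobs": {
            # one entry per generated token, each with its own top_logprobs list
            "content": [{
                "token": "yes",
                "logprob": -0.12,
                "top_logprobs": [
                    {"token": "yes", "logprob": -0.12},
                    {"token": "no", "logprob": -2.3},
                ],
            }]
        },
    }]
}

print(completion_top_logprobs(legacy))  # [{' yes': -0.12, ' no': -2.3}]
print(chat_top_logprobs(chat))
```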
williamrs-openai commented
Explanation will work without logprobs, but our current scoring method won't.
We will update if we find a better scoring method.