openai/automated-interpretability

About the 'logprobs' field in the response object


Hi,

I see that the simulator postprocesses the response from 'text-davinci-003' using the response field 'logprobs'.

However, according to the OpenAI documentation, the 'logprobs' field is being deprecated because the completion response object is being replaced by the chat completion object, and the model 'text-davinci-003' is also being deprecated.

I now have access to gpt-4 and gpt-3.5-turbo, which return chat completion response objects. Is there any way to run the neuron-explainer using these two models, i.e. without the 'logprobs' field?

Or is it necessary to call 'text-davinci-003' or other models that return a completion response object containing 'logprobs'?
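For reference, this is the shape of the legacy Completions 'logprobs' payload I mean (a minimal sketch; the field names follow the legacy Completions response format, and the sample tokens and values are made up for illustration):

```python
# Sketch of the legacy Completions response structure that the simulator
# postprocesses. `top_logprobs` holds one dict per generated token,
# mapping candidate token -> log-probability. Sample values are invented.
sample_response = {
    "choices": [{
        "text": " 5",
        "logprobs": {
            "tokens": [" 5"],
            "token_logprobs": [-0.12],
            "top_logprobs": [{" 5": -0.12, " 4": -2.3, " 3": -4.1}],
        },
    }]
}

def top_candidates(response: dict, position: int = 0) -> list[tuple[str, float]]:
    """Return candidate tokens at `position`, sorted by log-probability (desc)."""
    top = response["choices"][0]["logprobs"]["top_logprobs"][position]
    return sorted(top.items(), key=lambda kv: kv[1], reverse=True)

print(top_candidates(sample_response))
# -> [(' 5', -0.12), (' 4', -2.3), (' 3', -4.1)]
```

The chat completion object nests things differently (choices carry a `message` rather than `text`), which is why I am unsure the simulator's postprocessing can be reused as-is.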

Thanks a lot!