
Support for guidance/structured output with prompt API


To aid programmability, reduce the compatibility risk of the API returning different results across browsers, and avoid the challenges of updating a shipping model in the browser (e.g. Google Model V1 to Google Model V2), please consider making techniques like guidance and structured outputs an integral part of the prompt API.

Problem Illustration

Consider the following web developer scenarios, where a developer is:

  1. Classifying a product review as the user types, in order to ask follow-up questions.
  2. Building a chat bot and wanting to programmatically detect whether a question should be routed a particular way.
  3. Building a reading-comprehension assistive extension that poses questions based on the web page content.

Web developers who attempt to parse the free-form response will have a hard time writing code that is model- and browser-agnostic.
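To make the problem concrete, here is a minimal sketch of scenario 1. It assumes the `ai.assistant.create()` / `session.prompt()` shape implied by this explainer, and `reviewText` and the prompt wording are purely illustrative:

```js
// Illustrative sketch only; assumes the ai.assistant.create()/session.prompt()
// shape implied by this explainer. reviewText is a hypothetical variable.
const session = await ai.assistant.create();
const response = await session.prompt(
  `Classify the sentiment of this review as "positive", "negative" or "neutral": ${reviewText}`
);

// Without constrained output the reply might be `positive`, `Sentiment: Positive.`,
// or a full sentence, and it can change between models and browser versions.
// Developers end up writing fragile, model-specific parsing such as:
const sentiment = response.toLowerCase().match(/positive|negative|neutral/)?.[0];
if (!sentiment) {
  // Fall back, re-prompt, or give up.
}
```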

Constraining Output

One way to solve this problem is to use guidance or techniques like it. At a high level, these techniques work by restricting the next allowed token from the LLM so that the output conforms to a grammar. Guidance works on top of a model, is model agnostic, and only changes the logits from the last layer of the model before sampling. One implementation detail is that guidance needs to know, for the next possible token, all of the tokens that share its prefix, i.e. it needs visibility into the model's token vocabulary, in order to function (explanation).

With guidance (demo) we get better consistency across models and responses that are immediately parseable with JavaScript.
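For intuition, the following is a conceptual sketch of the logit-masking step, not code from the guidance library; `allowedTokenIds` and `grammarState` stand in for whatever state machine tracks the grammar's valid continuations:

```js
// Conceptual sketch of constrained decoding; not actual guidance code.
// At each decoding step the grammar determines which token ids are legal,
// and every other token's logit is masked out before sampling.
function constrainLogits(logits, allowedTokenIds) {
  const masked = logits.slice();
  for (let tokenId = 0; tokenId < masked.length; tokenId++) {
    if (!allowedTokenIds.has(tokenId)) {
      masked[tokenId] = -Infinity; // a disallowed token can never be sampled
    }
  }
  return masked;
}

// Per step: mask, sample, then advance the grammar state, e.g.
//   const next = sample(constrainLogits(logits, grammarState.allowedTokenIds()));
//   grammarState.advance(next);
```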


Proposal

The proposal is to add responseJsonSchema to the AIAssistantPromptOptions.

```webidl
dictionary AIAssistantPromptOptions {
  AbortSignal signal;
  DOMString? responseJsonSchema;
};
```

JSON Schema is familiar to web developers. However, JSON Schema is a superset of what techniques like guidance can achieve today. For example, constraints such as dependentRequired cannot be enforced.
Either the API can state that only property names, value types, enums, and arrays are enforced, or the Prompt API should validate the response with a JSON Schema validator and indicate that the response is non-conformant. There is a slight preference for the first option because of its simplicity.
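For illustration, a web developer could then write something like the following; the schema, prompt text, and session creation call are hypothetical:

```js
// Hypothetical usage of the proposed option; schema and prompt are illustrative.
const session = await ai.assistant.create();

const schema = JSON.stringify({
  type: "object",
  properties: {
    sentiment: { type: "string", enum: ["positive", "negative", "neutral"] },
    followUpQuestion: { type: "string" }
  },
  required: ["sentiment"]
});

const response = await session.prompt(
  `Classify this review and suggest a follow-up question: ${reviewText}`,
  { responseJsonSchema: schema }
);

// The response is immediately parseable, regardless of which model is shipping.
const { sentiment, followUpQuestion } = JSON.parse(response);
```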

Other Approaches

  • llama.cpp supports GBNF grammars for restricting LLM responses - link
  • OpenAI recently added JSON Schema support to restrict output: [Structured Outputs](https://openai.com/index/introducing-structured-outputs-in-the-api/)
  • Exposing logprobs and token information from the model: this issue is in a similar vein to #20, the difference being that it considers making output constraining part of the API itself.

In general we're excited about exploring this. Minor API surface nitpicks:

  • There's no need to have it be nullable; all dictionary entries are already optional.
  • Per https://w3ctag.github.io/design-principles/#casing-rules it should be something like responseJSONSchema, not responseJsonSchema
  • I think providing JSON as a string is pretty unusual, even though I understand it makes sense theoretically. I would suggest we take it as an object and then post-process it. Probably we would do the equivalent of: JSON.stringify(providedObject) -> pass the resulting JSON string to some JSON schema library. This feels a bit roundabout but I suspect for developer ergonomics it's way better.

So to summarize: object responseJSONSchema in the dictionary.
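Concretely, that would look something like this; still a sketch, reusing the hypothetical schema and session from the examples above:

```js
// Sketch of the revised shape: the schema is passed as a plain object,
// and the name follows the TAG casing rules (responseJSONSchema).
const response = await session.prompt(`Classify this review: ${reviewText}`, {
  responseJSONSchema: {
    type: "object",
    properties: {
      sentiment: { type: "string", enum: ["positive", "negative", "neutral"] }
    },
    required: ["sentiment"]
  }
});

// Internally the browser could do the equivalent of
// JSON.stringify(options.responseJSONSchema) and hand the resulting string
// to its JSON Schema / constrained-decoding machinery.
```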