svilupp/PromptingTools.jl

Idea for templates

MilesCranmer opened this issue · 1 comment

Hey @svilupp,

Awesome package; thanks for making it!

I just wanted to share an idea for how prompt building could work in an idiomatic and generic way in Julia.

One thing that has been quite successful in Plots.jl is how you can easily describe default plotting recipes for different data structures using RecipesBase.jl (https://docs.juliaplots.org/stable/RecipesBase/syntax/), so that a user can basically just call plot(obj) on some complicated object and get an appropriate plot as output, without needing to extract the right series manually.

For example, for a custom Result type defined in a library, with data in (::Result).x and (::Result).y (and per-point errors in (::Result).ε), a package developer might define:

@recipe function f(r::Result; ε_max = 0.5)
    # set a default value for an attribute with `-->`
    xlabel --> "x"
    yguide --> "y"
    markershape --> :diamond
    # add a series for an error band
    @series begin
        # force an argument with `:=`
        seriestype := :path
        # ignore series in legend and color cycling
        primary := false
        linecolor := nothing
        fillcolor := :lightgray
        fillalpha := 0.5
        fillrange := r.y .- r.ε
        # ensure no markers are shown for the error band
        markershape := :none
        # return series data
        r.x, r.y .+ r.ε
    end
    # get the seriescolor passed by the user
    c = get(plotattributes, :seriescolor, :auto)
    # highlight big errors, otherwise use the user-defined color
    markercolor := ifelse.(r.ε .> ε_max, :red, c)
    # return data
    r.x, r.y
end

This basically sets up various plotting options by default and finally returns r.x, r.y, which would be passed to the corresponding @recipe defined for those types. These might just be ::Array, ::Array, and thus go into the normal plotting function.
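To make that concrete, here is a minimal sketch of a matching Result type and the resulting user experience (the field names x, y, and ε are assumed from the recipe above):

using Plots

# Hypothetical Result type with the fields the recipe above assumes
struct Result
    x::Vector{Float64}
    y::Vector{Float64}
    ε::Vector{Float64}
end

r = Result(collect(1.0:10.0), rand(10), 0.1 .* rand(10))
plot(r)               # the recipe fills in labels, the error band, and marker colors
plot(r; ε_max = 0.2)  # recipe keyword arguments can be overridden at the call site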

I am wondering if you might be able to do something similar here, for prompting on different types of structures.

For example, perhaps you could define a @promptrecipe macro that a user would invoke like:

@promptrecipe function f(d::MyType)
    @promptchild begin
        mapreduce --> true  # By default, we summarize this with an LLM mapreduce *before* adding to the main prompt
        
        d.params  # To be expanded by other `promptrecipe` and added to prompt
    end
    
    @promptchild begin
        d.id  # Would get forwarded to a prompt recipe for UInt64 which simply turns it into a string
    end
    
    @promptchild begin
        formatting --> :json  # Request specific serialization
        
        d.value
    end

    extraction --> true  #  Extract information with an LLM before putting into the final prompt
    extract --> Dict("key1" => String, "key2" => Bool)
    s = string(d.data)

    "The contents of MyType are: ```\n" * s * "\n```",  # Add to extraction prompt 
end

which would then get concatenated with the rest of the prompt. The children would be recursively expanded and added to the prompt in the same way.
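To make the recursive expansion concrete, here is a macro-free sketch of the same idea using plain multiple dispatch (every name below is hypothetical, not part of PromptingTools.jl):

# fallback: plain stringification
promptify(x) = string(x)
# e.g., IDs simply become strings (mirrors the UInt64 recipe mentioned above)
promptify(id::UInt64) = string(id)
# children of a container are expanded recursively and concatenated
promptify(v::AbstractVector) = join(promptify.(v), "\n")

struct MyType
    params::Dict{String, Any}
    id::UInt64
    value::Any
end

function promptify(d::MyType)
    # each child is expanded by its own method, then joined into the parent prompt
    parts = [promptify(d.params), promptify(d.id), promptify(d.value)]
    "The contents of MyType are: ```\n" * join(parts, "\n") * "\n```"
end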

The goal of this would be both to enable LangChain-like recipes for different tasks (mapreduce, generation, structured output, etc.) and to make it easier for users to pass arbitrary data to a language model and have it simply get serialized using library-defined recipes.

This is a pretty rough idea right now but just wanted to share it in case this might inspire the syntax when you get to work on templates! I think it requires a bit more careful thinking for sure.

There is definitely a very Julian way to handle things here which would give a unique advantage over LangChain, the same way Plots.jl is much more flexible for downstream libraries compared to Matplotlib (which is basically only defined on arrays, or if the user writes a whole .plot() scheme themselves).

Hi! I'm sorry for not getting back to you sooner.

I like the idea and line of thinking! I'd be keen to explore it further, but Slack or other chats might be more suitable for pinging ideas around. In general, I was always hoping someone would figure out a cool DSL for working with LLMs :)

I'll check out Recipes - I've never explored it.
Do you have any other examples of packages/patterns/DSLs worth exploring? I've looked at JuMP, Turing, Dagger, and Chain. From this set, the @chain macro personally resonated the most (because many people use it and it has "DAG semantics" while still feeling like piping).

I think there are two use cases for a DSL:
A) Generating "prompts", ie, the text, often containing instructions, provided to a model in a single turn of conversation (= Instructions)
B) Defining "chains", ie, a sequence of steps (or a DAG) to achieve some bigger task (= Task flow)
There is also a C), where your single-message "prompt" gets updated and mutated as it's passed around, but that can be achieved via B).

I think your example recipe would belong to B), which is focused on how to execute multiple LLM calls to achieve some task? Or were you looking to build just the instruction set for single-turn extraction?

All my thoughts below are driven by my view of LLMs as a way to buy back time and augment my intelligence, ie, if it takes too long to learn or too long to define, it's not worth it (and you should use a chat interface or do it "the old way").

Re A) Recipes for defining instructions
I don't like writing all this "prose"; prompts feel easy to iterate and evolve with a single model/provider, but they never work as well for other models/providers.

The challenges I see:

  • Whatever we predefine will work only for some tasks somewhere
  • It's outdated with each new model release and fine-tune
  • There are still no proper rules for composing prompts in a principled way
  • You get the best results by writing a specific prompt for each template (but it might not be "worth it")

So I prefer having a bunch of "sub-optimal" but fully baked prompts where I just replace a placeholder.
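For reference, this is how the current templates work: a fully written-out prompt with a placeholder you fill at call time (:JuliaExpertAsk is one of the built-in templates):

using PromptingTools

# fill the `ask` placeholder of the built-in :JuliaExpertAsk template
msg = aigenerate(:JuliaExpertAsk; ask = "How do I add a package?")
msg.content  # the model's answer as a String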

Would you mind sharing some use cases / tasks where you could see value in defining prompt recipes / principled composition?

Btw. if you're interested in this topic, I liked this survey: https://arxiv.org/pdf/2312.16171.pdf and I'm watching DSPy to see if we can be more declarative in how we use LLMs (but again, it fails my "practicality" test.)

Re B) Defining Task Flow
I wanted to define a DSL for Agentic behavior for the longest time, but I struggle with:

  • unclear use cases (what are people doing that is actually valuable)
  • no dominant patterns yet

So it feels like it would be too complicated and outdated before I finish it. What I opted for instead is to define "lazy" primitives, like AIGenerate instead of aigenerate. They should allow people to build a DAG of LLM operations in base Julia (ie, no need to learn a DSL).

Example:

mapreduce(x -> AIGenerate(:MyTemplate; data=x, task="..").content, join, my_dataset)

Laziness allows you to access kwargs (models, parameters, variables) and to share and mutate them as needed.

Sequential chains using simple pipes:

output = AIGenerate(...) |> AICodeFixer(...) |> AIExtract(...) |> AI... |> run!

Using @chain with the current functions (not lazy):

# define MyType for the data structure you need
@chain begin
    aigenerate(:MyTemplate; data="..", task="..")
    _.content
    aiextract(:DetailedExpert; return_type = MyType)
    _.content
end

As always, you can add sub-chains with for-loops, if-else-end, and other nested logic (quick example below). Basically, we have everything we need.
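For instance, a sketch of a looped sub-chain using the eager calls (my_documents and the template placeholders are stand-ins):

results = String[]
for doc in my_documents  # my_documents is a stand-in for your data
    msg = aigenerate(:MyTemplate; data = doc, task = "Summarize")
    push!(results, msg.content)
end
# combine the partial results in one final call
final = aigenerate("Combine these summaries:\n" * join(results, "\n"))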

Potentially, we could adjust @chain into an @aichain that would just rewrite the eager AI calls to be lazy:

# define MyType for the data structure you need
@aichain begin
    aigenerate(:MyTemplate; data="..", task="..") #--> translate to AIGenerate(...)
    _.content              # --> translate to LazyFunction(...)
    aiextract(:DetailedExpert; return_type = MyType) #--> translate to AIExtract(...)
    _.content               # ditto
end

Would that cover some of your use cases or do you have some specific ones that would require a different DSL?
I think until we have killer use cases and patterns for Agents (like RAG etc), it will be hard to define a perfect DSL for it. But I'm happy to be proven wrong :)

PS: I might have misunderstood, but your recipe could (mostly) be built already with:

# define my_text
# define MyData struct

msg = aiextract(my_text; return_type = PT.ItemsExtract{MyData}) # msg.content.items is a Vector{MyData}
reduce(..., msg.content.items) # whatever reduction is needed

EDIT: I think I misunderstood your DSL proposal. You probably intended to generate a prompt asking the model to return a JSON spec of that type. What benefits do you expect from a DSL versus an out-of-the-box function that takes a type and returns a signature to add to a prompt (as currently implemented)?
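Roughly, that out-of-the-box path boils down to something like this hypothetical sketch (the real implementation emits a proper JSON function-calling schema, omitted here):

# schema_hint is hypothetical, NOT the PromptingTools API
schema_hint(::Type{T}) where {T} =
    "Return JSON with these fields: " *
    join(string.(fieldnames(T), "::", fieldtypes(T)), ", ")

struct MyData
    key1::String
    key2::Bool
end

schema_hint(MyData)  # "Return JSON with these fields: key1::String, key2::Bool"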