liyucheng09/Selective_Context

Compression produces the opposite of the original prompt

FabricioTeran opened this issue · 1 comment

I've tried this prompt:
"Make me a poem by Edgar Allan Poe that talks about the book "The Murders in the Rue Morgue" and adapts it as if it were made in 2023, that is, with modern characters and not ancient ones. I would also like you to use a Colombian vocabulary to express the phrases, that the place is set in the Canary Islands, the text should not be more than 300 characters long and there should not be any words with intonation in the last syllable."

The resulting compressed text (token level, 0.5 ratio, English):
"Make me poem by Edgar that talks book " Murd in Rue adapt if made 2023 that with modern characters not ancientI would also you use Colombian vocabulary express phrases, that the place set Canary the text should more 300 there words with int in last syll"

The resulting text indicates that the output should be more than 300 (it doesn't specify characters) and that there are words with int (intonation) on the last syll (syllable). That is the opposite of what I specified in the original prompt. I think it is dropping the "not be" after the word "should".

I'm using the "May 6 2023" demo on Hugging Face.
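
For reference, here is roughly how I ran it, reproduced with the repo's pip package instead of the demo. This is a sketch: the `SelectiveContext` constructor and `reduce_ratio` follow the README, and `reduce_level='token'` is my assumption for token-level compression.

```python
from selective_context import SelectiveContext

# The full prompt quoted above, abbreviated here for readability.
prompt = (
    'Make me a poem by Edgar Allan Poe that talks about the book '
    '"The Murders in the Rue Morgue" ... the text should not be more '
    'than 300 characters long and there should not be any words with '
    'intonation in the last syllable.'
)

# GPT-2 is the README's default base model; lang='en' matches the demo.
sc = SelectiveContext(model_type='gpt2', lang='en')

# Token-level compression at a 0.5 ratio, as in the demo run.
compressed, reduced_content = sc(prompt, reduce_ratio=0.5,
                                 reduce_level='token')
print(compressed)  # negations such as "not be" can be dropped here
```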

I see. This can happen because we're using a smaller model to build the selective context.

I would suggest using a larger model, or excluding detailed instructions from the context you compress, to avoid losing critical instructional information.
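
For example, something along these lines, where only the bulky reference material goes through compression and the instruction block is kept verbatim. This is an illustrative sketch: the split into `instructions` and `reference_material` is hypothetical, and the API follows the README.

```python
from selective_context import SelectiveContext

# Keep hard constraints out of the compressed span, so tokens like
# "should not be" are never candidates for deletion.
instructions = (
    'The text should not be more than 300 characters long, and there '
    'should not be any words with intonation in the last syllable.'
)

# Placeholder for long background text that is safe to compress.
reference_material = '...long background or reference text...'

sc = SelectiveContext(model_type='gpt2', lang='en')
compressed_reference, _ = sc(reference_material, reduce_ratio=0.5)

# Recombine: verbatim instructions + compressed context.
final_prompt = instructions + '\n\n' + compressed_reference
```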