nomic-ai/pygpt4all

Calling `_pyllamacpp.llama_eval` raises TypeError

isLouisHsu opened this issue · 1 comment

I'm trying to reimplement the llama_generate function in Python to add stop-token detection (issues #21 and #63).
However, calling pp.llama_eval raises a TypeError:

>>> pp.llama_eval(self.ctx, embd, len(embd), n_past, params.n_threads)
# where `embd` is a list of token ids, e.g., [1, 835, 2799, 4080, 29901, 29871].
TypeError: llama_eval(): incompatible function arguments. The following argument types are supported:
    1. (arg0: _pyllamacpp.llama_context, arg1: int, arg2: int, arg3: int, arg4: int) -> int

Invoked with: <_pyllamacpp.llama_context object at 0x7f295e62b830>, [1, 835, 2799, 4080, 29901, 29871], 6, 0, 4

The error shows that arg1 (the tokens) is expected to be an integer, whereas in the llama_eval_wrapper function the corresponding parameter is a C pointer (const llama_token *).
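
For reference, this is roughly the generation loop I am trying to build, as a minimal sketch assuming a per-token eval/sample loop. `eval_tokens` and `sample_token` are hypothetical placeholders passed in as callables, not actual pyllamacpp functions:

def generate(ctx, prompt_tokens, stop_tokens, max_new_tokens,
             eval_tokens, sample_token):
    """Generate tokens until a stop token or the length limit is hit."""
    n_past = 0
    # Feed the prompt through the model first.
    eval_tokens(ctx, prompt_tokens, len(prompt_tokens), n_past)
    n_past += len(prompt_tokens)

    generated = []
    for _ in range(max_new_tokens):
        token = sample_token(ctx)
        if token in stop_tokens:  # stop-token detection
            break
        generated.append(token)
        # Feed the sampled token back so the model sees it on the next step.
        eval_tokens(ctx, [token], 1, n_past)
        n_past += 1
    return generated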

Hi @isLouisHsu,

Thanks for giving this a try, but it is already implemented.
The original llama_eval takes a C pointer; llama_eval_wrapper accepts a Python list or array and converts it to a C pointer internally.
Please check how I did it here.
Thanks again.
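
For anyone reading along, the list-to-pointer conversion the wrapper performs can be illustrated in plain Python with ctypes. This is only an illustration of the concept, not the project's actual code; the binding does the conversion on the C++ side:

import ctypes

tokens = [1, 835, 2799, 4080, 29901, 29871]
llama_token = ctypes.c_int32  # llama.cpp defines llama_token as a 32-bit int
arr = (llama_token * len(tokens))(*tokens)  # contiguous C array holding the tokens
ptr = ctypes.cast(arr, ctypes.POINTER(llama_token))  # usable as const llama_token *
# `ptr` and len(tokens) are what the C-level llama_eval ultimately receives.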