nomic-ai/pygpt4all

model.py throwing an exception: 'NoneType' object is not callable

AlbelTec opened this issue · 3 comments

Here is the exception I get after receiving the result:

Exception ignored in: <function Model.__del__ at 0x0000018BFFE09310>
Traceback (most recent call last):
  File "C:\ProgramData\Anaconda3\envs\openai\lib\site-packages\pyllamacpp\model.py", line 138, in __del__
TypeError: 'NoneType' object is not callable

Here is the code, based on langchain:

from langchain import LLMChain
from langchain.llms import GPT4All

llm = GPT4All(model="models/gpt4all-converted.bin", n_ctx=1024)
llm_chain = LLMChain(prompt=prompt, llm=llm)
inputs = {"question": question, "references": references_text}
result = llm_chain.run(inputs)

Any insight?

Hi @AlbelTec,

I think you might be using an old version of the package. Could you please check?

Hi @abdeladim-s, I checked and my version is v1.0.6, which is the latest.

The complete output:

llama_model_load: loading model from 'models/gpt4all-converted.bin' - please wait ...
llama_model_load: n_vocab = 32001
llama_model_load: n_ctx   = 512
llama_model_load: n_embd  = 4096
llama_model_load: n_mult  = 256
llama_model_load: n_head  = 32
llama_model_load: n_layer = 32
llama_model_load: n_rot   = 128
llama_model_load: f16     = 2
llama_model_load: n_ff    = 11008
llama_model_load: n_parts = 1
llama_model_load: type    = 1
llama_model_load: ggml map size = 4017.70 MB
llama_model_load: ggml ctx size =  81.25 KB
llama_model_load: mem required  = 5809.78 MB (+ 2052.00 MB per state)
llama_model_load: loading tensors from 'models/gpt4all-converted.bin'
llama_model_load: model size =  4017.27 MB / num tensors = 291
llama_init_from_file: kv self size  =  512.00 MB
llama_generate: seed = 1681117832

system_info: n_threads = 4 / 16 | AVX = 1 | AVX2 = 1 | AVX512 = 0 | FMA = 1 | NEON = 0 | ARM_FMA = 0 | F16C = 1 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 0 | SSE3 = 1 | VSX = 0 |
sampling: temp = 0.800000, top_k = 40, top_p = 0.950000, repeat_last_n = 64, repeat_penalty = 1.300000
generate: n_ctx = 512, n_batch = 1, n_predict = 256, n_keep = 0


 [end of text]

llama_print_timings:        load time = 10452.52 ms
llama_print_timings:      sample time =    34.38 ms /    76 runs   (    0.45 ms per run)
llama_print_timings: prompt eval time =     0.00 ms /     1 tokens (    0.00 ms per token)
llama_print_timings:        eval time = 85664.07 ms /   289 runs   (  296.42 ms per run)
llama_print_timings:       total time = 86282.63 ms
 Question: What Mr Kacsmaryk delayed?
                  You should take into account these references to provide the answer :
[1]: Mr Kacsmaryk’s ruling thus contains the seeds of a sweeping anti-abortion agenda that goes well beyond the Supreme Court’s overturning of Roe v Wade last June. His interpretation of the Comstock Act could inspire a prohibition of all abortion in America, including surgical terminations, because under this reading shipments to clinics or hospitals of any equipment used in abortion would be illegal. Mr Kacsmaryk also dropped another crumb for those pushing a nationwide abortion ban. His opinion contended that “unborn humans extinguished by mifepristone” are entitled to “individual justice”. This concept of “fetal personhood” would grant fetuses the full panoply of constitutional rights, starting with a right to life.

                  Answer:  Mr Kacsmaryk's ruling was delayed until after his confirmation hearing because it contained language that potentially violated ethics rules regarding judges and prosecutors who are serving in multiple roles (such as both conducting legal proceedings against defendants while simultaneously representing the government). Therefore, he could not be confirmed to a lifetime appointment on this basis.
Exception ignored in: <function Model.__del__ at 0x000001E036F4A8B0>
Traceback (most recent call last):
  File "C:\ProgramData\Anaconda3\envs\openai\lib\site-packages\pyllamacpp\model.py", line 138, in __del__
TypeError: 'NoneType' object is not callable

@AlbelTec, that's weird, because I fixed that error in a recent version.
Could you please try a regular venv instead of Anaconda?
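For context, this class of error typically appears when `__del__` runs during interpreter shutdown: Python may have already cleared module-level globals to `None`, so a destructor that calls a module-level cleanup function ends up calling `None(...)`. The sketch below is a hypothetical reconstruction of the failure mode and the usual defensive fix, not the actual pyllamacpp source (`free_context` is a made-up stand-in for the native cleanup call):

```python
def free_context(ctx):
    # stand-in for a C-binding cleanup call (e.g. freeing the llama context);
    # at interpreter shutdown this name may already have been rebound to None
    pass


class Model:
    def __init__(self):
        self._ctx = object()  # stand-in for the native context handle

    def __del__(self):
        # defensive guard: only call the cleanup function if both the
        # module-level function and the handle are still bound
        if free_context is not None and self._ctx is not None:
            free_context(self._ctx)
            self._ctx = None
```

With this guard, calling the destructor (even repeatedly, or late in shutdown) no longer raises `TypeError: 'NoneType' object is not callable`.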