promptbench.utils.Visualizer: 'LLMModel' object has no attribute 'infer_model'
Hi~
When I was using the visualization feature, I encountered the error "'LLMModel' object has no attribute 'infer_model'".
Here are my code and the resulting error. Could you take a look? Thanks a lot!
import promptbench as pb
model = pb.LLMModel(model="XXX", max_new_tokens=10, temperature=0.0001, device="auto", dtype="auto", system_prompt=None, model_dir="XXX")
visualizer = pb.utils.Visualizer(model)
print(visualizer.vis_by_delete(input_sentence="XXX", label="XXX"))
Traceback (most recent call last):
File "promptbench/visualize.py", line 11, in <module>
visualizer = pb.utils.Visualizer(model)
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "promptbench/promptbench/utils/visualize.py", line 23, in __init__
self.model = model.infer_model.pipe
^^^^^^^^^^^^^^^^^
AttributeError: 'LLMModel' object has no attribute 'infer_model'
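A quick way to check which interface your installed version actually exposes (a minimal sketch using only Python built-ins and the constructor call from the maintainers' example below; substitute your own model name):

import promptbench as pb

# Same constructor as in the example further down this thread.
model = pb.LLMModel(model='google/flan-t5-large', max_new_tokens=10, temperature=0.0001, device='cuda')

# List the public attributes the installed LLMModel exposes.
print([name for name in dir(model) if not name.startswith('_')])

# False here reproduces the mismatch that triggers the AttributeError above.
print(hasattr(model, 'infer_model'))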
Hi, thank you very much for bringing this issue to our attention! This error occurred because we changed the interface of the LLMModel. It has now been fixed!
import promptbench as pb
from promptbench.utils import Visualizer
model = pb.LLMModel(model='google/flan-t5-large', max_new_tokens=10, temperature=0.0001, device='cuda')
vis = Visualizer(model)
print(vis.vis_by_grad("Please classify the emotion of this sentence as 'positive' or 'negative': I am happy today", "positive"))
The results should look like:
{'Please': 0.010260154276145421, 'classify': 0.19490253686986278, 'the': 0.0, 'emotion': 0.21427207700641568, 'of': 0.02636853801036676, 'this': 0.01337809742519713, 'sentence': 0.0215499287503591, 'as': 0.011023580069135133, "'positive'": 0.6133279867517616, 'or': 0.07718759684547498, "'negative':": 1.0, 'I': 0.0860220425865312, 'am': 0.014254274851894122, 'happy': 0.1972173406531296, 'today': 0.0886429463693424}
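The deletion-based view works the same way on this prompt; a minimal sketch reusing the vis object above, with the argument names (input_sentence, label) taken from the call in the original report:

# Word importance estimated by deleting tokens instead of taking gradients.
print(vis.vis_by_delete(
    input_sentence="Please classify the emotion of this sentence as 'positive' or 'negative': I am happy today",
    label="positive",
))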
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu! (when checking argument for argument index in method wrapper_CUDA__index_select)
Why does this happen?
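A minimal workaround sketch while the device handling is sorted out, assuming the mismatch comes from gradient inputs being created on the CPU while the model sits on cuda:0; keeping everything on one device sidesteps it (device='cpu' is an assumption here, since the thread only shows 'auto' and 'cuda'):

import promptbench as pb
from promptbench.utils import Visualizer

# Same example as above, but forcing a single device so no tensor ends up
# on cuda:0 while another stays on the CPU; slower, but avoids the RuntimeError.
model = pb.LLMModel(model='google/flan-t5-large', max_new_tokens=10, temperature=0.0001, device='cpu')
vis = Visualizer(model)
print(vis.vis_by_grad("Please classify the emotion of this sentence as 'positive' or 'negative': I am happy today", "positive"))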
Hi, it seems this has not been fixed. I installed it via pip install ..., but I still get the same error.
Hi! Could you try cloning the repository directly with git instead of installing via pip? This may be because we haven't updated the PyPI version yet. Thanks!
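For reference, installing from source typically looks like the commands below (a sketch; the repository URL is assumed to be the official microsoft/promptbench repo, and -e gives an editable install so the cloned code is what gets imported):

git clone https://github.com/microsoft/promptbench.git
cd promptbench
pip install -e .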
Thanks for your reply, that clears up my confusion!