ikamensh opened this issue 2 years ago · 0 comments
Hi, I'm interested in trying out the llama models. I'm using a MacBook with an AMD GPU, so the easiest option would probably be to run on the CPU. It would be nice if the README stated whether CPU-only inference is possible. Thanks!