Issues
gradio incompatibility
#105 opened by mlinmg - 2
Error on Mac M2 24GB RAM
#90 opened by OmidH - 13
Can't run starchat: fails with `AttributeError: module 'global_vars' has no attribute 'gen_config'`
#75 opened by nathanielastudillo - 2
Can this be accessed via an OpenAI Compatible API
#102 opened by mjtechguy - 0
How to run on CPU?
#101 opened by ratan - 0
falcon 7b llm
#100 opened by NavyashreeD13 - 3
Internet search is very slow using orca mini on 4bit in google colab T4 on gradio
#98 opened by githubpradeep - 1
PingPong can't be imported
#95 opened by gavi - 1
Error on Mac
#93 opened by AndyBlocker - 0
Data Customization
#92 opened by Agbeli - 0
Does LLM-As-Chatbot need a GPU?
#79 opened by mehrdad2000 - 4
ERROR: No matching distribution found for transformers[sentencepiece]<4.30.0,>=4.26.0
#85 opened by javierxio - 4
Can Ping Pong principle be applied to let 2 LLM Chatbots talk to each other fully automatically?
#81 opened by jbdatascience - 1
Should there be Docker instruction?
#78 opened by Dref360 - 2
Default launch gets stuck at "Loading.."
#70 opened by gavi - 1
Traceback (most recent call last):
  File "F:\gpt\LLM-As-Chatbot-main\menu_app.py", line 7, in <module>
    import global_vars
  File "F:\gpt\LLM-As-Chatbot-main\global_vars.py", line 2, in <module>
    from transformers import GenerationConfig
ImportError: cannot import name 'GenerationConfig' from 'transformers' (D:\Users\Administrator\anaconda3\lib\site-packages\transformers\__init__.py)
#67 opened by 2662007798 - 2
cpu or amd gpu support
#68 opened by gmankab - 1
Use my model on local server
#64 opened by tunglambk - 14
Other LORA models
#60 opened by philwee - 3
Colab Notebook not working (and fix)
#59 opened by philwee - 1
add or not history_response
#38 opened by lelegogo26 - 0
sharing any good conversation sequences
#47 opened by deep-diver - 4
prompt format (input vs instruction).
#31 opened by wassname - 2
[Backlog] Add sparse models to options
#52 opened by claysauruswrecks - 1
Issue with code - TensorRT not found
#51 opened by DocML - 1
Error with bitsandbytes and running at all.
#37 opened by Websteria - 1
How to run it with llama-7b-hf-int4?
#25 opened by ZhUyU1997 - 1
Distillation
#27 opened by maralski - 1
bug in chatbot UI
#32 opened by GeorvityLabs - 0
match model_type: error on latest commit
#46 opened by cesarandreslopez - 3
Any plan to support gptq?
#39 opened by gaoxiao - 17
future usage error
#28 opened by suhwan-kang - 0
Can the code use beam search in Streaming Mode?
#29 opened by Facico - 2
Run offline
#26 opened by ManuXD32 - 1
Default models not working
#21 opened by calz1 - 1
Is the "chansung/alpaca-lora-7b/" model private?
#24 opened by mhmunem