Issues
The instructions to install Llama3 are horrible
#1131 opened by Eyesun23 - 9
Your request to access this repo has been rejected by the repo's authors.
#1168 opened by LexusShabunya - 3
Your request to access this repo has been rejected by the repo's authors.
#1136 opened by wingchi-leung - 0
`403` Error from Presigned URL
#1183 opened by BlaiseMuhirwa - 1
how to use few-shot?
#1178 opened by xingchen0120 - 2
Unable to see model id
#1182 opened by ansariahmad - 0
Suggestions
#1181 opened by Brianfabero - 0
how to use few-shot?
#1179 opened by xingchen0120 - 0
how to use few-shot?
#1180 opened by xingchen0120 - 0
Current Llama
#1176 opened by LaNochePerfecta - 0
My Apple ID and password were hacked
#1175 opened by AYUSHSURENDRAN - 0
Gradio web ui for llama-3.2-vision model
#1174 opened by spacewalk01 - 0
Ollama: 500, message='Internal Server Error', url='http://host.docker.internal:11434/api/chat'
#1171 opened by zhai-hello - 3
Llama 3.1: The output text is truncated
#1153 opened by Gumichocopengin8 - 1
Education in the Middle Ages (Scholasticism)
#1166 opened by Oli-1010 - 2
Remove person from background
#1161 opened by Swaran-Samantaray - 0
Scholasticism
#1167 opened by Oli-1010 - 0
Unable to download meta-llama-3.1-8b-instruct
#1165 opened by Raf-sns - 0
Request on Hugging Face
#1164 opened by shaieesss - 0
Colons in filenames when using llama download make them incompatible with Windows.
#1159 opened by Crazy-Pyro - 1
Meta-Llama-3.1-70B-Instruct does not appear to have a file named config.json
#1158 opened by jcruzer2012 - 1
Getting 400 error on https://llama3-1.llamameta.net/Meta-Llama-3.1-405B-MP8/consolidated.00.pth
#1146 opened by jwatte - 0
Meta AI appends duplicated messages after the answer when questions have context
#1156 opened by summerkiss - 2
A problem with tokenizer.model from HuggingFace
#1151 opened by vintoniuk - 2
Download.sh is throwing a 403 Forbidden error when using a freshly generated URL/token
#1145 opened by eladkolet - 0
Do you open-source your internal evaluations library?
#1155 opened by KeyKy - 1
How to run Meta-Llama-3.1-70B-Instruct on the MATH test
#1148 opened by huaxiaohua - 0
Access to checkpoints
#1154 opened by shruum - 0
HTTP request sent, awaiting response... 403 Forbidden 2024-06-26 11:19:31 ERROR 403: Forbidden.
#1133 opened by arhansuba - 0
AnT1nG-Meta-llana
#1149 opened by ReyBan82 - 2
Download.sh does nothing
#1142 opened by ARandomMammoth - 0
No special tokens added in tokenizer
#1144 opened by manoja328 - 2
"$CPU_ARCH" not found
#1140 opened by hyungupark - 1
Unable to download LLAMA models from https://llama.meta.com/llama-downloads
#1141 opened by amaanaijazsheikh - 3
Not getting access to Llama2 and Llama3
#1134 opened by ahmedivy - 1
Research dedicated license?
#1137 opened by protossw512 - 0
Torch Error
#1132 opened by Jufyer - 0
How to infer answers using llama2-7b-hf?
#1130 opened by Jerry-hyl - 0
Oddities downloading the 8b-instruct model
#1129 opened by ppbrown - 0
LLaMA3 supports an 8K token context length. When continuously pretraining with proprietary data, the majority of the text data is significantly shorter than 8K tokens, resulting in a substantial amount of padding. To enhance training efficiency and effectiveness, it is necessary to merge multiple short texts into a longer text, with the length remaining below 8K tokens. However, the question arises: how should these short texts be combined into a single training sequence? Should they be separated by delimiters, or should an approach involving masking be used during the pretraining process?
#1128 opened by Karliz24 - 0
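One common answer to the packing question in #1128 above is to concatenate tokenized documents, separated by the EOS token, into sequences of at most 8K tokens; preventing cross-document attention with a block-diagonal mask is an optional refinement. The sketch below is illustrative only: the names (`pack_documents`, `docs`, `eos_id`) are assumptions for this example, not part of the Llama codebase.

```python
# Illustrative sketch (not Meta's implementation): greedily pack tokenized
# documents into training sequences of at most max_len tokens, inserting the
# EOS token as a document boundary.
from typing import Iterable, List

def pack_documents(docs: Iterable[List[int]], eos_id: int,
                   max_len: int = 8192) -> List[List[int]]:
    """Pack tokenized documents into sequences no longer than max_len.

    Each document is followed by EOS so the model sees an explicit boundary.
    An alternative or additional step is to build a block-diagonal attention
    mask so tokens cannot attend across document boundaries.
    """
    packed, current = [], []
    for doc in docs:
        piece = doc + [eos_id]
        if len(piece) > max_len:              # truncate overlong documents
            piece = piece[:max_len]
        if len(current) + len(piece) > max_len:
            packed.append(current)            # start a new packed sequence
            current = []
        current.extend(piece)
    if current:
        packed.append(current)
    return packed
```

Whether to add the cross-document attention mask is a trade-off: EOS separators alone are simpler and widely used, while masking avoids any attention leakage between unrelated documents.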
[Parallel MD5] Accelerating `download.sh`
#1127 opened by DEKHTIARJonathan - 0
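For context on #1127, the idea is to verify the MD5 checksums of downloaded shards concurrently instead of one at a time. The sketch below is an assumption-laden illustration, not the actual change: on most versions the shipped `download.sh` verifies files with `md5sum` against a `checklist.chk`, and the `<md5>  <filename>` line format assumed here follows that convention.

```python
# Illustrative sketch: compute MD5 sums of the files listed in a checklist
# in parallel worker processes and compare them against the expected values.
import hashlib
from concurrent.futures import ProcessPoolExecutor
from pathlib import Path

def md5_of(path: Path, chunk_size: int = 1 << 20) -> str:
    """Stream a file through MD5 in 1 MiB chunks."""
    h = hashlib.md5()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_parallel(checklist: Path) -> bool:
    """Check every '<md5>  <filename>' entry in the checklist concurrently."""
    entries = [line.split() for line in checklist.read_text().splitlines() if line.strip()]
    paths = [checklist.parent / name for _, name in entries]
    with ProcessPoolExecutor() as pool:
        digests = list(pool.map(md5_of, paths))
    return all(d == expected for d, (expected, _) in zip(digests, entries))
```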
Unable to access the Hugging Face Llama-3 model repo
#1126 opened by Dounx - 0