Issues
Installation Problem
#124 opened by jahbini - 4
Question about Mixtral MLP section
#139 opened by lhallee - 2
PAD token missing?
#150 opened by omkar-12bits - 0
mistral-demo $M7B_DIR issue
#160 opened by chaima-bd - 2
License
#156 opened by fakerybakery - 1
macOS pip install fails
#157 opened by edmondja - 2
!mistral-demo $7B_DIR issue
#154 opened by shaimaa0000 - 0
Number of training tokens?
#151 opened by wgwang - 6
How to use a prompt for text analysis?
#145 opened by rsoika - 0
JSON response format failing to retrieve clean JSON
#146 opened by serferdinand2 - 1
Did you run some cuda functions before calling NumCudaDevices() that might have already set an error? Error 804: forward compatibility was attempted on non supported HW
#144 opened by guidoveritone - 4
I am unable to build the vLLM Container
#142 opened by AMGI-Pipeline - 2
Missing the params.json
#140 opened by littlewwwhite - 1
Training code
#138 opened by sartimo - 0
Not completing answer
#143 opened by KEYURBODAR - 1
Fine Tuning Mistral 7b
#141 opened by nourolive - 1
Gate is a linear layer?!
#112 opened by Eran-BA - 2
[MISTRAL AI ERROR] Mistral AI responding with Unexpected role RoleEnum.tool error
#135 opened by muhammadfaizan027915 - 0
Mistral's tokenizer is not optimized
#134 opened by Yarflam - 0
Evaluation Pipeline
#133 opened by nikhil0360 - 0
Friendly Reminder while Generating the output
#132 opened by BadrinathMJ - 3
[Mistral 7B mistral-7b-instruct-v0.1.Q8_0.gguf] Wrong text "quoted" while presented as real
#131 opened by SINAPSA-IC - 0
Is the "evaluation pipeline" public?
#130 opened by kijlk - 0
Mistral 7B v0.1 does not support optimum BetterTransformers for better and optimized Inference
#128 opened by KaifAhmad1 - 0
(question) moe for conversations
#125 opened by Tom-Neverwinter - 4
vLLM Build Issue using the provided Dockerfile
#99 opened by Good-Coffee - 1
Parameter for returning `logprobs`
#108 opened by StatsGary - 0
Error while running tutorial: TypeError: 'mmap' is an invalid keyword argument for Unpickler()
#119 opened by aurotripathy - 0
BUG: API /completion endpoint returns 500 (server error) when sending "max_token" = 1
#122 opened by MrXavier - 0
Is this architecture the same as the Mixtral-8x7B model?
#121 opened by HuangJi1019 - 0
Mixtral sliding window
#118 opened by tuyaao - 2
Cannot download latest image
#117 opened by louispaulet - 0
Support for Python code generation
#116 opened by kavyanshpandey - 0
Which model to use for "What's the root of 256256?"
#109 opened by dcasota - 0
[Feature request] rope_scaling support
#115 opened by Xingxiangrui - 0
Local embeddings model usage
#111 opened by frankiedrake - 0
Non-Latin language support?
#107 opened by ican24 - 0
What is the best way to run inference with LoRA in the PEFT approach?
#103 opened by pradeepdev-1995 - 0
Mistral input context length limitation
#102 opened by DanYoto - 0