Issues
[BUG: Mamba-Codestral-7B-v0.1 Internal Triton PTX codegen error: Ptx assembly aborted due to errors
#213 opened by andretisch - 9
[BUG: AssertionError: Mamba is not installed. Please install it using `pip install mamba-ssm`.
#192 opened by matbee-eth - 4
[BUG: pip install mistral_inference: ModuleNotFoundError: No module named 'torch'
#228 opened by chrisstankevitz - 0
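Issues #228 and #192 both look like build-time import failures: a dependency's setup imports torch (or compiles against it) before torch is installed. A common workaround (an assumption on our part, not a fix confirmed in these threads) is to install torch first and to disable build isolation for packages such as mamba-ssm that compile CUDA extensions against the already-installed torch:

```shell
# Hypothetical workaround for "ModuleNotFoundError: No module named 'torch'"
# during pip install: install torch before the packages that need it at
# build time.
pip install torch
pip install mistral_inference

# mamba-ssm builds CUDA extensions against the installed torch, so it must
# come last, with build isolation disabled so it can see that torch:
pip install mamba-ssm --no-build-isolation
```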
[BUG: Inference with the fine-tuned Mistral-7B-v0.1 stalls and becomes very slow when it encounters the backslash escape character '\', but resumes generating after a few minutes
#229 opened by Essence9999 - 0
Pixtral-12B tokenizer error - special_token_policy=IGNORE does not ignore special tokens in decoding
#227 opened by OmriKaduri - 0
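Issues #227 and #162 both report special tokens leaking into decoded text. As a client-side guard, one can filter the known special-token ids before decoding; the helper below is a hypothetical sketch (the id values and the `special_ids` set are illustrative, not the tokenizer's actual ones):

```python
def strip_special(token_ids, special_ids):
    """Drop special-token ids (BOS/EOS/control tokens) before decoding.

    Hypothetical helper: in practice `special_ids` would come from the
    tokenizer's own list of control-token ids.
    """
    return [t for t in token_ids if t not in special_ids]


# Example: ids 1 and 2 stand in for BOS and EOS.
filtered = strip_special([1, 50, 51, 2], {1, 2})
# filtered == [50, 51]; pass the filtered list to the tokenizer's decode.
```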
[BUG: RuntimeError: Boolean value of Tensor with more than one value is ambiguous]
#225 opened by siwer - 0
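The error in #225 is raised whenever a multi-element tensor is used in a boolean context (e.g. `if tensor:`). NumPy arrays fail the same way, which makes for a dependency-light illustration; the fix is the same in torch: state the reduction explicitly with `.any()`, `.all()`, or `.item()`. This is an illustrative sketch, not the reporter's code:

```python
import numpy as np

mask = np.array([True, False, True])

# Using the array directly in a boolean context is ambiguous:
# does "if mask" mean any element, or all elements?
try:
    if mask:
        pass
except ValueError:
    print("ambiguous truth value")  # raised, just like the torch error

# Fix: make the reduction explicit.
print(bool(mask.any()))  # True  (at least one element is True)
print(bool(mask.all()))  # False (not every element is True)
```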
[BUG: Cannot build on Mac M1 Silicon
#224 opened by timspannzilliz - 0
[BUG: AttributeError: module 'torch.library' has no attribute 'custom_op'
#222 opened by mruhlmannGit - 0
[BUG: RuntimeError: Couldn't instantiate class <class 'mistral_inference.args.TransformerArgs'> using init args dict_keys(['dim', 'n_layers', 'vocab_size', 'model_type'])
#221 opened by NM5035 - 0
[BUG: TypeError: generate_mamba() takes 2 positional arguments but 3 positional arguments were given
#220 opened by NM5035 - 1
Suggested improvement of eos logic in generate.py
#180 opened by vvatter - 3
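For context on #180, a minimal sketch of per-sequence EOS handling in a decode loop (pure Python, not the repo's `generate.py`; `step_fn` is a hypothetical stand-in for the model's forward pass):

```python
def generate(step_fn, prompt, eos_id, max_tokens):
    """Greedy decode that stops as soon as eos_id is produced.

    step_fn(tokens) -> next token id; stands in for a model forward pass.
    The EOS token itself is not appended to the returned sequence.
    """
    tokens = list(prompt)
    for _ in range(max_tokens):
        nxt = step_fn(tokens)
        if nxt == eos_id:
            break
        tokens.append(nxt)
    return tokens


# Toy step function: emits 10, 11, then EOS (0).
script = iter([10, 11, 0])
out = generate(lambda toks: next(script), [5], eos_id=0, max_tokens=8)
# out == [5, 10, 11]
```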
JSON response format failing to retrieve clean JSON
#146 opened by serferdinand2 - 9
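Issue #146 reports JSON mode wrapping the JSON in extra text. A defensive client-side parse (an assumption, not a fix from the thread) extracts the outermost `{...}` span before handing it to `json.loads`:

```python
import json


def extract_json(text):
    """Best-effort parse of the outermost {...} span in a model reply.

    Client-side guard for replies that wrap JSON in prose or markdown
    fences; returns None if nothing parses.
    """
    start = text.find("{")
    end = text.rfind("}")
    if start == -1 or end <= start:
        return None
    try:
        return json.loads(text[start:end + 1])
    except json.JSONDecodeError:
        return None


reply = 'Sure! Here is the data:\n```json\n{"name": "mistral", "ok": true}\n```'
print(extract_json(reply))  # {'name': 'mistral', 'ok': True}
```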
[BUG: Could not find consolidated.00.pth or consolidated.safetensors in Mistral model path, but mistralai/Mistral-Large-Instruct-2407 certainly does not contain them
#205 opened by ShadowTeamCN - 6
[BUG: ModuleNotFoundError: No module named 'mistral_inference.transformer'
#202 opened by yafangwang9 - 0
[Feat] Add streaming support to Codestral Mamba
#212 opened by xNul - 0
[BUG: rate limit exceeded on basic examples
#210 opened by AlbertoMQ - 9
[BUG: ImportError: cannot import name 'Transformer' from 'mistral_inference.model' (/usr/local/lib/python3.10/dist-packages/mistral_inference/model.py)
#206 opened by rabeeqasem - 2
Did you run some cuda functions before calling NumCudaDevices() that might have already set an error? Error 804: forward compatibility was attempted on non supported HW
#144 opened by guidoveritone - 3
[BUG: ModuleNotFoundError: No module named 'triton']
#200 opened by MaxAkbar - 0
[BUG: mistralai/mamba-codestral-7B-v0.1 AttributeError: 'Mamba2' object has no attribute 'dconv'
#196 opened by s-natsubori - 2
Tokenizer skips the special tokens while decoding
#162 opened by anandsarth - 3
Missing the params.json
#140 opened by littlewwwhite - 3
!mistral-demo $7B_DIR issue
#154 opened by shaimaa0000 - 1
mistral-demo $M7B_DIR issue
#160 opened by chaima-bd - 1
Question about Mixtral MLP section
#139 opened by lhallee - 0
Speed up inference?
#169 opened by xxyp - 1
Using base model on GPU with no bfloat16
#163 opened by yichen0104 - 2
PAD token missing?
#150 opened by omkar-12bits - 2
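On #150: when a tokenizer defines no PAD token, a common convention (an assumption here, not guidance from the thread) is to reuse the EOS id for padding, provided the attention mask zeroes those positions. A minimal pure-Python sketch of that batching logic:

```python
def pad_batch(seqs, pad_id):
    """Right-pad sequences to equal length, returning (padded, mask).

    pad_id can be the EOS id when no dedicated PAD token exists; the
    attention mask marks real tokens with 1 and padding with 0, so the
    model ignores the padded positions.
    """
    width = max(len(s) for s in seqs)
    padded = [s + [pad_id] * (width - len(s)) for s in seqs]
    mask = [[1] * len(s) + [0] * (width - len(s)) for s in seqs]
    return padded, mask


# Example: EOS id 2 doubles as the pad id.
batch, mask = pad_batch([[5, 6, 7], [8]], pad_id=2)
# batch == [[5, 6, 7], [8, 2, 2]]; mask == [[1, 1, 1], [1, 0, 0]]
```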
License
#156 opened by fakerybakery - 1
OS X pip install failure
#157 opened by edmondja - 0
Number of training tokens?
#151 opened by wgwang - 6
How to use a prompt for text analysis?
#145 opened by rsoika - 0
I am unable to build the vLLM Container
#142 opened by AMGI-Pipeline - 1
Training code
#138 opened by sartimo - 0
Not completing answer
#143 opened by KEYURBODAR - 3
Fine Tuning Mistral 7b
#141 opened by nourolive - 1
[MISTRAL AI ERROR] Mistral AI responding with Unexpected role RoleEnum.tool error
#135 opened by muhammadfaizan027915 - 0
Mistral's tokenizer is not optimized
#134 opened by Yarflam - 0
Evaluation Pipeline
#133 opened by nikhil0360 - 0
Friendly Reminder while Generating the output
#132 opened by BadrinathMJ - 3
[Mistral 7B mistral-7b-instruct-v0.1.Q8_0.gguf] Wrong text "quoted" while presented as real
#131 opened by SINAPSA-IC - 0
"evaluation pipeline" public?
#130 opened by kijlk