Issues
AttributeError: 'LlamaForCausalLM' object has no attribute 'transformer' when using deepseek
#154 opened by g-w1 - 4
TypeError when using CogVLM and pass scan=False
#153 opened by Zoeyyao27 - 1
[Bug] Grad Setting Malfunctioning
#141 opened by AdamBelfki3 - 0
Separate logging for remote execution
#127 opened by JadenFiotto-Kaufman - 1
Including model config in initialization call for LanguageModel causes errors when calling model.trace
#115 opened by arunasank - 0
[Feature Request] Make token indexing work for batched input and UnifiedTransformer
#138 opened by Butanium - 0
NNsight should fail when an Unset proxy from a previous trace / future computation is used in a patching experiment
#143 opened by Butanium - 0
Empty body of tracing context blows up
#142 opened by arjunguha - 5
Make tensor creation functions like torch.zeros traceable via nnsight
#140 opened by JadenFiotto-Kaufman - 2
Expand pytest tests
#133 opened by JadenFiotto-Kaufman - 0
Create tutorial for remote execution
#132 opened by JadenFiotto-Kaufman - 0
Create tutorial for future lens
#130 opened by JadenFiotto-Kaufman - 0
Create tutorial for extending nnsight
#129 opened by JadenFiotto-Kaufman - 0
Create tutorial for "performance tips"
#128 opened by JadenFiotto-Kaufman - 1
Dispatch Error When Using Quantisation
#106 opened by Xmaster6y - 0
Cannot use cache if I don't use .value
#118 opened by Butanium - 2
Cannot load transformer lens model on CPU
#121 opened by Butanium - 2
`model.named_modules` returns PyTorch modules not NNSight wrapped modules
#114 opened by neelnanda-io - 2
Is it possible to apply layer modules out-of-order?
#109 opened by arjunguha - 1
Trouble loading GPTBigCode models
#48 opened by arjunguha - 6
Trouble using the Llama2 models
#92 opened by MattBortoletto - 3
Memory leak after trace?
#102 opened by tvhong - 1
Can't run llama architectures
#83 opened by Butanium - 1
Add links to source code in documentation
#90 opened by ericwtodd - 4
NNsight bug with quantized models
#86 opened by HuFY-dev - 0
nnsight should raise an exception when remote=True and an error happens on the server
#73 opened by Justabitwearden - 0
Allow retrieval of the intervention graph from the tracer and the ability to set it in forward/generate
#10 opened by JadenFiotto-Kaufman - 0
When scanning using 'meta' tensors, handle torch calls to allocate new data.
#40 opened by JadenFiotto-Kaufman - 0
Check if entering a pre-created tokenizer output dict into invoke works
#3 opened by JadenFiotto-Kaufman - 2
Potential Feature Request - Simple Inference
#68 opened by ericwtodd - 0
Allow using models that rely on keyword arguments and not positional parameters, like Mistral
#29 opened by arunasank - 0
Throw an error when accessing a proxy's value or a runner's output if not set
#11 opened by JadenFiotto-Kaufman - 0
Prevent using a custom module from momentarily doubling memory requirements
#6 opened by JadenFiotto-Kaufman - 0
Proxy torch functions: the proxy argument is not necessarily the first argument; loop through the arguments to find the proxy
#4 opened by cadentj - 0
Allow using pre-created models in initialization as opposed to a HF id
#2 opened by JadenFiotto-Kaufman