v-bosch's Stars
haotian-liu/LLaVA
[NeurIPS'23 Oral] Visual Instruction Tuning (LLaVA) built towards GPT-4V level capabilities and beyond.
Hannibal046/Awesome-LLM
Awesome-LLM: a curated list of Large Language Model resources
huggingface/peft
🤗 PEFT: State-of-the-art Parameter-Efficient Fine-Tuning.
ivy-llc/ivy
Convert Machine Learning Code Between Frameworks
jalammar/ecco
Explain, analyze, and visualize NLP language models. Ecco creates interactive visualizations directly in Jupyter notebooks explaining the behavior of Transformer-based language models (like GPT-2, BERT, RoBERTa, T5, and T0).
TransformerLensOrg/TransformerLens
A library for mechanistic interpretability of GPT-style language models
google-research/disentanglement_lib
disentanglement_lib is an open-source library for research on learning disentangled representations.
alexmojaki/snoop
A powerful set of Python debugging tools, based on PySnooper
PiotrNawrot/nanoT5
A fast & simple repository for pre-training and fine-tuning T5-style models
moshi4/pyCirclize
Circular visualization in Python (Circos Plot, Chord Diagram, Radar Chart)
greentfrapp/lucent
Lucid library adapted for PyTorch
inseq-team/inseq
Interpretability for sequence generation models 🐛 🔍
srush/Transformer-Puzzles
Puzzles for exploring transformers
cosmicoptima/loom
A Loom implementation in Obsidian
ArthurConmy/Automatic-Circuit-Discovery
soniajoseph/ViT-Prisma
ViT Prisma is a mechanistic interpretability library for Vision Transformers (ViTs).
microsoft/otdd
Optimal Transport Dataset Distance
ahwillia/netrep
Some methods for comparing network representations in deep learning and neuroscience.
epfml/DenseFormer
yizhe-ang/interactive-transformer
A visual interface for understanding and interpreting Transformers
sompolinsky-lab/dnn-object-manifolds
rmldj/hcp-utils
Utilities to use HCP and HCP-like data with nilearn and other Python tools
nicksavarese/allora-ios
An iOS keyboard extension for interacting with LLMs directly from any text input field; the LLM response is inserted into the field. Includes an option to send clipboard contents with the request to help instruct or guide the response.
apiad/lovelaice
An AI-powered assistant for your terminal and editor
yash-srivastava19/arrakis
Arrakis is a library to conduct, track and visualize mechanistic interpretability experiments.
ArthurZucker/RecvisProject
In this project, we study Vision Transformers trained with the Barlow Twins self-supervised method and compare the results with DINO. We demonstrate the effectiveness of Barlow Twins by showing that networks pretrained on the small PASCAL VOC 2012 dataset generalize well. Authors: Apavou Clément & Zucker Arthur
Pelk89/TF_Custom_Training_Callbacks
TensorFlow custom callbacks in a custom training loop
ghostofpokemon/oCaption
oCaption: Leveraging OpenAI's GPT-4 Vision for Advanced Image Captioning
KordingLab/clustering-units-upstream-downstream
Lange, R. D., Rolnick, D. S., and Kording, K. (2022) "Clustering units in neural networks: upstream vs downstream information." TMLR. https://openreview.net/forum?id=Euf7KofunK
KietzmannLab/BLT-pytorch-CCN23
BLT-pytorch repository for our CCN 2023 paper