Pinned Repositories
addons
Useful extra functionality for TensorFlow 2.x maintained by SIG-addons
alphageometry
deeplearning-video-spatiotemp
Deep learning architectures for video and related tasks.
fundamental_algorithms_patterns_python3
lq-backprop
TensorFlow implementation of differentiable LQ matrix decomposition for all matrix orders.
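The LQ decomposition factors a matrix as A = LQ, with L lower triangular and Q having orthonormal rows; it can be obtained from the QR decomposition of the transpose. The following is a minimal NumPy sketch of that trick, not the repo's differentiable TensorFlow implementation:

```python
import numpy as np

def lq_decompose(a):
    """LQ decomposition via QR of the transpose (NumPy sketch).

    For A of shape (m, n) with m <= n: A^T = Q_t R  =>  A = R^T Q_t^T,
    so L = R^T is lower triangular and Q = Q_t^T has orthonormal rows.
    """
    q_t, r = np.linalg.qr(a.T)  # reduced QR of A^T
    return r.T, q_t.T           # (L, Q)

a = np.random.default_rng(0).normal(size=(3, 5))
l, q = lq_decompose(a)
assert np.allclose(l @ q, a)             # reconstruction: A = L Q
assert np.allclose(np.triu(l, 1), 0)     # L is lower triangular
assert np.allclose(q @ q.T, np.eye(3))   # rows of Q are orthonormal
```

The repo's contribution is the backward pass (gradients through L and Q), which this plain NumPy version does not cover.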
numerical_ml_algorithms_python
Python3 implementation of a few numerical algorithms.
transformer-recommender
Sequential recommendations with transformers.
transformers
🤗 Transformers: State-of-the-art Natural Language Processing for PyTorch and TensorFlow 2.0.
transformers-retrieval-ranking-nli-ECIR2021
Multilingual retrieval, ranking, and natural language inference with transformers (mBERT); PyTorch implementation code for an article at the European Conference on Information Retrieval (ECIR 2021).
smarter
Next-gen smart VLM reasoner.
D-Roberts's Repositories
D-Roberts/lq-backprop
TensorFlow implementation of differentiable LQ matrix decomposition for all matrix orders.
D-Roberts/transformers-retrieval-ranking-nli-ECIR2021
Multilingual retrieval, ranking, and natural language inference with transformers (mBERT); PyTorch implementation code for an article at the European Conference on Information Retrieval (ECIR 2021).
D-Roberts/fundamental_algorithms_patterns_python3
D-Roberts/numerical_ml_algorithms_python
Python3 implementation of a few numerical algorithms.
D-Roberts/addons
Useful extra functionality for TensorFlow 2.x maintained by SIG-addons
D-Roberts/alphageometry
D-Roberts/champagne
The official codebase for the paper ":champagne: CHAMPAGNE: Learning Real-world Conversation from Large-Scale Web Videos"
D-Roberts/CyCLIP
D-Roberts/diffusers
🤗 Diffusers: State-of-the-art diffusion models for image and audio generation in PyTorch
D-Roberts/FTP
This repo hosts the code for the Fast Trainable Projection (FTP) project.
D-Roberts/llama
Inference code for LLaMA models
D-Roberts/LLaVA
[NeurIPS'23 Oral] Visual Instruction Tuning (LLaVA) built towards GPT-4V level capabilities and beyond.
D-Roberts/llmstep
llmstep: [L]LM proofstep suggestions in Lean 4.
D-Roberts/lm-evaluation-harness
A framework for few-shot evaluation of autoregressive language models.
D-Roberts/M3L
Official repository for "Missing Modality Robustness in Semi-Supervised Multi-Modal Semantic Segmentation"
D-Roberts/math-lm
D-Roberts/open_clip
An open source implementation of CLIP.
D-Roberts/open_flamingo
An open-source framework for training large multimodal models
D-Roberts/prismatic-vlms
A flexible and efficient codebase for training visually-conditioned language models (VLMs)
D-Roberts/Prompt-Engineering-Guide
🐙 Guides, papers, lectures, notebooks, and resources for prompt engineering
D-Roberts/segment-anything
This repository provides code for running inference with the Segment Anything Model (SAM), links for downloading the trained model checkpoints, and example notebooks that show how to use the model.
D-Roberts/SMART
Training and testing code from our CVPR 2023 paper "Are Deep Neural Networks SMARTer than Second Graders?"
D-Roberts/stable-diffusion
Latent Text-to-Image Diffusion
D-Roberts/StableLM
StableLM: Stability AI Language Models
D-Roberts/superpoint_transformer
Official PyTorch implementation of Superpoint Transformer introduced in "Efficient 3D Semantic Segmentation with Superpoint Transformer"
D-Roberts/SwiftFormer
SwiftFormer: Efficient Additive Attention for Transformer-based Real-time Mobile Vision Applications
D-Roberts/transformers
🤗 Transformers: State-of-the-art Natural Language Processing for PyTorch and TensorFlow 2.0.
D-Roberts/vlm-evaluation
VLM Evaluation: Benchmark for VLMs, spanning text generation tasks from VQA to Captioning
D-Roberts/x-transformers
A simple but complete full-attention transformer with a set of promising experimental features from various papers
D-Roberts/diffseg
DiffSeg is an unsupervised zero-shot segmentation method using attention information from a stable-diffusion model. This repo implements the main DiffSeg algorithm and additionally includes an experimental feature to add semantic labels to the masks based on a generated caption.