Pinned Repositories
ACL_WASSA
Code for WASSA'22 shared task.
arxiv-switch
continual_vqa
cookie
A see-through automatic differentiation library.
imgnet_preproc
Preprocessing utils for ImageNet-1k
keras_build_test
A build notebook for Keras.
mitigating_bias
noob_speedrun
regnets_trainer
regnety
Implementation of RegNetY in TensorFlow 2
AdityaKane2001's Repositories
AdityaKane2001/AdityaKane2001.github.io
GitHub Pages template for academic personal websites, forked from mmistakes/minimal-mistakes
AdityaKane2001/clox
AdityaKane2001/xai-ood
AdityaKane2001/cookie
A see-through automatic differentiation library.
AdityaKane2001/AdityaKane2001
AdityaKane2001/DiM
Distilling a dataset into generative models
AdityaKane2001/DomainBed
DomainBed is a suite to test domain generalization algorithms
AdityaKane2001/eml-proj
AdityaKane2001/eva
Database system for building simpler and faster AI-powered applications
AdityaKane2001/examples
A set of examples around PyTorch in Vision, Text, Reinforcement Learning, etc.
AdityaKane2001/expense-manager
AdityaKane2001/fairseq
Facebook AI Research Sequence-to-Sequence Toolkit written in Python.
AdityaKane2001/fastgrad
AdityaKane2001/group44
AdityaKane2001/group44-project
AdityaKane2001/jlox
AdityaKane2001/learn-assembly
AdityaKane2001/LLaVA
[NeurIPS'23 Oral] Visual Instruction Tuning (LLaVA) built towards GPT-4V level capabilities and beyond.
AdityaKane2001/NAT
[CVPR 2023] Neighborhood Attention Transformer and [arXiv] Dilated Neighborhood Attention Transformer repository.
AdityaKane2001/NATTEN
Neighborhood Attention Extension. Bringing attention to a neighborhood near you!
AdityaKane2001/py-cpp-bind
AdityaKane2001/pyinstall
AdityaKane2001/python-snippets
AdityaKane2001/pytorch_resnet_cifar10
Proper implementation of ResNets for CIFAR-10/100 in PyTorch that matches the description in the original paper.
AdityaKane2001/robustness
A library for experimenting with, training and evaluating neural networks, with a focus on adversarial robustness.
AdityaKane2001/sem8
SPPU Computer BE-Sem 2 Assignments
AdityaKane2001/ToMe
A method to increase the speed and lower the memory footprint of existing vision transformers.
AdityaKane2001/tomesd
Speed up Stable Diffusion with this one simple trick!
AdityaKane2001/transformers
🤗 Transformers: State-of-the-art Machine Learning for PyTorch, TensorFlow, and JAX.
AdityaKane2001/Video-LLaVA
Video-LLaVA: Learning United Visual Representation by Alignment Before Projection