inductive-biases

There are 13 repositories under the inductive-biases topic.

  • shikhartuli/cnn_txf_bias

    [CogSci'21] Study of human inductive biases in CNNs and Transformers.

    Language: Jupyter Notebook
  • sayakpaul/deit-tf

    Includes PyTorch -> Keras model porting code for DeiT models, with fine-tuning and inference notebooks (see the weight-layout sketch after this list).

    Language: Jupyter Notebook
  • dalab/matrix-manifolds

    Source code for the "Computationally Tractable Riemannian Manifolds for Graph Embeddings" paper

    Language: Python
  • tkasarla/max-separation-as-inductive-bias

    GitHub code for the paper "Maximum Class Separation as Inductive Bias in One Matrix" (arXiv: https://arxiv.org/abs/2206.08704); see the prototype-construction sketch after this list.

    Language: Python
  • rfeinman/learning-to-learn

    Code for "Learning Inductive Biases with Simple Neural Networks" (Feinman & Lake, 2018).

    Language: Python
  • cambridgeltl/ECNMT

    Emergent Communication Pretraining for Few-Shot Machine Translation

    Language: Python
  • sayakpaul/vision-transformers-tf

    A non-exhaustive collection of vision transformer models implemented in TensorFlow.

  • christos42/inductive_bias_IE

    An Information Extraction Study: Take In Mind the Tokenization! (the paper's official repository)

    Language: Shell
  • mahsa91/GKD-MICCAI2021

    Implementation code for "GKD: Semi-supervised Graph Knowledge Distillation for Graph-Independent Inference", accepted at Medical Image Computing and Computer Assisted Intervention (MICCAI 2021).

    Language: Python
  • NeurAI-Lab/InBiaseD

    This is the official code for the CoLLAs 2022 paper "InBiaseD: Inductive Bias Distillation to Improve Generalization and Robustness through Shape-awareness".

    Language: Python
  • vahidzee/nads

    Utility repository for processing and visualizing NADs (neural anisotropy directions) of arbitrary PyTorch models.

    Language: Python
  • zdxdsw/inductive_counting_with_LMs

    This work provides extensive empirical results on training LMs to count. We find that while traditional RNNs achieve inductive counting trivially, Transformers have to rely on positional embeddings to count out-of-domain. Modern RNNs (e.g., RWKV, Mamba) also largely underperform traditional RNNs at generalizing counting inductively.

    Language: Jupyter Notebook
  • FieteLab/Exact-Inductive-Bias

    Towards Exact Computation of Inductive Bias (IJCAI 2024)

    Language: Python
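
On the sayakpaul/deit-tf entry above: porting a model between frameworks largely comes down to re-ordering each weight tensor into the target framework's layout. Below is a minimal sketch of the two most common conversions; the function names are illustrative (not taken from the repo), and a real port like deit-tf handles many more parameter types (layer norms, attention projections, patch embeddings, etc.).

```python
import numpy as np

def pt_conv2d_to_keras(w: np.ndarray) -> np.ndarray:
    """PyTorch Conv2d stores weights as (out_ch, in_ch, kH, kW);
    Keras Conv2D expects kernels as (kH, kW, in_ch, out_ch)."""
    return w.transpose(2, 3, 1, 0)

def pt_linear_to_keras(w: np.ndarray) -> np.ndarray:
    """PyTorch Linear stores weights as (out_features, in_features);
    Keras Dense expects kernels as (in_features, out_features)."""
    return w.T

# Example: copy a converted kernel (plus an unchanged bias vector b)
# into an already-built Keras layer:
#   keras_layer.set_weights([pt_linear_to_keras(w), b])
```

Bias vectors need no reshaping, since both frameworks store them as flat (out,) arrays; the transpositions above are where most porting bugs hide.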
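On the tkasarla/max-separation-as-inductive-bias entry above: the paper's idea is to fix the classifier to a set of maximally separated class vectors (the vertices of a regular simplex) rather than learning it, and it derives that matrix with a recursive construction. The sketch below is an assumed NumPy alternative (re-centered standard basis vectors) that yields the same maximum-separation property; treat it as illustrative rather than the repo's exact code.

```python
import numpy as np

def simplex_prototypes(num_classes: int) -> np.ndarray:
    """Vertices of a regular simplex: `num_classes` unit vectors whose
    pairwise cosine similarity is -1/(C-1), the maximum possible separation."""
    c = num_classes
    protos = np.eye(c) - np.full((c, c), 1.0 / c)  # re-center basis vectors
    protos /= np.linalg.norm(protos, axis=1, keepdims=True)
    return protos  # shape (C, C); rows span a (C-1)-dimensional subspace

P = simplex_prototypes(10)
gram = P @ P.T
off_diag = gram[~np.eye(10, dtype=bool)]
assert np.allclose(off_diag, -1.0 / 9.0)  # every pair equally, maximally apart
```

With the prototypes fixed, logits are just dot products between the network's embedding and these rows, so maximum class separation is imposed as an architectural constraint instead of a learned quantity.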