kl-divergence
There are 46 repositories under the kl-divergence topic.
yoyolicoris/pytorch-NMF
A PyTorch package for non-negative matrix factorization.
jhoon-oh/kd_data
IJCAI 2021, "Comparing Kullback-Leibler Divergence and Mean Squared Error Loss in Knowledge Distillation"
Khamies/LSTM-Variational-AutoEncoder
A PyTorch implementation of "Generating Sentences from a Continuous Space" (Bowman et al., 2015).
zheng-yanan/techniques-for-kl-vanishing
This repository summarizes techniques for the KL-divergence vanishing problem.
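One widely used remedy for KL vanishing in VAE training is KL annealing: scale the KL term up gradually so the decoder learns to use the latent code before the full penalty applies. A minimal sketch of a linear schedule (the `warmup_steps` value is an illustrative assumption, not taken from the repository above):

```python
def kl_weight(step, warmup_steps=10000):
    """Linear KL annealing: ramp the KL coefficient from 0 to 1 over
    warmup_steps training steps, then hold it at 1."""
    return min(1.0, step / warmup_steps)

# Typical use inside a VAE training loop:
#   loss = reconstruction_loss + kl_weight(step) * kl_term
print(kl_weight(0), kl_weight(5000), kl_weight(20000))  # 0.0 0.5 1.0
```

Cyclical variants restart the ramp periodically; the repository above surveys several such techniques.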
akshaykhadse/reinforcement-learning
Implementations of basic concepts under the reinforcement learning umbrella. This project is a collection of assignments from CS747: Foundations of Intelligent and Learning Agents (Autumn 2017) at IIT Bombay.
adamwespiser/variational-autoencoders
Experiments with variational autoencoders in Julia
SafeRoboticsLab/KLGame
Repository for "Blending Data-Driven Priors in Dynamic Games" - RSS 2024
choderalab/integrator-benchmark
Code for enumerating and evaluating numerical methods for Langevin dynamics using near-equilibrium estimates of the KL-divergence. Accompanies https://doi.org/10.3390/e20050318
nocotan/geodesical_skew_divergence
PyTorch implementation of α-geodesical skew divergence
rochitasundar/Generative-AI-with-Large-Language-Models
This repository contains the lab work for the Coursera course "Generative AI with Large Language Models".
wecarsoniv/beta-divergence-metrics
PyTorch implementations of the beta divergence loss.
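The beta divergence is a family that interpolates between several common losses. A minimal NumPy sketch of the standard definition (not the API of the repository above, which provides PyTorch loss modules):

```python
import numpy as np

def beta_divergence(x, y, beta):
    """Sum of element-wise beta divergences between positive arrays x and y.

    beta = 2 -> half squared Euclidean distance; beta = 1 -> generalized KL;
    beta = 0 -> Itakura-Saito divergence.
    """
    x, y = np.asarray(x, float), np.asarray(y, float)
    if beta == 1:
        return float(np.sum(x * np.log(x / y) - x + y))
    if beta == 0:
        return float(np.sum(x / y - np.log(x / y) - 1))
    return float(np.sum((x**beta + (beta - 1) * y**beta
                         - beta * x * y**(beta - 1)) / (beta * (beta - 1))))

x = np.array([1.0, 2.0, 3.0])
print(beta_divergence(x, x, 1))      # 0.0 for identical inputs
print(beta_divergence(x, x + 1, 2))  # half squared Euclidean distance = 1.5
```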
yuancoder222/KL-loss
KL-loss
ogunlao/foundations_of_ml
Machine Learning algorithms built from scratch for AMMI Machine Learning course
anishLearnsToCode/kl-divergence-images
Relative entropy, mutual information, and KL divergence of two given images 🖼
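A common way to compare two images with KL divergence is via their intensity histograms. A minimal sketch under that assumption (the repository's exact method may differ; the random arrays stand in for real images):

```python
import numpy as np

def image_kl(img_a, img_b, bins=256, eps=1e-10):
    """KL divergence D(a || b) between the grayscale intensity histograms
    of two images with pixel values in [0, 255]."""
    pa, _ = np.histogram(img_a, bins=bins, range=(0, 256))
    pb, _ = np.histogram(img_b, bins=bins, range=(0, 256))
    pa = pa / pa.sum() + eps  # normalize to probabilities; eps avoids log(0)
    pb = pb / pb.sum() + eps
    return float(np.sum(pa * np.log(pa / pb)))

rng = np.random.default_rng(0)
bright = rng.integers(100, 256, size=(64, 64))  # stand-in "bright" image
dark = rng.integers(0, 156, size=(64, 64))      # stand-in "dark" image
print(image_kl(bright, bright))  # 0.0: identical intensity distributions
print(image_kl(bright, dark))    # large: very different intensity profiles
```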
harsh306/GAN
Basic GANs with a variety of loss functions as an exercise for my thesis with Prof. Randy Paffenroth: KL, reverse-KL, JS, and Wasserstein GAN.
kyosek/change-point-detection-kl-divergence
Change point detection using KL divergence
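A simple sliding-window version of this idea scores each time step by the KL divergence between the empirical histograms of the windows immediately before and after it; peaks suggest change points. A hedged sketch (window size, bin count, and the synthetic signal are illustrative assumptions, not details of the repository above):

```python
import numpy as np

def kl_change_scores(x, window=50, bins=20, eps=1e-10):
    """Score position t by the KL divergence between the histograms of
    x[t-window:t] and x[t:t+window]; high scores suggest change points."""
    lo, hi = x.min(), x.max()
    scores = np.zeros(len(x))
    for t in range(window, len(x) - window):
        p, _ = np.histogram(x[t - window:t], bins=bins, range=(lo, hi))
        q, _ = np.histogram(x[t:t + window], bins=bins, range=(lo, hi))
        p = p / p.sum() + eps
        q = q / q.sum() + eps
        scores[t] = np.sum(p * np.log(p / q))
    return scores

rng = np.random.default_rng(1)
# Synthetic signal with a mean shift at t = 200
x = np.concatenate([rng.normal(0, 1, 200), rng.normal(3, 1, 200)])
scores = kl_change_scores(x)
print(int(np.argmax(scores)))  # typically near the true change point at 200
```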
HolmesShuan/Bias-Variance-Decomposition-for-KL-Divergence
This repository includes some detailed proofs of "Bias Variance Decomposition for KL Divergence".
mark-antal-csizmadia/variational-inference-gmm
Coordinate-ascent mean-field variational inference (CAVI) that maximizes the evidence lower bound (ELBO) by iteratively computing the optimal variational-factor parameter updates for clustering.
Moozzaart23/PlagiarismChecker
Implementation of KL Divergence and inverted vector model for plagiarism detection in text files
cadmiumcr/summarizer
A collection of summarizer algorithms
curvysquare/PPO-and-A2C-for-HULH-poker
My MSc project on applying, tuning, and modifying the PPO and A2C algorithms for the two-player poker game in the PettingZoo MARL library.
aneeshdurai/Entropy-Based-Techniques-in-Generative-Language-Models
In this project, we explore how entropy and information can be used in language models and how they can be optimized for generative tasks.
chandnii7/Natural-Language-Processing
NLP implementations including information-theoretic measures of distributional similarity, text preprocessing using shell commands, a Naive Bayes text-categorization model, and Cocke-Younger-Kasami parsing.
donlapark/Dirichlet-Mechanism
The Dirichlet Mechanism for Differentially Private KL Divergence Minimization
junlulocky/infopy
Python information theory computation
mark-antal-csizmadia/DD2434-VAE-Project
Replication of the research paper titled Auto-Encoding Variational Bayes.
mary-python/priestkldiv
Comparison of several approaches for the PRIvate ESTimation of KL-Divergence (PRIEST-KLD)
wasimusu/Paraphrase-Detection
Implements various measures of paraphrase detection on the Microsoft Paraphrase Corpus and compares their performance on the original high-dimensional TF-IDF matrix and its low-dimensional approximation.
edwarddramirez/svi-dist-fit
A novel technique to fit a target distribution with a class of distributions using SVI (via NumPyro). Unlike standard SVI, the "data" is a distribution rather than a finite collection of samples.
qiqinyi/GenAI-with-LLMs
My lab work for the "Generative AI with Large Language Models" course offered by DeepLearning.AI and Amazon Web Services on Coursera.
ron-taieb/SchedNoise-Diffusion
Implementation of diffusion models with varying noise distributions (Gaussian, GMM, Gamma) and scheduling techniques (cosine, sigmoid) to assess generative performance using KL divergence and dynamic scheduling approaches.
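For context on the scheduling side, the cosine schedule derives per-step betas from a cosine-shaped cumulative alpha-bar curve so that noise is added slowly at first. A minimal sketch of the standard formulation from Nichol & Dhariwal (2021), not the repository's own code:

```python
import math

def cosine_beta_schedule(T, s=0.008):
    """Cosine noise schedule: derive per-step betas from the cumulative
    signal-retention curve alpha_bar(t) = cos^2(((t/T + s)/(1 + s)) * pi/2)."""
    def alpha_bar(t):
        return math.cos((t / T + s) / (1 + s) * math.pi / 2) ** 2
    # beta_t = 1 - alpha_bar(t+1)/alpha_bar(t), clipped at 0.999 for stability
    return [min(1 - alpha_bar(t + 1) / alpha_bar(t), 0.999) for t in range(T)]

betas = cosine_beta_schedule(1000)
print(betas[0] < betas[-1])  # True: noise is added slowly early, faster late
```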
SatvikVarshney/IsingModelBoltzmannMachine
A fully transparent Boltzmann machine, trained on Monte Carlo-simulated 1-D Ising chain data, is implemented to predict model couplers in the absence of past coupler values. This work demonstrates machine-learning methods applied to theoretical physics.
antonio-f/kl-divergence
Kullback-Leibler divergence in Python
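For discrete distributions, the definition reduces to a one-line sum, D(p || q) = Σᵢ pᵢ log(pᵢ/qᵢ). A minimal standard-library sketch (not necessarily the repository's implementation):

```python
import math

def kl_divergence(p, q, eps=1e-12):
    """Discrete KL divergence D(p || q) = sum_i p_i * log(p_i / q_i).

    p and q are probability vectors over the same support; terms with
    p_i = 0 contribute nothing, and eps guards against division by zero."""
    return sum(pi * math.log(pi / max(qi, eps))
               for pi, qi in zip(p, q) if pi > 0)

p = [0.5, 0.3, 0.2]
q = [0.4, 0.4, 0.2]
print(kl_divergence(p, q))  # small positive value; note D(p||q) != D(q||p)
print(kl_divergence(p, p))  # 0.0: the divergence of a distribution from itself
```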
ErfanMomeniii/Variational-Autoencoder-on-MNIST
Generating grayscale images of numerical digits using a variational autoencoder.
satyampurwar/large-language-models
Unlocking the Power of Generative AI: In-Context Learning, Instruction Fine-Tuning and Reinforcement Learning Fine-Tuning.