taoleitian
First-year CS Ph.D. student at UW-Madison.
University of Wisconsin–Madison, Madison, WI
Pinned Repositories
npos
Source code for the ICLR'23 paper "Non-parametric Outlier Synthesis"
blogImage
CDVD-TSP
Official implementation of the CVPR 2020 paper "Cascaded Deep Video Deblurring Using Temporal Sharpness Prior"
chain-of-thought-hub
Benchmarking large language models' complex reasoning ability with chain-of-thought prompting
dpo
Reference implementation for DPO (Direct Preference Optimization)
dpo_wisc
KNN-OOD-Tao
KNN-based out-of-distribution detection
LLM_uncertainty
prompt-to-prompt
weak-to-strong
taoleitian's Repositories
taoleitian/KNN-OOD-Tao
KNN-based out-of-distribution detection
taoleitian/LLM_uncertainty
taoleitian/prompt-to-prompt
taoleitian/blogImage
taoleitian/CDVD-TSP
Official implementation of the CVPR 2020 paper "Cascaded Deep Video Deblurring Using Temporal Sharpness Prior"
taoleitian/chain-of-thought-hub
Benchmarking large language models' complex reasoning ability with chain-of-thought prompting
taoleitian/dpo
Reference implementation for DPO (Direct Preference Optimization)
taoleitian/dpo_wisc
taoleitian/faiss
A library for efficient similarity search and clustering of dense vectors.
taoleitian/HellwayXue.github.io
taoleitian/KNN-OOD
taoleitian/leitian-academic
taoleitian/PFAN
Pyramid Feature Alignment Network for Video Deblurring (CVPR submission ID 7855)
taoleitian/weak-to-strong
taoleitian/LMFlow
An Extensible Toolkit for Finetuning and Inference of Large Foundation Models. Large Models for All.
taoleitian/OpenRLHF
A Ray-based high-performance RLHF framework (7B models on an RTX 4090, 34B on an A100)
taoleitian/RLCD
Reproduction of "RLCD: Reinforcement Learning from Contrast Distillation for Language Model Alignment"
taoleitian/taoleitian.github.io
taoleitian/Transfer-Learning-Library
Transfer Learning Library for Domain Adaptation, Task Adaptation, and Domain Generalization
taoleitian/tree-of-thought-llm
[NeurIPS 2023] Tree of Thoughts: Deliberate Problem Solving with Large Language Models