Pinned Repositories
esper
ESPER
iab_practice_example
slurm-monitor
tapm
trex
turm_gpu
VisArgs
Corpus to accompany: "Selective Vision is the Challenge for Visual Reasoning: A Benchmark for Visual Argument Understanding"
vlis
vtt_challenge_2019
vtt_qa_pipeline
Video QA pipeline and baseline
JiwanChung's Repositories
JiwanChung/vlis
JiwanChung/esper
ESPER
JiwanChung/tapm
JiwanChung/vtt_challenge_2019
JiwanChung/VisArgs
Corpus to accompany: "Selective Vision is the Challenge for Visual Reasoning: A Benchmark for Visual Argument Understanding"
JiwanChung/turm_gpu
JiwanChung/slurm-monitor
JiwanChung/trex
JiwanChung/vtt_qa_pipeline
Video QA pipeline and baseline
JiwanChung/iab_practice_example
JiwanChung/long-story-short
JiwanChung/mass_aaai
JiwanChung/rgoto
JiwanChung/acav-private.github.io
JiwanChung/acav100m
ACAV100M: Automatic Curation of Large-Scale Datasets for Audio-Visual Video Representation Learning. In ICCV, 2021.
JiwanChung/alignment-handbook
Robust recipes to align language models with human and AI preferences
JiwanChung/CLIP_JAX
Contrastive Language-Image Pretraining
JiwanChung/deeplearning_pytorch_tutorial
JiwanChung/dotfiles_public
JiwanChung/einops
Deep learning operations reinvented (for PyTorch, TensorFlow, JAX, and others)
JiwanChung/jiwan
JiwanChung/jiwan_chung
JiwanChung/JiwanChung.github.io
JiwanChung/LM_Memorization
Training data extraction on GPT-2
JiwanChung/parallax
A tool for automatic parallelization of deep learning training in distributed multi-GPU environments.
JiwanChung/SlowFast
PySlowFast: video understanding codebase from FAIR for reproducing state-of-the-art video models.
JiwanChung/transformers
🤗 Transformers: State-of-the-art Natural Language Processing for TensorFlow 2.0 and PyTorch.
JiwanChung/trl
Train transformer language models with reinforcement learning.
JiwanChung/VisualPun_UNPIE
Corpus to accompany: "Can visual language models resolve textual ambiguity with visual cues? Let visual puns tell you!"
JiwanChung/xlinkBook
A web-based research management tool that handles big data, for everyone