Pinned Repositories
Fi_GNN
[CIKM 2019] Code and dataset for "Fi-GNN: Modeling Feature Interactions via Graph Neural Networks for CTR Prediction"
GraphCTR
This repo includes some graph-based CTR prediction models and other representative baselines.
Adv-Instruct-Eval
dialogic
[EMNLP 2022] Code and data for "Controllable Dialogue Simulation with In-Context Learning"
Directional-Stimulus-Prompting
[NeurIPS 2023] Codebase for the paper: "Guiding Large Language Models with Directional Stimulus Prompting"
FnCTOD
Official code for the publication "Large Language Models as Zero-shot Dialogue State Tracker through Function Calling" https://arxiv.org/abs/2402.10466
instruction-following-robustness-eval
InternLM-XComposer
InternLM-XComposer2 is a groundbreaking vision-language large model (VLLM) excelling in free-form text-image composition and comprehension.
MMSci
MMSci: A Multimodal Multi-Discipline Dataset for PhD-Level Scientific Comprehension
ViTST
[NeurIPS 2023] The official repo for the paper: "Time Series as Images: Vision Transformer for Irregularly Sampled Time Series"
Leezekun's Repositories
Leezekun/ViTST
[NeurIPS 2023] The official repo for the paper: "Time Series as Images: Vision Transformer for Irregularly Sampled Time Series"
Leezekun/Directional-Stimulus-Prompting
[NeurIPS 2023] Codebase for the paper: "Guiding Large Language Models with Directional Stimulus Prompting"
Leezekun/dialogic
[EMNLP 2022] Code and data for "Controllable Dialogue Simulation with In-Context Learning"
Leezekun/MMSci
MMSci: A Multimodal Multi-Discipline Dataset for PhD-Level Scientific Comprehension
Leezekun/Adv-Instruct-Eval
Leezekun/instruction-following-robustness-eval
Leezekun/InternLM-XComposer
InternLM-XComposer2 is a groundbreaking vision-language large model (VLLM) excelling in free-form text-image composition and comprehension.
Leezekun/pptod
Multi-Task Pre-Training for Plug-and-Play Task-Oriented Dialogue System (ACL 2022)
Leezekun/Tool-Planner
Tool-Planner: Dynamic Solution Tree Planning for Large Language Model with Tool Clustering
Leezekun/apachecn-ds-zh
:book: [Translation] ApacheCN Data Science Translation Collection
Leezekun/awesome-AI-for-time-series-papers
A professional list of Papers, Tutorials, and Surveys on AI for Time Series in top AI conferences and journals.
Leezekun/FnCTOD
Official code for the publication "Large Language Models as Zero-shot Dialogue State Tracker through Function Calling" https://arxiv.org/abs/2402.10466
Leezekun/chat_templates
Chat Templates for HuggingFace Large Language Models
Leezekun/crowd-sampling-1
Follow the Wisdom of the Crowd: Effective Text Generation via Minimum Bayes Risk Decoding
Leezekun/DL4MT
The repo for the UCSB CS291K DL4MT homework.
Leezekun/DSP-Prompting
Leezekun/jailbreak_llms
[CCS'24] A dataset of 6,387 ChatGPT prompts from Reddit, Discord, websites, and open-source datasets (including 666 jailbreak prompts).
Leezekun/leezekun.github.io
Github Pages template for academic personal websites, forked from mmistakes/minimal-mistakes
Leezekun/LLaVA
[NeurIPS'23 Oral] Visual Instruction Tuning (LLaVA) built towards GPT-4V level capabilities and beyond.
Leezekun/mmsci.github.io
The project website for "MMSci: A Multimodal Multi-Discipline Dataset for PhD-Level Scientific Comprehension"
Leezekun/multiwoz
Source code for end-to-end dialogue model from the MultiWOZ paper (Budzianowski et al. 2018, EMNLP)
Leezekun/mvts_transformer
Multivariate Time Series Transformer, public version
Leezekun/OPERA
[CVPR 2024] OPERA: Alleviating Hallucination in Multi-Modal Large Language Models via Over-Trust Penalty and Retrospection-Allocation
Leezekun/question_generation
Neural question generation using transformers
Leezekun/Raindrop
Graph Neural Networks for Irregular Time Series
Leezekun/SGConv
Leezekun/transformers
🤗 Transformers: State-of-the-art Machine Learning for PyTorch, TensorFlow, and JAX.
Leezekun/trl
Train transformer language models with reinforcement learning.
Leezekun/tuning_playbook
A playbook for systematically maximizing the performance of deep learning models.
Leezekun/zeno-build
Build, evaluate, analyze, and understand LLM-based apps