Pinned Repositories
academic-resume
A LaTeX resume template for academic and professional use
alpaca-lora
Instruct-tune LLaMA on consumer hardware
An-Attention-based-Spatiotemporal-LSTM-Network-for-Next-POI-Recommendation
A Python implementation of "An Attention-based Spatiotemporal LSTM Network for Next POI Recommendation"
CTVI-master
For ICDM 2021.
GEmodel-CIKM16-Reproduce
A reproduction of the CIKM'16 paper "Learning Graph-based POI Embedding for Location-based Recommendation"
LINE-Large-Scale-Information-Network-Embedding-Python
nlp-tutorial
Natural Language Processing Tutorial for Deep Learning Researchers
PPR-master
An implementation of the POI recommendation model PPR.
Spatial-Temporal-Attention-Network-for-POI-Recommendation
Code for a WWW'21 paper. A state-of-the-art recommender system for location/trajectory prediction.
TKDD
dsj96's Repositories
dsj96/PPR-master
An implementation of the POI recommendation model PPR.
dsj96/AlpacaDataCleaned
Alpaca dataset from Stanford, cleaned and curated
dsj96/bert_score
BERT score for text generation
dsj96/camel
🐫 CAMEL: Communicative Agents for "Mind" Exploration of Large Language Model Society (NeurIPS 2023) https://www.camel-ai.org
dsj96/ChatGPT-Next-Web
A well-designed cross-platform ChatGPT UI (Web / PWA / Linux / Win / MacOS). Deploy your own cross-platform ChatGPT app with one click.
dsj96/COMET
A Neural Framework for MT Evaluation
dsj96/deita
Deita: Data-Efficient Instruction Tuning for Alignment [ICLR2024]
dsj96/detect-pretrain-code
An original implementation of "Detecting Pretraining Data from Large Language Models" by *Weijia Shi, *Anirudh Ajith, Mengzhou Xia, Yangsibo Huang, Daogao Liu, Terra Blevins, Danqi Chen, and Luke Zettlemoyer.
dsj96/DSE
Spatio-Temporal Representation Learning with Social Tie for Personalized POI Recommendation
dsj96/easy-rl
A Chinese reinforcement learning tutorial (the "Mushroom Book"); read online at https://datawhalechina.github.io/easy-rl/
dsj96/GPT-4-LLM
Instruction Tuning with GPT-4
dsj96/joeynmt
Minimalist NMT for educational purposes
dsj96/LLaMA-Factory
A WebUI for Efficient Fine-Tuning of 100+ LLMs (ACL 2024)
dsj96/llama2.c
Inference Llama 2 in one file of pure C
dsj96/lm-evaluation-harness
A framework for few-shot evaluation of language models.
dsj96/MEGABYTE-pytorch
Implementation of MEGABYTE, Predicting Million-byte Sequences with Multiscale Transformers, in PyTorch
dsj96/MetaGPT
🌟 The Multi-Agent Framework: given a one-line requirement, returns a PRD, design, tasks, and repo
dsj96/mt-bigscience
Evaluation results for Machine Translation within the BigScience project
dsj96/neural-compressor
Intel® Neural Compressor (formerly Intel® Low Precision Optimization Tool) provides unified APIs for network compression techniques, such as low-precision quantization, sparsity, pruning, and knowledge distillation, across deep learning frameworks to pursue optimal inference performance.
dsj96/NLP-progress
Repository to track the progress in Natural Language Processing (NLP), including the datasets and the current state-of-the-art for the most common NLP tasks.
dsj96/Pareto-Mutual-Distillation
Implementation of Pareto Mutual Distillation (paper: "Towards Higher Pareto Frontier in Multilingual Machine Translation")
dsj96/peft
🤗 PEFT: State-of-the-art Parameter-Efficient Fine-Tuning.
dsj96/prize
A prize for finding tasks that cause large language models to show inverse scaling
dsj96/Prompt-Engineering-Guide
🐙 Guides, papers, lectures, notebooks, and resources for prompt engineering
dsj96/RankGPT
Is ChatGPT Good at Search? LLMs as Re-Ranking Agent [EMNLP 2023 Outstanding Paper Award]
dsj96/ReAct
[ICLR 2023] ReAct: Synergizing Reasoning and Acting in Language Models
dsj96/ReAgent
A platform for Reasoning systems (Reinforcement Learning, Contextual Bandits, etc.)
dsj96/Rememberer
Rememberer & RLEM
dsj96/SCM4LLMs
Self-Controlled Memory System for LLMs
dsj96/Simple_LLM_DPO