Pinned Repositories
LLMsEasyFinetune
An easy-to-run implementation for finetuning large language models (LLMs) such as Llama and Gemma, supporting full-parameter finetuning, LoRA, and QLoRA.
paper-assistant
This repo implements an easily deployed assistant that helps you understand research papers, especially scholarly papers, supporting English, Chinese, and other languages. We provide a web UI and a demo.
llama.cpp
LLM inference in C/C++
abcboost_unlearning
Brain_Decoding_Analysis
A repository for brain decoding analysis.
fMRI_parameter-free_attention
This repo includes the experiment code and results for the Skip Attention Module (SAM), a parameter-free attention module used in fMRI decoding to improve decoding performance. It can be stacked on any convolutional neural network architecture without introducing trainable parameters and adds no training overhead.
GBDT_unlearning
The implementation for the paper "Machine Unlearning in Gradient Boosting Decision Trees" (accepted at KDD 2023), supporting training and unlearning.
HCP_Dataset_Download_Automatically_Script
This script automatically downloads the HCP dataset from Amazon S3 using Python. You can download tfMRI, rfMRI, dfMRI, MEG, and other datasets from the HCP Amazon S3 bucket.
huawei-lin.github.io
AcadHomepage: A Modern and Responsive Academic Personal Homepage
RapidIn
The implementation for the paper "Token-wise Influential Training Data Retrieval for Large Language Models" (accepted at ACL 2024).
huawei-lin's Repositories
huawei-lin/HCP_Dataset_Download_Automatically_Script
This script automatically downloads the HCP dataset from Amazon S3 using Python. You can download tfMRI, rfMRI, dfMRI, MEG, and other datasets from the HCP Amazon S3 bucket.
huawei-lin/fMRI_parameter-free_attention
This repo includes the experiment code and results for the Skip Attention Module (SAM), a parameter-free attention module used in fMRI decoding to improve decoding performance. It can be stacked on any convolutional neural network architecture without introducing trainable parameters and adds no training overhead.
huawei-lin/Brain_Decoding_Analysis
A repository for brain decoding analysis.
huawei-lin/GBDT_unlearning
The implementation for the paper "Machine Unlearning in Gradient Boosting Decision Trees" (accepted at KDD 2023), supporting training and unlearning.
huawei-lin/RapidIn
The implementation for the paper "Token-wise Influential Training Data Retrieval for Large Language Models" (accepted at ACL 2024).
huawei-lin/abcboost_unlearning
huawei-lin/huawei-lin.github.io
AcadHomepage: A Modern and Responsive Academic Personal Homepage
huawei-lin/machine_unlearning
Existing Literature about Machine Unlearning
huawei-lin/huawei-lin
huawei-lin/learnable_watermarking
huawei-lin/llama.cpp
Port of Facebook's LLaMA model in C/C++
huawei-lin/LLMsInfluenceFunc
huawei-lin/pdf-assistant
This repo implements an easily deployed assistant that helps you understand PDF files, especially scholarly papers, supporting English, Chinese, and other languages. We provide a web UI and a demo.
huawei-lin/RapidIn_Private_Submission