Pinned Repositories
2023-awesome-instruction-learning
Papers and Datasets on Instruction Learning / Instruction Tuning. ✨✨✨
2024_acl_paying_attention_to_the_source
The official repository of the ACL 2024 Findings paper "Paying More Attention to Source Context: Mitigating Unfaithful Translations from Large Language Model"
ATLOP
Source code for paper "Document-Level Relation Extraction with Adaptive Thresholding and Localized Context Pooling", AAAI 2021
cnn-rnf
Convolutional Neural Networks with Recurrent Neural Filters
Code-ZSRE
The official source code of our paper: "Enhancing Semantic Correlation between Instances and Relations for Zero-Shot Relation Extraction", Journal of Natural Language Processing.
mtl-da-emnlp
Code to reproduce the experiments presented in the EMNLP 2021 paper "Rethinking data augmentation for low-resource neural machine translation: a multi-task learning approach"
neat-vision
Neat (Neural Attention) Vision is a visualization tool for the attention mechanisms of deep-learning models for Natural Language Processing (NLP) tasks. (framework-agnostic)
OTE-MTL
Code and dataset for Findings of EMNLP 2020 paper titled "A Multi-task Learning Framework for Opinion Triplet Extraction"
trimf
vhientran.github.io
My personal website
vhientran's Repositories
vhientran/Code-ZSRE
The official source code of our paper: "Enhancing Semantic Correlation between Instances and Relations for Zero-Shot Relation Extraction", Journal of Natural Language Processing.
vhientran/2024_acl_paying_attention_to_the_source
The official repository of the ACL 2024 Findings paper "Paying More Attention to Source Context: Mitigating Unfaithful Translations from Large Language Model"
vhientran/vhientran.github.io
My personal website
vhientran/2022NAACL-TreeMix
Code for the NAACL 2022 paper "TreeMix"
vhientran/2023-awesome-instruction-learning
Papers and Datasets on Instruction Learning / Instruction Tuning. ✨✨✨
vhientran/2023-TreeSwap
Complementary code for our paper "TreeSwap: Data Augmentation for Machine Translation via Dependency Subtree Swapping" (RANLP 2023)
vhientran/2023-Adaptive-MT-LLM-Fine-tuning
Fine-tuning Mistral LLM for Adaptive Machine Translation
vhientran/2023-HOLLY-benchmark
This repository contains a dataset for semantically appropriate application of lexical constraints in NMT.
vhientran/2023-Perturbation-basedQE
Perturbation-based QE: An Explainable, Unsupervised Word-level Quality Estimation Method for Blackbox Machine Translation
vhientran/2023-PMC-LLaMA
The official codes for "PMC-LLaMA: Towards Building Open-source Language Models for Medicine"
vhientran/2023-TIM
Code for the paper "Teaching LM to Translate with Comparison"
vhientran/2023-WICL
Code for EMNLP 2023 Findings paper: "Not All Demonstration Examples are Equally Beneficial: Reweighting Demonstration Examples for In-Context Learning"
vhientran/2024naacl_icl_anti-lm_decoding
vhientran/AAAI23-Mono4SiMT
Implementation of the paper “Improving Simultaneous Machine Translation with Monolingual Data”.
vhientran/Awesome-LLM-MT
vhientran/CAD
Unofficial re-implementation of "Trusting Your Evidence: Hallucinate Less with Context-aware Decoding"
vhientran/CLaP
Code for our NAACL-2024 paper "Contextual Label Projection for Cross-Lingual Structured Prediction"
vhientran/DecoMT
DecoMT
vhientran/EMNLP2023_ParroT
The ParroT framework enhances and regulates the translation abilities of chat-based, open-source LLMs (e.g., LLaMA-7b, BLOOMZ-7b1-mt) using human-written translation and evaluation data.
vhientran/ICCASSP23-NMT-targeted-attack
Adversarial attacks against NMT
vhientran/knn-seq
Efficient, Extensible kNN-MT Framework
vhientran/LLMSurvey
The official GitHub page for the survey paper "A Survey of Large Language Models".
vhientran/LMFlow
An Extensible Toolkit for Finetuning and Inference of Large Foundation Models. Large Models for All.
vhientran/LoRA
Code for loralib, an implementation of "LoRA: Low-Rank Adaptation of Large Language Models"
vhientran/Okapi
Okapi: Instruction-tuned Large Language Models in Multiple Languages with Reinforcement Learning from Human Feedback
vhientran/peft
🤗 PEFT: State-of-the-art Parameter-Efficient Fine-Tuning.
vhientran/PFT-ef-EAMT23
Terminology experiments on the Canadian Hansard (EAMT 2023)
vhientran/qlora
QLoRA: Efficient Finetuning of Quantized LLMs
vhientran/swie_overmiss_llm4mt
Code for "Improving Translation Faithfulness of Large Language Models via Augmenting Instructions"
vhientran/translation_llm