Pinned Repositories
acl2018-semantic-parsing-tutorial
Materials from the ACL 2018 tutorial on neural semantic parsing
AmbigQA
AmbigQA: Answering Ambiguous Open-domain Questions
bart-closed-book-qa
A BART version of an open-domain QA model in a closed-book setup
bert
TensorFlow code and pre-trained models for BERT
bert-as-service
Mapping a variable-length sentence to a fixed-length vector using a BERT model
bi-att-flow
The Bi-directional Attention Flow (BiDAF) network is a multi-stage hierarchical model that represents the context at different levels of granularity and uses a bi-directional attention flow mechanism to obtain a query-aware context representation without early summarization.
capsule-mrc
A capsule-based model for opinion-oriented machine reading comprehension
cmrc2017
The First Evaluation Workshop on Chinese Machine Reading Comprehension (CMRC 2017)
CNN_for_classification
SpanBERT
Code for using and evaluating SpanBERT.
ldruth28's Repositories
ldruth28/SpanBERT
Code for using and evaluating SpanBERT.
ldruth28/AmbigQA
AmbigQA: Answering Ambiguous Open-domain Questions
ldruth28/bart-closed-book-qa
A BART version of an open-domain QA model in a closed-book setup
ldruth28/bert
TensorFlow code and pre-trained models for BERT
ldruth28/bert-as-service
Mapping a variable-length sentence to a fixed-length vector using a BERT model
ldruth28/bi-att-flow
The Bi-directional Attention Flow (BiDAF) network is a multi-stage hierarchical model that represents the context at different levels of granularity and uses a bi-directional attention flow mechanism to obtain a query-aware context representation without early summarization.
ldruth28/capsule-mrc
A capsule-based model for opinion-oriented machine reading comprehension
ldruth28/CNN_for_classification
ldruth28/CorefQA
Code base for the paper `CorefQA: Coreference Resolution as Query-based Span Prediction`
ldruth28/CQG
Code for the ACL 2022 paper “CQG: A Simple and Effective Controlled Generation Framework for Multi-hop Question Generation”
ldruth28/DecomP
Repository for Decomposed Prompting
ldruth28/DeepLearning-500-questions
500 Questions on Deep Learning: a question-and-answer treatment of frequently used topics in probability, linear algebra, machine learning, deep learning, computer vision, and other areas of current interest, written to help the author and interested readers. The book has 15 chapters and nearly 200,000 characters. Owing to the author's limited expertise, readers are kindly asked to point out any shortcomings. To be continued... For collaboration inquiries, contact scutjy2015@163.com. All rights reserved; infringement will be prosecuted. Tan 2018.06
ldruth28/Diffusion-LM-
Diffusion-LM
ldruth28/distance-encoding
Distance Encoding for GNN Design
ldruth28/FlowQA
Implementation of the conversational QA model FlowQA (with slight improvements)
ldruth28/FOLK
ldruth28/graph_nets
Build Graph Nets in Tensorflow
ldruth28/KdConv
KdConv: A Chinese Multi-domain Dialogue Dataset Towards Multi-turn Knowledge-driven Conversation
ldruth28/LARK
LAnguage Representations Kit
ldruth28/mt-dnn
Multi-Task Deep Neural Networks for Natural Language Understanding
ldruth28/PrefixTuning
Prefix-Tuning: Optimizing Continuous Prompts for Generation
ldruth28/PRMLT
MATLAB code for the machine learning algorithms in the book PRML (Pattern Recognition and Machine Learning)
ldruth28/pyHGT
PyTorch code for the WWW'20 paper "Heterogeneous Graph Transformer", based on pytorch_geometric
ldruth28/pytorch-tutorial
PyTorch Tutorial for Deep Learning Researchers
ldruth28/ReAct
[ICLR 2023] ReAct: Synergizing Reasoning and Acting in Language Models
ldruth28/SG-Deep-Question-Generation
This repository contains code and models for the paper: Semantic Graphs for Generating Deep Questions (ACL 2020).
ldruth28/Task-Oriented-Dialogue-Dataset-Survey
A survey of task-oriented dialogue datasets, including recent ones.
ldruth28/tensorflow
An Open Source Machine Learning Framework for Everyone
ldruth28/transferlearning-tutorial
LaTeX source for the book A Concise Handbook of Transfer Learning (《迁移学习简明手册》)
ldruth28/transformer
A TensorFlow Implementation of the Transformer: Attention Is All You Need