Moxinli's Stars
exacity/deeplearningbook-chinese
Deep Learning Book Chinese Translation
AMAI-GmbH/AI-Expert-Roadmap
Roadmap to becoming an Artificial Intelligence Expert in 2022
ymcui/Chinese-BERT-wwm
Pre-Training with Whole Word Masking for Chinese BERT (the Chinese BERT-wwm model series)
princewen/tensorflow_practice
Hands-on TensorFlow exercises, covering reinforcement learning, recommender systems, NLP, and more
Kyubyong/transformer
A TensorFlow Implementation of the Transformer: Attention Is All You Need
stanford-futuredata/ColBERT
ColBERT: state-of-the-art neural search (SIGIR'20, TACL'21, NeurIPS'21, NAACL'22, CIKM'22, ACL'23, EMNLP'23)
rguo12/awesome-causality-algorithms
An index of algorithms for learning causality with data
0voice/campus_recruitmen_questions
Updated for 2021: 5,000 interview questions (with answers) for autumn recruitment, early-batch recruitment, and spring recruitment, including LeetCode problems, campus written-test questions, interview questions, algorithm problems, and syntax questions.
nushackers/notes-to-cs-freshmen-from-the-future
Notes to (NUS) Computer Science Freshmen, From The Future (Original by @ejamesc)
google-research/tapas
End-to-end neural table-text understanding models.
causaltext/causal-text-papers
Curated research at the intersection of causal inference and natural language processing.
facebookresearch/TaBERT
This repository contains source code for the TaBERT model, a pre-trained language model for learning joint representations of natural language utterances and (semi-)structured tables for semantic parsing. TaBERT is pre-trained on a massive corpus of 26M web tables and their associated natural language context, and can be used as a drop-in replacement for a semantic parser's original encoder to compute representations for utterances and table schemas (columns).
nyu-dl/dl4marco-bert
Wuziyi616/Graduate_Application
Documents used for grad school applications
llamazing/numnet_plus
This is the official code repository for NumNet+ (https://leaderboard.allenai.org/drop/submission/blu418v76glsbnh1qvd0)
nyu-dl/dl4ir-doc2query
ricsinaruto/dialog-eval
Evaluate your dialog model with 17 metrics! (see paper)
ag1988/injecting_numeracy
The accompanying code for "Injecting Numerical Reasoning Skills into Language Models" (Mor Geva*, Ankit Gupta* and Jonathan Berant, ACL 2020).
jingtaozhan/RepBERT-Index
RepBERT is a competitive first-stage retrieval technique. It represents documents and queries with fixed-length contextualized embeddings, and their inner products are used as relevance scores. Its efficiency is comparable to that of bag-of-words methods.
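The description above already spells out the scoring scheme, so a minimal sketch of that inner-product ranking step may be useful; note this is not RepBERT's actual API, and the random vectors below are stand-ins (assumptions) for the BERT-derived query/document embeddings:

```python
import numpy as np

# Sketch of RepBERT-style first-stage retrieval scoring.
# In the real system, query_emb and doc_embs would be fixed-length
# contextualized embeddings produced by a BERT encoder; here we use
# random vectors purely to illustrate the ranking arithmetic.
rng = np.random.default_rng(0)
query_emb = rng.standard_normal(768)          # one encoded query
doc_embs = rng.standard_normal((1000, 768))   # a corpus of encoded documents

# Relevance scores are plain inner products between query and document vectors.
scores = doc_embs @ query_emb

# Rank documents by score and keep the top k.
top_k = 10
top_ids = np.argsort(-scores)[:top_k]
print(top_ids, scores[top_ids])
```

Because scoring reduces to a matrix-vector product, the ranking step can be served with standard dense-vector indexes, which is why the efficiency is comparable to bag-of-words methods.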
m3yrin/naqanet_notebook
Testing NAQANet
hfef7ui2/final_year_project_kgCVAE