christian-clark's Stars
lifengjin/db-pcfg
Depth-Bounded PCFG Induction
nabenabe0928/tpe
An implementation of the tree-structured Parzen estimator (TPE), with simple example code for running it
Andrea-de-Varda/local_attention_reading_times
zhaojinm/Probabilistic_Lambek_Categorial_Sequent
dcavar/q
Quantum Computing and Algorithms
albietz/transformer-birth
byungdoh/slm_surprisal
LM training using GPT-NeoX ("Transformer-Based Language Model Surprisal Predicts Human Reading Times Best with About Two Billion Training Tokens," Findings of EMNLP23)
Ryosuke-Yamaki/Hol-CCG
Holographic CCG Parsing.
stanfordnlp/dspy
DSPy: The framework for programming—not prompting—foundation models
jramapuram/SimulatedAnnealing
PyTorch optimizer for simulated annealing
Jamie-Stirling/RetNet
An implementation of "Retentive Network: A Successor to Transformer for Large Language Models"
amrisi/amr-guidelines
goodmami/penman
PENMAN notation (e.g. AMR) in Python
nschneid/amr-hackathon
Abstract Meaning Representation (AMR) Hackathon
acl-org/aclpubcheck
Tools for checking ACL paper submissions
karpathy/arxiv-sanity-lite
arxiv-sanity lite: tag arXiv papers of interest and get recommendations of similar papers in a nice UI, using SVMs over tf-idf feature vectors based on paper abstracts
acheong08/ChatGPT
Reverse-engineered ChatGPT API
riffusion/riffusion-app-hobby
Stable diffusion for real-time music generation (web app)
sustcsonglin/TN-PCFG
Source code of NAACL 2021 "PCFGs Can Do Better: Inducing Probabilistic Context-Free Grammars with Many Symbols" and ACL 2021 main conference "Neural Bilexicalized PCFG Induction"
christos-c/tree-viewer
A d3.js syntax tree viewer
rusty1s/pytorch_sparse
PyTorch Extension Library of Optimized Autograd Sparse Matrix Operations
jmichaelov/PsychFormers
Tools for calculating psycholinguistically-relevant metrics of language statistics using transformer language models
rain1024/slp2-pdf
Speech and Language Processing, 2nd Edition in PDF format
festvox/flite
A small, fast, portable speech synthesis system
lifengjin/charInduction
inverse-scaling/prize
A prize for finding tasks that cause large language models to show inverse scaling
mohsenfayyaz/GlobEnc
[NAACL 2022] GlobEnc: Quantifying Global Token Attribution by Incorporating the Whole Encoder Layer in Transformers
thomaslu2000/Incremental-Parsing-Representations
facebookresearch/metaseq
Repo for external large-scale work
antonisa/lang2vec
A simple library for querying the URIEL typological database.