Pinned Repositories
Active-NER
Bayesian Deep Active Learning for Named Entity Recognition (NER)
Active-NLP
Bayesian Deep Active Learning for Natural Language Processing Tasks
Attention-PyTorch
Practical implementations of attention mechanisms
Auto_KVC
awesome-multimodal-ml
Reading list for research topics in multimodal machine learning
dynamic_social_networks
CSC 555 Social Computing - Project | Social network analysis traditionally uses unweighted graphs, where the only information between peers is the existence of a link. In this work, we construct a weighted graph from user interactions, where weights represent the degree of recent, frequent interaction. We reward a link for each interaction between its users and decay those rewards over time, treating recent interactions as evidence of a stronger bond and assigning weights accordingly. Finally, we tune the reward and decay values against power laws exhibited by real-world weighted networks (a minimal sketch of this scheme appears after this list).
ExHalder
Module-based-few-shot-event-extraction
PRBoost
The code for our ACL'22 paper PRBoost: Prompt-Based Rule Discovery and Boosting for Interactive Weakly-Supervised Learning.
SeqMix
The repository for our EMNLP'20 paper SeqMix: Augmenting Active Sequence Labeling via Sequence Mixup.
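The reward-and-decay edge weighting described in the dynamic_social_networks entry above can be summarized in a small sketch. This is a minimal illustration under assumed parameters: the reward value, the exponential decay rate, and all names (InteractionGraph, record_interaction, link_weight) are hypothetical and not taken from the project.

import math
from collections import defaultdict

# Assumed placeholder parameters; the project tunes values like these against
# power laws observed in real-world weighted networks.
REWARD = 1.0       # weight added to a link per interaction
DECAY_RATE = 0.1   # exponential decay per unit of time

class InteractionGraph:
    def __init__(self):
        # maps an undirected link {u, v} to (current weight, time of last update)
        self.edges = defaultdict(lambda: (0.0, 0.0))

    def record_interaction(self, u, v, t):
        # Decay the stored weight forward to time t, then reward the link.
        key = frozenset((u, v))
        weight, last_t = self.edges[key]
        weight = weight * math.exp(-DECAY_RATE * (t - last_t)) + REWARD
        self.edges[key] = (weight, t)

    def link_weight(self, u, v, t):
        # Current weight of the {u, v} link, decayed to time t.
        weight, last_t = self.edges[frozenset((u, v))]
        return weight * math.exp(-DECAY_RATE * (t - last_t))

Under this scheme, recent and frequent interactions keep a link's weight high, while links with no new interactions fade toward zero, matching the "stronger bond" intuition in the project description.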
rz-zhang's Repositories
rz-zhang/SeqMix
The repository for our EMNLP'20 paper SeqMix: Augmenting Active Sequence Labeling via Sequence Mixup.
rz-zhang/PRBoost
The code for our ACL'22 paper PRBoost: Prompt-Based Rule Discovery and Boosting for Interactive Weakly-Supervised Learning.
rz-zhang/Module-based-few-shot-event-extraction
rz-zhang/dynamic_social_networks
CSC 555 Social Computing - Project | Social network analysis traditionally uses unweighted graphs, where the only information between peers is the existence of a link. In this work, we construct a weighted graph from user interactions, where weights represent the degree of recent, frequent interaction. We reward a link for each interaction between its users and decay those rewards over time, treating recent interactions as evidence of a stronger bond and assigning weights accordingly. Finally, we tune the reward and decay values against power laws exhibited by real-world weighted networks.
rz-zhang/ExHalder
rz-zhang/Active-NER
Bayesian Deep Active Learning for Named Entity Recognition (NER)
rz-zhang/Active-NLP
Bayesian Deep Active Learning for Natural Language Processing Tasks
rz-zhang/Auto_KVC
rz-zhang/awesome-multimodal-ml
Reading list for research topics in multimodal machine learning
rz-zhang/baselines
OpenAI Baselines: high-quality implementations of reinforcement learning algorithms
rz-zhang/bert
TensorFlow code and pre-trained models for BERT
rz-zhang/BERT-BiLSTM-CRF-NER
TensorFlow solution for the NER task using a BiLSTM-CRF model with Google BERT fine-tuning and private server services
rz-zhang/BERT-NER
PyTorch Named Entity Recognition with BERT
rz-zhang/Bert_Attempt
rz-zhang/DORM
rz-zhang/hf-text-classification
rz-zhang/HMEAE
Source code for EMNLP-IJCNLP 2019 paper "HMEAE: Hierarchical Modular Event Argument Extraction".
rz-zhang/lstm-crf-pytorch
LSTM-CRF in PyTorch
rz-zhang/modAL
A modular active learning framework for Python
rz-zhang/Named-Entity-Recognition-BidirectionalLSTM-CNN-CoNLL
Keras implementation of "Few-shot Learning for Named Entity Recognition in Medical Text"
rz-zhang/Paper
rz-zhang/Reddit-Roles-Identification
Identify the roles of redditors who participated in political discussions on Reddit
rz-zhang/rz-zhang.github.io
rz-zhang/ScaleBiO
This is the official implementation of ScaleBiO: Scalable Bilevel Optimization for LLM Data Reweighting
rz-zhang/SLTK
Sequence labeling toolkit: a BLSTM-CNN-CRF model implemented in PyTorch, achieving an F1 of 91.10% on the CoNLL 2003 English NER test set (word and char features).
rz-zhang/Social_networks
Draw graphs of relationships between users based on recursive scraping of follower / following status.
rz-zhang/TextFooler
A Model for Natural Language Attack on Text Classification and Inference
rz-zhang/TFKD
rz-zhang/thd-llm
rz-zhang/transformers
🤗 Transformers: State-of-the-art Natural Language Processing for TensorFlow 2.0 and PyTorch.