FrankZhao1999's Stars
fengdu78/Coursera-ML-AndrewNg-Notes
Personal notes on Andrew Ng's machine learning course
DS-100/sp22
Public-facing repo for the Spring 2022 offering
asappresearch/structshot
Simple and Effective Few-Shot Named Entity Recognition with Structured Nearest Neighbor Learning
thunlp/Few-NERD
Code and data of ACL 2021 paper "Few-NERD: A Few-shot Named Entity Recognition Dataset"
ShuheWang1998/GPT-NER
THUDM/ChatGLM-6B
ChatGLM-6B: An Open Bilingual Dialogue Language Model
ishan0102/vimGPT
Browse the web with GPT-4V and Vimium
open-mmlab/mmpose
OpenMMLab Pose Estimation Toolbox and Benchmark.
suhejian/CADEC-data-process
Processing scripts for the CADEC dataset
solkx/TOE
cslydia/Hire-NER
Codes for the paper Hierarchical Contextualized Representation for Named Entity Recognition
spyysalo/jnlpba
Tools and resources related to the JNLPBA corpus
dainlp/acl2020-transition-discontinuous-ner
google-research/tuning_playbook
A playbook for systematically maximizing the performance of deep learning models.
zqtan1024/sequence-to-set
fastnlp/TENER
Codes for "TENER: Adapting Transformer Encoder for Named Entity Recognition"
yahshibu/nested-ner-tacl2020-transformers
Implementation of Nested Named Entity Recognition using BERT
yhcc/BARTNER
ljynlp/W2NER
Source code for AAAI 2022 paper: Unified Named Entity Recognition as Word-Word Relation Classification
Wensi-Tang/OS-CNN
ShannonAI/mrc-for-flat-nested-ner
Code for ACL 2020 paper "A Unified MRC Framework for Named Entity Recognition"
open-mmlab/mmdetection
OpenMMLab Detection Toolbox and Benchmark
gaomingqi/Track-Anything
Track-Anything is a flexible and interactive tool for video object tracking and segmentation, based on Segment Anything, XMem, and E2FGVI.
xtekky/gpt4free
The official gpt4free repository | a collection of powerful language models
formulahendry/955.WLB
A list of companies with a 955 schedule and no overtime: work 9 to 5, 5 days a week (work-life balance)
thuml/Autoformer
Code release for "Autoformer: Decomposition Transformers with Auto-Correlation for Long-Term Series Forecasting" (NeurIPS 2021), https://arxiv.org/abs/2106.13008
cure-lab/LTSF-Linear
[AAAI-23 Oral] Official implementation of the paper "Are Transformers Effective for Time Series Forecasting?"
IkeYang/machine-vision-assisted-deep-time-series-analysis-MV-DTSA-
yuqinie98/PatchTST
An official implementation of PatchTST: "A Time Series is Worth 64 Words: Long-term Forecasting with Transformers" (ICLR 2023), https://arxiv.org/abs/2211.14730
Significant-Gravitas/AutoGPT
AutoGPT is the vision of accessible AI for everyone, to use and to build on. Our mission is to provide the tools, so that you can focus on what matters.