paper-review

  1. Training language models to follow instructions with human feedback
  2. Zero-Resource Cross-Domain Named Entity Recognition
  3. Zero-Resource Cross-Lingual Named Entity Recognition
  4. ViT : An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale
  5. CAM : Learning Deep Features for Discriminative Localization
  6. MoE : Outrageously Large Neural Networks: The Sparsely-Gated Mixture-of-Experts Layer
  7. Swin Transformer : Swin Transformer: Hierarchical Vision Transformer using Shifted Windows