Graph Learning Meetings

4/15/2022, lead: Chunxing Yin (Georgia Tech)

Agenda

4/1/2022, lead: Alok

Agenda

3/4/2022, lead: Yu-Hang

Agenda

  • Model compression for neural networks: Tensorizing Neural Networks.
  • Applications in DLRM, language models (?), and edge computing
  • Implications for parallelism, since tensorization increases the effective depth of the network
  • Tensor-train times dense matrix multiplication as a computational primitive?
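As a concrete reference point for the last two items, here is a minimal sketch of a TT-matrix (the format used in Tensorizing Neural Networks) applied to a dense vector without materializing the full weight matrix. All shapes, rank, and variable names below are illustrative assumptions, not taken from the paper's experiments.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical shapes: W is (64, 64), factored as m = 8*8, n = 8*8, TT-rank 4.
m1, m2, n1, n2, r = 8, 8, 8, 8, 4

# TT-matrix cores G1[1, m1, n1, r] and G2[r, m2, n2, 1] together encode
# W[(i1,i2),(j1,j2)] = sum_a G1[0,i1,j1,a] * G2[a,i2,j2,0].
G1 = rng.standard_normal((1, m1, n1, r))
G2 = rng.standard_normal((r, m2, n2, 1))

def tt_matvec(G1, G2, x):
    """Apply the TT-matrix to a dense vector without forming W."""
    X = x.reshape(n1, n2)
    # Contract the input against both cores; size-1 boundary ranks sum out.
    Y = np.einsum('aijr,rklb,jl->ik', G1, G2, X)
    return Y.reshape(m1 * m2)

# Reference check: reconstruct the full dense W and compare.
W = np.einsum('aijr,rklb->ikjl', G1, G2).reshape(m1 * m2, n1 * n2)
x = rng.standard_normal(n1 * n2)
assert np.allclose(tt_matvec(G1, G2, x), W @ x)

# Compression: 512 parameters in the cores vs. 4096 in the dense matrix.
print(G1.size + G2.size, W.size)
```

The compression ratio grows with the number of factors, which is also where the depth/parallelism question above comes from: each extra core adds another sequential contraction.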

2/18/2022, lead: Vivek and Aydin

Agenda

2/4/2022, lead: all

Agenda

  • Organizational meeting

12/2/2021, lead: Andrew

Agenda

10/29/2021, lead: Yu-Hang

Agenda

8/20/2021, lead: Nick S.

Agenda

8/6/2021, lead: Koby

Agenda

7/16/2021, lead: Aydin

Agenda

7/9/2021, lead: Aditi and Nick

Agenda

  • Ensemble learning

5/7/2021, lead: Yu-Hang

Agenda

4/30/2021, lead: Aydin

Agenda

4/16/2021, lead: Prashant

Agenda

3/12/2021, lead: Aditi

Agenda

2/26/2021, lead: Nick B.

Agenda

2/12/2021, lead: Nick S.

Agenda

1/29/2021, lead: none

Agenda

Minutes

  • Solving a series of successively harder DP problems with GNNs.
    • Needleman-Wunsch on pairs of reads
    • Smith-Waterman on pairs of reads
    • Sequence to graph alignment (potentially useful for pangenomes)
    • Many-to-many sequence alignment
    • Assembly on error-free reads
    • Assembly on erroneous reads
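For reference, the first problem in the list above is the standard quadratic-time alignment DP. A minimal score-only sketch (Python; a GNN approach would learn to imitate this recurrence rather than execute it):

```python
def nw_score(a, b, match=1, mismatch=-1, gap=-1):
    """Needleman-Wunsch global alignment score via the classic O(|a||b|) DP."""
    prev = [j * gap for j in range(len(b) + 1)]  # row 0: all-gap prefixes
    for i, ca in enumerate(a, 1):
        cur = [i * gap]  # column 0: all-gap prefix of a
        for j, cb in enumerate(b, 1):
            diag = prev[j - 1] + (match if ca == cb else mismatch)
            cur.append(max(diag, prev[j] + gap, cur[j - 1] + gap))
        prev = cur
    return prev[-1]

# Classic textbook pair with unit match/mismatch/gap scores.
print(nw_score("GATTACA", "GCATGCU"))  # -> 0
```

Smith-Waterman (the second item) differs only in clamping each cell at zero and taking the table maximum, which is why the two make natural consecutive steps in the curriculum.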

1/15/2021, lead: none

Agenda

Minutes

  • Discussion on GNNs vs CNNs, Transformers vs GNNs, and whether we need any inductive bias.
  • Discussion on whether the test cases in the Correct&Smooth paper are too simple.
  • Discussion on whether the proposed C&S model is any easier to tune and/or run compared to GNNs.
  • We also talked about issues with the authors' understanding of the topic in the Reddit post.
  • Paper for potential future reading: https://arxiv.org/pdf/1806.01261.pdf
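On the tuning question above: the propagation at the heart of Correct&Smooth is plain label/residual smoothing over the normalized adjacency, with essentially one knob. A minimal sketch of that smoothing step (the graph, base predictions, and alpha below are made-up illustrations; the "correct" step runs the same iteration on residual errors):

```python
import numpy as np

# Toy 4-node undirected graph (hypothetical); A is a symmetric adjacency.
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
d = A.sum(axis=1)
S = A / np.sqrt(np.outer(d, d))  # symmetric normalization D^{-1/2} A D^{-1/2}

def smooth(Z0, S, alpha=0.8, iters=50):
    """Fixed-point iteration Z <- (1-alpha)*Z0 + alpha*S@Z (label propagation)."""
    Z = Z0.copy()
    for _ in range(iters):
        Z = (1 - alpha) * Z0 + alpha * (S @ Z)
    return Z

# Base predictions over two classes from some cheap model (e.g., an MLP).
Y = np.array([[0.9, 0.1],
              [0.6, 0.4],
              [0.5, 0.5],
              [0.2, 0.8]])
Z = smooth(Y, S)  # neighbors pull each node's scores toward local consensus
```

Compared to a GNN there is nothing to backpropagate through the graph, which is the basis of the "easier to tune and run" claim debated above.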