CGCL-codes/naturalcc

Questions about the paper "A Structural Analysis of Pre-Trained Language Models for Source Code"

skye95git opened this issue · 8 comments

  1. The high variability would suggest a content-dependent head, while low variability would indicate a content-independent head.

Figure 7: Visualization of attention heads in CodeBERT, along with the value of attention analysis p_α(f) and attention variability, given a Python code snippet.

What do "high" and "low" mean here, and what exactly is attention variability?

  1. What are the inputs and outputs of models in Syntax Tree Induction?
  2. Why is it content-dependent?


Thanks for raising these questions!
First, the variability is the attention variability: a high value indicates a content-dependent head, and a low value indicates a content-independent head.
Second, our input is the pruned AST and the code snippet, i.e. with no symbols (for example: https://drive.google.com/file/d/1FMgABZMACAv8OjU7wcMliMqc9m3_APQ1/view?usp=sharing).
Third, it is high variability, and we consider that the attention distribution at this head does not depend on the token's location.
Hope this helps!
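
To make this concrete, here is a rough sketch of how such a per-head variability score can be computed. It is not the paper's exact implementation of Formula 5; it assumes variability is the normalized mean absolute deviation of a head's attention weights from their average over a batch of inputs, and `attn_maps` is a hypothetical stacked attention tensor.

```python
import numpy as np

def attention_variability(attn_maps: np.ndarray) -> float:
    """Rough per-head variability over a batch of inputs.

    attn_maps: array of shape (num_examples, seq_len, seq_len) where
    attn_maps[x, i, j] is the attention token i pays to token j in
    example x (each row sums to 1).

    Assumption: variability is the mean absolute deviation of the
    attention weights from their average over examples, normalized to
    lie roughly in [0, 1]. A purely positional head produces the same
    map for every input and scores ~0 (content-independent); a head
    whose pattern changes with the tokens scores high (content-dependent).
    """
    mean_map = attn_maps.mean(axis=0)                  # average pattern across examples
    deviation = np.abs(attn_maps - mean_map).sum()     # total deviation from that average
    return float(deviation / (2.0 * attn_maps.sum()))  # normalize by total attention mass


# Toy check with a hypothetical head over 3 snippets of 4 tokens each:
positional_head = np.stack([np.eye(4)] * 3)            # identical "attend to self" map every time
rng = np.random.default_rng(0)
content_head = rng.dirichlet(np.ones(4), size=(3, 4))  # pattern differs per snippet
print(attention_variability(positional_head))          # ~0.0 -> content-independent
print(attention_variability(content_head))             # clearly > 0 -> content-dependent
```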


Thanks for your reply!

  1. So the attention variability is calculated according to Formula 5, right? If the calculated value is high, the head is considered content-dependent; otherwise, it is considered content-independent.
  2. Does it also reflect that different heads are paying attention to different information?

If the input is the pruned AST and the code snippet, what are the outputs of the models in Syntax Tree Induction?
Is the output whether there is an edge between two nodes?

Actually, the pruned AST structure is the gold standard; we use the method in our paper to induce a binary tree and then compute the similarity between the two trees.
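
To illustrate that last step, a common way to score an induced tree against the gold tree is unlabeled span F1: collect the token spans covered by the internal nodes of each tree and compare the two sets. The sketch below is a hypothetical minimal version of such a comparison (trees as nested lists of token indices), not the evaluation code used in the paper.

```python
def leaves(tree):
    """Token indices covered by a nested-list tree (a leaf is an int index)."""
    if isinstance(tree, int):
        return [tree]
    out = []
    for child in tree:
        out.extend(leaves(child))
    return out


def spans(tree):
    """(start, end) spans of every internal node, i.e. every multi-token constituent."""
    if isinstance(tree, int):
        return set()
    toks = leaves(tree)
    result = {(min(toks), max(toks))}
    for child in tree:
        result |= spans(child)
    return result


def span_f1(induced, gold):
    """Unlabeled F1 between the span sets of an induced tree and the gold tree."""
    pred, ref = spans(induced), spans(gold)
    overlap = len(pred & ref)
    if overlap == 0:
        return 0.0
    precision, recall = overlap / len(pred), overlap / len(ref)
    return 2 * precision * recall / (precision + recall)


# Toy example over 4 tokens: the two trees share the spans (0, 3) and (2, 3).
gold_tree = [[0, 1], [2, 3]]        # e.g. a pruned AST, binarized
induced_tree = [0, [1, [2, 3]]]     # what the induction procedure produced
print(span_f1(induced_tree, gold_tree))  # 2/3 precision and recall -> ~0.67
```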


Thanks for your reply! I didn't understand what "induce" means. Does inducing a binary tree mean generating an AST from scratch, or only predicting edges?


Yes, inducing a binary tree means generating a tree from scratch.
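
To make "generating a tree from scratch" concrete, below is a hedged sketch of the usual top-down induction recipe: given a score for how strongly adjacent tokens are separated (derived, for example, from attention or representation distances), recursively split the sequence at the largest distance to build a binary tree. The `split_scores` input and the greedy strategy are illustrative assumptions, not necessarily the exact procedure from the paper; the induced tree can then be scored against the gold pruned AST with a span comparison like the one sketched earlier.

```python
def induce_tree(tokens, split_scores):
    """Greedy top-down induction of a binary tree (nested lists of token indices).

    tokens: list of token indices, e.g. [0, 1, 2, 3].
    split_scores: split_scores[k] is a hypothetical 'syntactic distance'
    between tokens[k] and tokens[k + 1]; the sequence is split first where
    that distance is largest, then each half is split recursively.
    """
    if len(tokens) == 1:
        return tokens[0]
    k = max(range(len(tokens) - 1), key=lambda i: split_scores[i])  # strongest boundary
    left = induce_tree(tokens[: k + 1], split_scores[:k])
    right = induce_tree(tokens[k + 1 :], split_scores[k + 1 :])
    return [left, right]


# Toy example: 4 tokens with made-up distances between neighbours.
toy_tokens = [0, 1, 2, 3]
toy_distances = [0.9, 0.2, 0.5]                 # biggest break between token 0 and token 1
print(induce_tree(toy_tokens, toy_distances))   # -> [0, [[1, 2], 3]]
```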

Thanks. There are two final questions:

  1. So the attention variability is calculated according to Formula 5, right? If the calculated value is high, the head is considered content-dependent; otherwise, it is considered content-independent.
  2. Does it also reflect that different heads are paying attention to different information?

Yes, your understanding is right.