google-deepmind/dm-haiku

Use of numpy for causal masking in transformer example

smonsays opened this issue · 2 comments

In the transformer example the causal mask is constructed using numpy instead of jax.numpy. I am assuming this is deliberate, but it is not obvious to me what the benefits of doing this are. Does using numpy keep the mask from materializing on the GPU? Maybe @aslanides has an explanation, I would be very curious.
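For concreteness, the two alternatives look roughly like this (a minimal sketch, not the exact code from the example; the shapes and dtype are illustrative):

```python
import numpy as np
import jax.numpy as jnp

seq_len = 4  # small sequence length, as in the example

# NumPy version (as used in the example): the mask is computed on the host
# and ends up as a concrete constant baked into the jitted program.
np_mask = np.tril(np.ones((seq_len, seq_len)))

# jax.numpy version: the mask is an expression that gets traced and
# handed to XLA, which can constant-fold it or fuse it as it sees fit.
jnp_mask = jnp.tril(jnp.ones((seq_len, seq_len)))

# Both produce the same lower-triangular causal mask.
assert (np_mask == np.asarray(jnp_mask)).all()
```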

Thanks in advance!

Hi @smonsays, thank you for the interesting question!

In general I think it would have been better to use jnp. My perspective is that we should pass the compiler as much information and structure as we reasonably can (e.g. rather than passing it a constant, we should pass it the expression that creates that constant).

That said, I ran a couple of benchmarks, and for the small sequence length chosen in this toy example np is actually more efficient. So np is the optimal choice for the example with XLA on GPU as implemented today.

https://colab.research.google.com/gist/tomhennigan/3b998bf8ab5c9badfc1281042f8296d0/use-of-np-in-causal-mask.ipynb

Looking at the optimized programs generated by XLA, it seems that when you use jnp, XLA decides to fuse the tril operation into a kernel that also applies the mask (rather than evaluating it as a constant at compile time and embedding that constant in the program). For larger sequence lengths I guess this extra compute is offset by not having to load the constant from HBM.
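If you want to inspect this yourself, recent JAX versions expose the optimized program through the AOT API (`lower`/`compile`); this is a sketch assuming that API is available, and the exact HLO text will depend on your backend and JAX version:

```python
import jax
import jax.numpy as jnp

seq_len = 8

@jax.jit
def masked(logits):
    # Build the mask with jnp so it appears as ops in the traced program.
    mask = jnp.tril(jnp.ones((seq_len, seq_len)))
    return jnp.where(mask, logits, -1e30)

logits = jnp.zeros((seq_len, seq_len))

# Lower, compile, and dump the HLO that XLA actually optimized, so you
# can see whether the mask was constant-folded or fused into a kernel.
hlo = masked.lower(logits).compile().as_text()
print(hlo)
```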

It's quite possible that for a full transformer model (my benchmark only looks at generating and applying the causal mask) the difference in execution time will be in the noise of the overall step.

Wow, thanks @tomhennigan for the comprehensive answer. It is very insightful to visualise the compiled XLA the way you did. I guess I'll trust the XLA compiler to be the smart one in the future.