Language modeling inference complexity and training objective?
Dear Shaojie, I am trying to understand the time complexity of generating a sentence from the DEQ model, and the complexity of computing the perplexity of a sentence.
- The DEQ model takes a sequence of observations and generates a sequence of latent states of equal size. If I want to generate a sequence of length T, then I will need to first generate a sequence of length 1, then length 2 and so on till length T. So the generation time will be quadratic in the length of the sentence. Is this correct?
- During training, we can generate a complete sequence of latent states for the full input sequence in one shot. But then the i-th hidden state z_i can depend on x_{i+1}. Does that mean that even at training time, we need to split a sentence into all of its prefixes? Also, what is the training objective? Is it the log-likelihood of predicting x_{i+1} given z_i through a linear+softmax layer? The paper does not go into these details, and I couldn't easily figure it out from the code.
Hi @se4u,
Thanks for your interest in our repo! Regarding your questions:
- You do need to generate the words one by one (as in the case of any sequence model). However, you shouldn't need to re-compute the already generated tokens. For instance, to generate the sequence [x1, x2, x3, x4] with a normal Transformer, you need to pass x1, (x1,x2), (x1,x2,x3) to the model, which could be quadratic due to re-computations. However, for DEQ, as the output is an equilibrium point, you can keep whatever has been generated as "history padding". You thus only need to solve for the equilibrium of one word, not 4 words. Here is how this padding (or what I call "memory") is appended in code: https://github.com/locuslab/deq/blob/master/DEQModel/models/transformers/deq_transformer.py#L134. You can also see what I mean in Figure 1(b) of our paper (the "history padding" part).
Basically, given a starting word (x0*=)x1, you first generate its equilibrium x1*=x2. Then, with x0*=x1 as padding and x1*=x2 as input, you generate x2*=x3. After that, (x0*,x1*)=(x1,x2) will be the padding, x2*=x3 will be the input, and you generate x3*=x4, and so on (a rough sketch of this loop is included after the list below).
- The paper provides experiments on language modeling, which is causal (cf. the 1st paragraph of Sec. 2). In this task, z_i cannot depend on x_{i+1}. However, DEQ can also be applied in cases where z_i does depend on x_{i+1} (e.g., machine translation encoder); the causality structure shouldn't matter. The training objective is exactly the same as any deep architecture; we used the cross-entropy loss (so linear + softmax + NLL loss) in training: https://github.com/locuslab/deq/blob/master/DEQModel/models/transformers/deq_transformer.py#L408
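To make these two points concrete, here is a minimal Python sketch of the generation loop, assuming hypothetical helpers `solve_equilibrium` (which solves the DEQ for the newest position only, with the cached states as fixed padding) and `predict_next_token` (the linear + softmax head); these names are illustrative and not the repo's actual API.

```python
def generate(solve_equilibrium, predict_next_token, first_token, max_len):
    """Sketch of autoregressive DEQ generation with "history padding".

    solve_equilibrium(token, history) -> equilibrium state of the newest
                                         position only, with `history` as the
                                         fixed padding (hypothetical helper)
    predict_next_token(z_state)       -> next token via linear + softmax
                                         (hypothetical helper)
    Already-solved equilibrium states are cached and never re-solved.
    """
    tokens = [first_token]
    z_cache = []  # z1*, z2*, ... held fixed as "memory"
    for _ in range(max_len - 1):
        z_new = solve_equilibrium(tokens[-1], z_cache)  # only the newest word
        z_cache.append(z_new)
        tokens.append(predict_next_token(z_new))
    return tokens
```

And a rough sketch of the training objective (a linear output head over the equilibrium states followed by the standard next-token cross-entropy, i.e. log-softmax + NLL); again a generic sketch rather than the repo's exact code:

```python
import torch.nn.functional as F

def lm_loss(z_star, next_tokens, out_proj):
    """Next-token language modeling loss on top of the equilibrium states.

    z_star:      (batch, seq_len, d_model) equilibrium states
    next_tokens: (batch, seq_len) indices of x_{i+1} for each position i
    out_proj:    a torch.nn.Linear(d_model, vocab_size) output head
    """
    logits = out_proj(z_star)                        # (batch, seq_len, vocab)
    return F.cross_entropy(logits.reshape(-1, logits.size(-1)),
                           next_tokens.reshape(-1))
```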
Let me know if these answered your questions :-)
> for DEQ, as the output is an equilibrium point, you can keep whatever has been generated as "history padding". You thus only need to solve for the equilibrium of one word, not 4 words.
My apologies, but I am still a little confused; consider the equations for the trellis network, i.e. solving for the sequence-level equilibrium z*_{1:t} = f_θ(z*_{1:t}; x_{1:t}). This suggests that at time t we will have to recompute the equilibrium for all of the hidden states up to time t, i.e.:
- at t=1 we give a dummy token (x₀) as input, compute z*₁, and generate x₁ (greedily using the softmax)
- at t=2 we give (x₀, x₁) as input, recompute the equilibrium for position 1 and compute a new z*₂ -- because there is no guarantee that the new equilibrium point for that hidden state will be the same -- and then compute x₂ using z*₂. The recomputed position-1 state is discarded.

So at time t we will need to recompute the hidden-state sequence up to time t-1. Is this correct? Or did you mean that at time t=2 you will re-use the z*₁ generated at time t=1? This seems wrong to me, because the z*₁ generated at time t=1 need not be the same as the equilibrium for position 1 found at time t=2; the two equilibrium sequences may be arbitrarily different.
Thank you again for taking the time to answer these questions.
No, that is incorrect. In sentence generation, at t=2 you only need to compute z2*, while holding z1* fixed. You shouldn't need to find the equilibrium point of t=1 again.
The way to think about this is to analyze it in the Transformer's QKV form: only t=2 exists in the query (Q); z1* will be directly appended to the key/value (K/V) part.
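For concreteness, here is a generic PyTorch sketch of that QKV picture (not the repo's actual attention layer): the query is built only from the new position, and the cached equilibrium state(s) are concatenated into the keys/values.

```python
import torch
import torch.nn.functional as F

def attend_new_position(z_new, z_cache, w_q, w_k, w_v):
    """Single-head attention step for the newest position only.

    z_new:   (batch, 1, d)    state of the current position (e.g. t=2)
    z_cache: (batch, t-1, d)  frozen equilibrium states z1*, ... ("history")
    w_q, w_k, w_v: torch.nn.Linear(d, d) projections
    Only z_new enters the query; the cache is appended to the keys/values,
    so the equilibria of earlier positions are reused, never re-solved.
    """
    q = w_q(z_new)                                           # (batch, 1, d)
    kv = torch.cat([z_cache, z_new], dim=1)                  # (batch, t, d)
    k, v = w_k(kv), w_v(kv)
    attn = F.softmax(q @ k.transpose(-2, -1) / k.size(-1) ** 0.5, dim=-1)
    return attn @ v                                          # (batch, 1, d)
```

Because the query only covers the last position, no causal mask is needed; causality is enforced by what is (and is not) in the key/value cache.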
> No, that is incorrect. In sentence generation, at t=2 you only need to compute z2*, while holding z1* fixed. You shouldn't need to find the equilibrium point of t=1 again.
Got it 👍 Thanks a lot for clarifying this point.
However, I just wanted to note -- and I think you'll agree -- that the equilibrium states found this way will be different from the equilibrium states found by the procedure that I outlined above. I.e., we could have a different procedure, say DEQ', which takes a sequence of tokens and produces a same-size sequence of hidden states in one shot.
It's hard to say; what I observed is that as long as the model is causal (e.g., a decoder Transformer or a temporal ConvNet), the equilibrium states obtained at the sequence level and at the (one-by-one) token level are usually the same. It's hard to show that the equilibrium is unique, but empirically this is usually what I observed to be true. It may be easier to see this if we consider DEQ not as a Broyden unrolling but as an infinite-depth network.
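A toy numerical illustration of this point (NumPy, with a made-up causal contraction update rather than the model from the paper): solving the equilibrium for the whole sequence at once and solving it token by token, holding earlier positions fixed, land on the same fixed point.

```python
import numpy as np

A, B = 0.9, 0.7   # toy coefficients; 0.5*(A + B) < 1 keeps the update a contraction

def f(z, x):
    """Causal toy update: position i depends only on z_i, z_{i-1}, and x_i."""
    z_prev = np.concatenate(([0.0], z[:-1]))       # z_{i-1}, with z_0 := 0
    return 0.5 * np.tanh(A * z + B * z_prev + x)

x = np.random.default_rng(0).normal(size=6)

# (1) Sequence-level equilibrium: iterate the whole sequence to its fixed point.
z_seq = np.zeros_like(x)
for _ in range(200):
    z_seq = f(z_seq, x)

# (2) Token-level equilibrium: solve one position at a time, holding the
#     already-solved positions fixed (the "history padding" view).
z_tok = np.zeros_like(x)
for i in range(len(x)):
    z_prev = z_tok[i - 1] if i > 0 else 0.0
    for _ in range(200):
        z_tok[i] = 0.5 * np.tanh(A * z_tok[i] + B * z_prev + x[i])

print(np.abs(z_seq - z_tok).max())   # ~0: both procedures reach the same equilibrium
```

With a non-causal update (e.g., if position i also depended on z_{i+1}), the token-by-token pass would in general no longer reproduce the sequence-level solution.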
> as long as the model is causal
Ah, got it. I think this was the key part: if the whole model is causal, then z_i will never use information from x_{j > i}.