TV with learnable D
Thanks for the amazing work!
I was wondering if you have also tried an implementation where D can be fed into the TV layer as a parameter, so that gradients are computed for the corresponding D matrix. Specifically, D could be parameterized by a 1xN vector, or by a 2xNxC tensor (N = kernel length, say 2 or 3; C = number of channels), which would be fed in and used for gradient computation.
And if I were to implement that on top of your current implementation, could you point me toward how to go about it? Thanks!
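For concreteness, here is a minimal sketch of the kind of parameterization being asked about, assuming a PyTorch setting. The class and variable names are hypothetical, and the penalty uses a smoothed absolute value so that autograd supplies gradients for the kernel; it is not the repository's prox-based TV solver.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LearnableDPenalty(nn.Module):
    """Hypothetical sketch: a TV-style penalty whose difference operator D is
    defined by a learnable 1xN kernel shared across channels. A smoothed |.|
    (sqrt(t^2 + eps)) is used so autograd provides gradients w.r.t. the kernel."""

    def __init__(self, kernel_len: int = 2, eps: float = 1e-6):
        super().__init__()
        # Initialize to the standard finite-difference stencil [-1, 1, 0, ...].
        init = torch.zeros(kernel_len)
        init[0], init[1] = -1.0, 1.0
        self.kernel = nn.Parameter(init)
        self.eps = eps

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, length); apply D as a depthwise 1D convolution.
        b, c, n = x.shape
        weight = self.kernel.view(1, 1, -1).repeat(c, 1, 1)  # (c, 1, kernel_len)
        dx = F.conv1d(x, weight, groups=c)                   # (b, c, n - kernel_len + 1)
        return torch.sqrt(dx ** 2 + self.eps).sum()          # smoothed ||Dx||_1

# Usage: the penalty is differentiable in both the input and the kernel defining D.
x = torch.randn(4, 3, 32, requires_grad=True)
penalty = LearnableDPenalty(kernel_len=2)
penalty(x).backward()
print(penalty.kernel.grad)  # gradient w.r.t. the learnable D kernel
```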
Hi @swami1995,
We have not tried learning the difference matrix D. The TV solver takes advantage of the specific structure of D for its speedup.
If D were learned, the problem would no longer be a TV problem, so one would have to use a different solver.
Best,
Raymond
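For illustration only (not part of the original thread): one generic way to obtain gradients with respect to D is to differentiate through a solver that makes no structural assumptions about D, for example a differentiable convex-optimization layer or, as sketched below, plain unrolled gradient descent on a smoothed objective. Function names and hyperparameters here are hypothetical.

```python
import torch
import torch.nn.functional as F

def generalized_tv_denoise(y, kernel, lam=0.1, eps=1e-6, n_iters=100, lr=0.05):
    """Hypothetical sketch: approximately solve
        min_x 0.5*||x - y||^2 + lam * sum sqrt((D x)^2 + eps)
    by unrolled gradient descent, where D acts as a 1D convolution with `kernel`.
    Every step is a differentiable torch op, so autograd yields d(x*)/d(kernel)."""
    x = y.clone().requires_grad_(True)
    for _ in range(n_iters):
        dx = F.conv1d(x, kernel.view(1, 1, -1))
        obj = 0.5 * ((x - y) ** 2).sum() + lam * torch.sqrt(dx ** 2 + eps).sum()
        # create_graph=True keeps each update differentiable w.r.t. `kernel`.
        (grad,) = torch.autograd.grad(obj, x, create_graph=True)
        x = x - lr * grad
    return x

# y: (batch, 1, length) signal; kernel: learnable difference stencil defining D.
y = torch.randn(1, 1, 64)
kernel = torch.tensor([-1.0, 1.0], requires_grad=True)
x_star = generalized_tv_denoise(y, kernel)
x_star.sum().backward()  # any downstream loss would work here
print(kernel.grad)       # gradient reaching the learnable D kernel
```

Unrolling is the simplest generic route; implicit differentiation through the solution's optimality conditions, or a differentiable convex-optimization layer, would be alternatives that avoid backpropagating through every iteration, at the cost of more setup.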
This issue is stale because it has been open for 14 days with no activity.
This issue was closed because it has been inactive for 7 days since being marked as stale.