two dimensions
philipus opened this issue · 3 comments
May I also use the code for more than one dimension, e.g. 2D problems as in your paper?
I would map the 2D data into 1D. I guess from a technical point of view it is the same, right?
best
Post scriptum: does it also make sense when the series have property dimensions attached? I imagine you could compute distances in this multi-dimensional property space to obtain a reasonable adjacency matrix.
What do you think?
For 2D problems (like data scattered on a grid), the idea is indeed to "flatten" the data, and build an adjacency matrix that connects neighbors on the grid.
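A minimal sketch of that construction, assuming row-major flattening and 4-neighbor (up/down/left/right) connectivity; the grid_adjacency helper is illustrative, not part of the STNN code:

```python
import numpy as np

def grid_adjacency(h, w):
    """Adjacency matrix for an h x w grid flattened row-major:
    cell (i, j) becomes index i * w + j, and each cell is linked
    to its 4 horizontal/vertical neighbors."""
    n = h * w
    adj = np.zeros((n, n), dtype=np.float32)
    for i in range(h):
        for j in range(w):
            idx = i * w + j
            for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                ni, nj = i + di, j + dj
                if 0 <= ni < h and 0 <= nj < w:
                    adj[idx, ni * w + nj] = 1.0
    return adj

# data of shape (T, h, w) is then reshaped to (T, h * w) to match adj
adj = grid_adjacency(3, 4)
```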
I am sorry but I didn't understand what you mean by "property dimension". Could you please elaborate?
Sorry for the late answer.
When two regions are spatially close, you can set a 1 in the relation table for their two time series. But sometimes you have no spatial information, only other information such as properties of a metal or of a device, which are not identical but close to each other. Mathematically speaking, given a property space of N entities (devices, metals, ...), you could apply some sort of clustering algorithm to measure how close the entities are to each other, and use that information in the adjacency/relation matrix.
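One way to sketch this idea (an assumption on my side, not from the paper): compute pairwise distances in property space, turn them into similarities with a Gaussian kernel, and threshold to get a binary adjacency matrix. The sigma and threshold values here are arbitrary illustrative knobs:

```python
import numpy as np

def property_adjacency(props, sigma=1.0, threshold=0.5):
    """Binary adjacency from an (n_entities, n_props) feature matrix:
    Gaussian kernel on pairwise Euclidean distances, thresholded so
    only entities that are close in property space are connected."""
    dists = np.linalg.norm(props[:, None, :] - props[None, :, :], axis=-1)
    sim = np.exp(-dists ** 2 / (2 * sigma ** 2))
    adj = (sim >= threshold).astype(np.float32)
    np.fill_diagonal(adj, 0.0)  # no self-loops
    return adj
```

Any clustering or k-nearest-neighbor scheme in property space would serve the same purpose; the kernel-plus-threshold version is just the shortest to write down.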
STNN-D does not need such a matrix, right? Because it would discover the relations by itself. I tried STNN, STNN-R, and STNN-D with some non-physical data. When my STNN-D run yields a completely random picture of relations, does that mean the relations are purely spatial and the series do not affect each other?
Indeed, STNN-D does not need any adjacency matrix, as it tries to discover it. If STNN-D puts a non-zero weight on a relation in the adjacency matrix, it will use it in the prediction function. If you think the learned relations are random, you can try to increase the L1 regularization on relation discovery (--l1_rel) to force the network to learn sparse relations.