NeuralODE with multiple features?
Closed this issue · 3 comments
Hi,
I am just getting my feet wet with this material so I apologize if this is a naive question.
The spiral problem you showed for NeuralODE
has two outputs, one input (time), and unknown matrix coefficients. How would one implement an ODE in the same fashion with multiple time-dependent features, in addition to time itself?
Hi, could you provide an example ODE you would like to implement?
So, the example we are dealing with has a lot of pieces.
I am working on a spacecraft project, and we need to predict the temperature of various components as a function of time under various forcings.
These include:
• solar heating, which depends on spacecraft attitude, position and distance from the sun
• passive cooling out to space
• heating from nearby electronics boxes
Our approach for a number of years has been to assume functional forms for each of these forcings with unknown constants, and then fit to historical temperature data using chi-squared fitting. This has worked very well, but there are some behaviors our models cannot capture, presumably not because of unknown inputs, but because our functional forms are too simple or incomplete. We have all of the features we think we need; the question is how the rate of change of temperature depends on them.
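To make the setup concrete, here is a minimal NumPy sketch of the kind of hybrid model described above: known functional forms with fitted constants, plus a learned residual term on top. Every name and constant here is purely illustrative (the tiny random-weight network stands in for a trained dynamics model; the rate constants and the cosine eclipse model are made up), not anything from this repository.

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny stand-in "neural network" with random weights; in practice this
# would be the trained dynamics model. All shapes here are illustrative.
W1 = rng.normal(size=(4, 8))
b1 = np.zeros(8)
W2 = rng.normal(size=(8, 1)) * 0.01
b2 = np.zeros(1)

def nn_residual(features):
    # features = [T, solar, box_power, t]; one hidden tanh layer
    h = np.tanh(features @ W1 + b1)
    return (h @ W2 + b2)[0]

def dT_dt(T, t, solar, box_power, k_sun=1e-3, k_rad=1e-9, k_box=5e-4):
    # known functional forms with fitted constants:
    # solar heating, T^4 radiative cooling, electronics-box heating
    known = k_sun * solar - k_rad * T**4 + k_box * box_power
    # learned correction for whatever the physics terms miss
    return known + nn_residual(np.array([T, solar, box_power, t]))

# Euler-integrate one "orbit" of temperature (made-up eclipse model)
T, dt = 290.0, 1.0
for step in range(100):
    t = step * dt
    solar = 1361.0 * max(np.cos(2 * np.pi * t / 100.0), 0.0)
    T += dt * dT_dt(T, t, solar, box_power=20.0)
```

The point of the split is that the fitted physics terms carry most of the signal, and the network only has to learn the unmodeled remainder.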
I realize this is a large jump in complexity from the models you laid out in this source code.
Thanks!
Hi, is this toy example what you want to obtain?
import numpy as np
import numpy.random as npr
import tensorflow as tf
from tqdm import tqdm
import matplotlib.pyplot as plt
import tensorflow.contrib.eager as tfe

keras = tf.keras
tf.enable_eager_execution()

import neural_ode as node


class NNModule(tf.keras.Model):
    def __init__(self, num_filters):
        super(NNModule, self).__init__(name="Module")
        self.dense_1 = keras.layers.Dense(num_filters, activation="tanh")
        self.t_dense_1 = keras.layers.Dense(num_filters, activation=lambda x: tf.sin(x))

    def call(self, inputs, **kwargs):
        # x = [batch_size, 3]
        # t = [1]
        t, x = inputs
        # broadcast t to [batch_size, 1] (batch_size hard-coded to 12 here)
        t = tf.tile(tf.reshape(t, [1, 1]), [12, 1])
        # t1 = [batch_size, num_filters]
        t1 = self.t_dense_1(t)
        # h = [batch_size, num_filters]
        h = self.dense_1(x) + t1
        return h


# create model and ode solver
model = NNModule(num_filters=3)
ode = node.NeuralODE(
    model, t=np.linspace(0, 1.0, 20),
    solver=node.rk4_step
)

x0 = tf.random_normal(shape=[12, 3])
with tf.GradientTape() as g:
    g.watch(x0)
    xN = ode.forward(x0)
    # some loss function here, e.g. L2
    loss = xN ** 2

# reverse mode with adjoint method
dLoss = g.gradient(loss, xN)
x0_rec, dLdx0, dLdW = ode.backward(xN, dLoss)
It solves the following ODE:
dx/dt = tanh(Wx x + bx) + sin(Wt t + bt)
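For reference, here is a plain NumPy sketch of what one rk4_step does on this right-hand side, with the same shapes as the example above (batch of 12, state dimension 3). The weights are random stand-ins, and the function names are mine, not the repo's API.

```python
import numpy as np

rng = np.random.default_rng(42)

# Illustrative parameters: 12 trajectories, 3-dimensional state
Wx = rng.normal(size=(3, 3)); bx = np.zeros(3)
Wt = rng.normal(size=(1, 3)); bt = np.zeros(3)

def f(x, t):
    # dx/dt = tanh(x Wx + bx) + sin(t Wt + bt), broadcast over the batch
    return np.tanh(x @ Wx + bx) + np.sin(np.array([[t]]) @ Wt + bt)

def rk4_step(x, t, dt):
    # one classical 4th-order Runge-Kutta step
    k1 = f(x, t)
    k2 = f(x + dt / 2 * k1, t + dt / 2)
    k3 = f(x + dt / 2 * k2, t + dt / 2)
    k4 = f(x + dt * k3, t + dt)
    return x + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

# integrate from t=0 to t=1 over the same grid, np.linspace(0, 1.0, 20)
x = rng.normal(size=(12, 3))
ts = np.linspace(0.0, 1.0, 20)
for t0, t1 in zip(ts[:-1], ts[1:]):
    x = rk4_step(x, t0, t1 - t0)
```

The time-dependent sin term is a (1, 3) row that broadcasts against the (12, 3) batch, mirroring the tf.tile of t in the Keras module.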
Note: please be aware that my implementation may not be the most efficient one; I think the PyTorch solver should be more reliable.