sevakon/shallow-rnns

Is it possible to get an example where you train shallow-rnn on a time series dataset?

kndbvortex opened this issue · 0 comments

Hello,

I'm interested in exploring the application of Recurrent Neural Networks (RNNs) to time series classification. I came across your repository, but I ran into difficulties applying it to time series datasets such as FordA. I attempted a few approaches, but I'm fairly sure my attempt below is wrong. Could you please help me?
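
For reference, get_data_test() is just a small helper I wrote to load FordA from the UCR archive files (tab-separated, label in the first column, series in the remaining columns); the file paths below are placeholders for wherever the files live locally:

import numpy as np

def get_data_test():
    # UCR .tsv layout: first column is the class label (-1/1 for FordA), the rest is the series
    train = np.loadtxt("FordA_TRAIN.tsv", delimiter="\t")
    test = np.loadtxt("FordA_TEST.tsv", delimiter="\t")
    x_train, y_train = train[:, 1:], train[:, 0]
    x_test, y_test = test[:, 1:], test[:, 0]
    return x_train, x_test, y_train, y_test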

import torch
import torch.nn as nn

from model import ShallowRNN  # adjust the import to wherever ShallowRNN lives in this repo

x_train, x_test, y_train, y_test = get_data_test()

# Reshape each series to (num_samples, 1, series_length); this is my guess at the expected layout
x_train = x_train.reshape((x_train.shape[0], 1, -1))
x_test = x_test.reshape((x_test.shape[0], 1, -1))

# Hidden sizes [512, 512], no dropout (my best guess at the constructor arguments)
rnn = ShallowRNN(x_train.shape[-1], 1, "LSTM", [512, 512], [0.0, 0.0])
k = 32  # split parameter passed to ShallowRNN.split_by_bricks below

train_data = ShallowRNN.split_by_bricks(torch.FloatTensor(x_train), k)
test_data = ShallowRNN.split_by_bricks(torch.FloatTensor(x_test), k)
# CrossEntropyLoss expects class indices as a LongTensor; FordA labels are -1/1, so map them to 0/1
y_train = torch.LongTensor((y_train == 1).astype(int))
y_test = torch.LongTensor((y_test == 1).astype(int))

learning_rate = 0.001
criterion = nn.CrossEntropyLoss()

optimizer = torch.optim.SGD(rnn.parameters(), lr=learning_rate)

loss_list = []
iteration_list = []
accuracy_list = []
count = 0

for epoch in range(10):
    # NOTE: pairing bricks with per-sample labels like this is the part I am least sure about
    for i, (batch_ts, labels) in enumerate(zip(train_data, y_train)):

        # Clear gradients
        optimizer.zero_grad()

        # Forward propagation
        output = rnn(batch_ts, 1)

        print(output.shape, labels.shape, output, labels)  # debugging the shapes
        loss = criterion(output, labels)

        # Calculating gradients
        loss.backward()

        # Update parameters
        optimizer.step()
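
After the loop, I would check the test accuracy along the same lines; a rough sketch, assuming the model returns one row of class scores per label (which is exactly the part I am unsure about):

rnn.eval()
with torch.no_grad():
    correct, total = 0, 0
    for batch_ts, labels in zip(test_data, y_test):
        output = rnn(batch_ts, 1)
        predicted = output.argmax(dim=-1)
        correct += (predicted == labels).sum().item()
        total += 1
    print("Test accuracy:", correct / total)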