twitter-archive/torch-autograd

Having an issue with autograd RecurrentLSTMNetwork

rnunziata opened this issue · 1 comment

I am new to Torch and Lua. I am using lutorpy, which allows me to call Torch from Python. I am trying to use an LSTM to process 80*80 images but am having a problem setting it up. If there is a blog or group that covers this, could you please point me to it? I do not know whether this is an issue with autograd or just with the way I am using it.

import lutorpy as lua
import numpy as np

autograd = require("autograd")
autograd.optimize(True)

# This code uses lutorpy, which allows Python to interface with Torch.
# I want to pass a sequence of N 80x80 images for training.
# The following code does not work... I am new to Torch and machine learning.
# Any help pointing me to a solution or example code would be appreciated.

lstm, params = autograd.model.RecurrentLSTMNetwork(
    inputFeatures=80*80, hiddenFeatures=4, outputType='all')

# flatten images
states      = np.zeros((1,80*80))
next_states = np.zeros((1,80*80))

xts  = torch.fromNumpyArray(states)
xtns = torch.fromNumpyArray(next_states)

# create a Lua table
t = lua.table()
W1 = np.random.randn(4,80*80) # I changed this because I got an error with your code
b1 = np.random.randn(1,80*80)


t['W'] = torch.fromNumpyArray(W1) 
t['b'] = torch.fromNumpyArray(b1)
params[1] = t

# loop over the N images in the sequence ... or is there a way to pass a batch of 80x80 images to the LSTM? (see the sketch after this code)
#
output_features, target_params = lstm(params[1], xtns, xts)


print(output_features)
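To clarify what I am asking with that question: in the working settings below, the input is a single 4x4 array for inputFeatures=4, so it looks like rows are time steps. If that reading is right (just my guess), I would stack the N flattened frames into one (N, 6400) array and make a single call rather than looping in Python, something like:

# Guess at feeding a whole sequence in one call; assumes rows = time steps,
# like the 4x4 working example, and that the previous state can be omitted.
N = 10                             # hypothetical sequence length
frames = np.zeros((N, 80*80))      # each row is one flattened 80x80 frame
x = torch.fromNumpyArray(frames)
output_features, new_state = lstm(params[1], x)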



##############################################################################
# These settings, used in place of the above, work and produce output
##############################################################################

# lstm, params = autograd.model.RecurrentLSTMNetwork(
#      inputFeatures=4, hiddenFeatures=4, outputType='all')

# states      = np.zeros((4, 4))
# next_states = np.zeros((4, 4))
# xts  = torch.fromNumpyArray(states)
# xtns = torch.fromNumpyArray(next_states)

# W1 = np.random.randn(8,16) # I changed this because I got an error with your code
# b1 = np.random.randn(1,16)



Hey, I hadn't heard of this lutorpy project, sounds cool.

Two things:

  1. Let's disentangle any bugs that occur because of this Lua/Python bridge.
    Does the code giving you problems work when run from just Lua?
  2. I'm not seeing any error messages pasted here. What actually goes wrong
    with the uncommented code?
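
For point 2, something like this on the Python side should surface the underlying Lua error so you can paste it here (a rough sketch; I haven't used lutorpy, so the exact exception type it raises is a guess, hence the broad except):

# Rough sketch: print whatever error the Lua side raises through lutorpy.
try:
    output_features, target_params = lstm(params[1], xtns, xts)
except Exception as e:   # exact exception type from lutorpy unknown to me
    print("LSTM call failed with:", e)
    raise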
