mil-tokyo/webdnn

Cannot convert single Keras Convolution2d layer

kylemcdonald opened this issue · 1 comment

Here is the Dockerfile for the environment I'm using:

FROM continuumio/miniconda3

RUN apt-get -y update

RUN conda install -y jupyter

RUN pip install https://github.com/sigilioso/tensorflow-build/raw/master/tensorflow-1.4.0-cp36-cp36m-linux_x86_64.whl

RUN pip install keras==2.1.3 && \
    pip install h5py

RUN cd ~ && \
    git clone https://github.com/mil-tokyo/webdnn && \
    cd webdnn && \
    python setup.py install

Here is the code I'm using for testing:

import keras
from keras.models import Sequential
from keras.layers import Conv2D
from webdnn.frontend.keras import KerasConverter
from webdnn.backend import generate_descriptor

model = Sequential()
model.add(Conv2D(16, 4, strides=1, padding='same', input_shape=(256, 256, 3)))

graph = KerasConverter(batch_size=1).convert(model)
exec_info = generate_descriptor('webgl', graph)
exec_info.save('./output')

The code stalls on generate_descriptor and never completes.

I suspect the problem is with padding='same'. I found a reference to this problem in issue #796.
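For reference, here is a minimal sketch (plain Python, no Keras required, with a hypothetical helper conv_output_size) of how padding='same' and padding='valid' affect the output spatial size under the standard TensorFlow/Keras conventions, applied to the Conv2D layer above:

```python
import math

def conv_output_size(n, kernel, stride, padding):
    """Spatial output size of a conv layer under the
    TF/Keras 'same'/'valid' padding conventions."""
    if padding == 'same':
        # 'same' pads so that output = ceil(input / stride);
        # with stride 1 the spatial size is preserved.
        return math.ceil(n / stride)
    elif padding == 'valid':
        # 'valid' applies no padding: the kernel must fit entirely.
        return (n - kernel) // stride + 1
    raise ValueError("unknown padding: %r" % padding)

# The Conv2D from the report: 256x256 input, kernel 4, stride 1
print(conv_output_size(256, 4, 1, 'same'))   # -> 256
print(conv_output_size(256, 4, 1, 'valid'))  # -> 253
```

So with stride 1 and padding='same', the layer itself is well-defined and shape-preserving; the hang would have to come from the converter rather than from an invalid layer configuration.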

Thanks for reporting.
The problem was not related to padding but to an optimization pass in the WebGL backend.
It is now fixed in the master branch.