ValueError: too many values to unpack (expected 2) in hdf5_images.py
1. Download LLD-logo.hdf5 (13 GB).
2. Change the path, then run `python hdf5_images.py`.
3. Error:
```
Traceback (most recent call last):
  File "hdf5_images.py", line 45, in <module>
    train_gen, valid_gen = load(64)
ValueError: too many values to unpack (expected 2)
```
4. I have tried a lot of things, but it didn't work. Why? I'm pulling my hair out.
Hi, my logo dataset has a slightly different format, as it contains some meta information as well as the images and labels. To train with it you need to specify `DATA_LOADER="lld-logo"`.
Sorry for the confusion, unfortunately I never really had time to complete the documentation of this code... You can see the available options for the data loader at logo_wgan.py:198.
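To make this concrete: a loader written for the plain icon format hands back exactly two generators, while the logo file also carries meta information, so a two-way unpack like `train_gen, valid_gen = load(64)` can come up short. Below is a rough, illustrative sketch only; the HDF5 key names and the `load()` signature are assumptions, not the actual logo-gen code.

```python
# Illustrative sketch -- the key names and the load() signature are assumptions.
import h5py

# Peek at what LLD-logo.hdf5 actually contains: images and labels plus extra
# meta information, unlike the plain icon files.
with h5py.File('LLD-logo.hdf5', 'r') as f:
    for key in f.keys():
        item = f[key]
        print(key, item.shape if hasattr(item, 'shape') else type(item))

# An icon-style loader returns exactly two values:
#     train_gen, valid_gen = load(64)          # fine for LLD-icon
# If the logo loader also hands back the meta data, that same line raises
#     ValueError: too many values to unpack (expected 2)
# Selecting the matching loader (DATA_LOADER: lld-logo, see logo_wgan.py:198
# for the available options) avoids the mismatch.
```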
Thanks a lot. I am working on a logo generator, and I think your GitHub repo and paper are very useful to me. I'll continue to try. There are two ways: one is to use your pretrained model, the other is to retrain. But I have run into many errors. I will keep trying and do better.
I started again. I am using Python 2.7, TensorFlow 1.3.0, and the same packages. I want to restore your model to generate some logos. I run this:
```python
import tensorflow as tf
import numpy as np
import vector
from logo_wgan import WGAN

session = tf.Session()
# Load the saved configuration for the pretrained LLD-logo-rc_64 run
wgan = WGAN(session, load_config='LLD-logo-rc_64')
print('go on vector')
vec = vector.Vector(wgan)
# Sample and show some random logos
vec.show_random()
```
```
2019-12-18 16:02:16.667807: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.1 instructions, but these are available on your machine and could speed up CPU computations.
2019-12-18 16:02:16.667828: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.2 instructions, but these are available on your machine and could speed up CPU computations.
2019-12-18 16:02:16.667833: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use AVX instructions, but these are available on your machine and could speed up CPU computations.
2019-12-18 16:02:16.667837: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use AVX2 instructions, but these are available on your machine and could speed up CPU computations.
2019-12-18 16:02:16.667856: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use FMA instructions, but these are available on your machine and could speed up CPU computations.
Settings dict:
ACGAN: 0
ACGAN_SCALE: 1.0
ACGAN_SCALE_G: 0.1
ARCHITECTURE: resnet-64
BATCH_SIZE: 64
CONDITIONAL: 1
DATA: LLD-logo.hdf5
DATA_LOADER: lld-logo
DECAY: 1
DIM_D: 64
DIM_G: 64
GEN_BS_MULTIPLE: 2
INCEPTION_FREQUENCY: 0
ITERS: 100000
KEEP_CHECKPOINTS: 5
LABELS: labels/resnet/rc_64
LAMBDA: 10
LAYER_COND: 1
LR: 0.0002
MODE: wgan-gp
NORMALIZATION_D: 0
NORMALIZATION_G: 1
N_CRITIC: 5
N_GENERATOR: 3
N_GPUS: 1
N_LABELS: 64
OUTPUT_DIM: 12288
OUTPUT_RES: 64
RUN_NAME: LLD-logo-rc_64
SUMMARY_FREQUENCY: 1
bn_init: False
train: False
go on vector
Traceback (most recent call last):
  File "generate.py", line 10, in <module>
    vec.show_random()
  File "/home/xhz/work/infor/logo-gen-master2/wgan/vector.py", line 52, in show_random
    self.show_z(z, y, shape=shape, border=border, enum=enum, res=res, save=save)
  File "/home/xhz/work/infor/logo-gen-master2/wgan/vector.py", line 152, in show_z
    self.show(self.sample_z(z, y), shape=shape, enum=enum, border=border, res=res, save=save)
  File "/home/xhz/work/infor/logo-gen-master2/wgan/vector.py", line 110, in sample_z
    samples = self.wgan.sample(z_i, y_i)
  File "/home/xhz/work/infor/logo-gen-master2/wgan/logo_wgan.py", line 262, in sample
    self._init_sampler()
  File "/home/xhz/work/infor/logo-gen-master2/wgan/logo_wgan.py", line 247, in _init_sampler
    self.sampler = self.Generator(self.cfg, n_samples=0, labels=self.y, noise=self.z, is_training=self.t_train)
  File "/home/xhz/work/infor/logo-gen-master2/wgan/tflib/architectures.py", line 127, in Generator_Resnet_64
    output = ResidualBlock(cfg, 'Generator.Res1', 8*dim, 8*dim, 3, output, resample='up', labels=labels, is_training=is_training)
  File "/home/xhz/work/infor/logo-gen-master2/wgan/tflib/ops/gan_ops.py", line 132, in ResidualBlock
    shortcut = conv_shortcut(name+'.Shortcut', input_dim=input_dim, output_dim=output_dim, filter_size=1, he_init=False, biases=True, inputs=inputs)
  File "/home/xhz/work/infor/logo-gen-master2/wgan/tflib/ops/gan_ops.py", line 98, in UpsampleConv
    output = lib.ops.conv2d.Conv2D(name, input_dim, output_dim, filter_size, output, he_init=he_init, biases=biases)
  File "/home/xhz/work/infor/logo-gen-master2/wgan/tflib/ops/conv2d.py", line 111, in Conv2D
    data_format='NHWC'
  File "/home/xhz/.virtualenvs/python2/local/lib/python2.7/site-packages/tensorflow/python/ops/gen_nn_ops.py", line 397, in conv2d
    data_format=data_format, name=name)
  File "/home/xhz/.virtualenvs/python2/local/lib/python2.7/site-packages/tensorflow/python/framework/op_def_library.py", line 767, in apply_op
    op_def=op_def)
  File "/home/xhz/.virtualenvs/python2/local/lib/python2.7/site-packages/tensorflow/python/framework/ops.py", line 2632, in create_op
    set_shapes_for_outputs(ret)
  File "/home/xhz/.virtualenvs/python2/local/lib/python2.7/site-packages/tensorflow/python/framework/ops.py", line 1911, in set_shapes_for_outputs
    shapes = shape_func(op)
  File "/home/xhz/.virtualenvs/python2/local/lib/python2.7/site-packages/tensorflow/python/framework/ops.py", line 1861, in call_with_requiring
    return call_cpp_shape_fn(op, require_shape_fn=True)
  File "/home/xhz/.virtualenvs/python2/local/lib/python2.7/site-packages/tensorflow/python/framework/common_shapes.py", line 595, in call_cpp_shape_fn
    require_shape_fn)
  File "/home/xhz/.virtualenvs/python2/local/lib/python2.7/site-packages/tensorflow/python/framework/common_shapes.py", line 659, in _call_cpp_shape_fn_impl
    raise ValueError(err.message)
ValueError: Dimensions must be equal, but are 8 and 512 for 'Generator.Res1.Shortcut/Conv2D' (op: 'Conv2D') with input shapes: [?,512,8,8], [1,1,512,512].
```
I am using the CPU, not the GPU, because I don't have enough memory, so I changed just one piece of code: I changed NCHW to NHWC. In fact, when I tried to train it myself this morning, I ran into a similar problem.
I wonder if I can generate logos this way. Can you give me some advice?
It could be possible to convert the network to NHWC and use the CPU, but I've never tried. The problem is that once you change it, the checkpoint data (pretrained weights) doesn't fit the model anymore, as the data you're trying to load is still NCHW. So I guess it's not quite as straightforward... Also, you need to change the dimensions in many places in the code and it's easy to forget one.
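For what it's worth, here is a minimal standalone TF 1.x sketch (not the project's own ops) of why flipping only the `data_format` string produces the shape error above: the tensors are still laid out as NCHW, so under NHWC the last axis (8) is read as the channel count and no longer matches the filter's 512 input channels.

```python
# Standalone TF 1.x sketch illustrating the Conv2D shape check.
import tensorflow as tf

# The generator's tensors are laid out NCHW: [batch, 512 channels, 8, 8]
x_nchw = tf.placeholder(tf.float32, [None, 512, 8, 8])
# 1x1 convolution filter mapping 512 -> 512 channels: [1, 1, 512, 512]
w = tf.placeholder(tf.float32, [1, 1, 512, 512])

# Consistent: with data_format='NCHW' the channel axis (512) matches the
# filter's 512 input channels. (At run time, NCHW Conv2D is typically only
# supported on GPU, which is why one would want NHWC on CPU in the first place.)
ok = tf.nn.conv2d(x_nchw, w, strides=[1, 1, 1, 1], padding='SAME',
                  data_format='NCHW')

# Inconsistent: declaring NHWC while the tensor is still NCHW makes TF read
# the last axis (8) as the channel count, giving exactly
#   ValueError: Dimensions must be equal, but are 8 and 512 ...
# bad = tf.nn.conv2d(x_nchw, w, strides=[1, 1, 1, 1], padding='SAME',
#                    data_format='NHWC')

# Converting for real means transposing every such tensor to channels-last
# (and, per the comment above, the pretrained checkpoint may then no longer
# match the rebuilt graph):
x_nhwc = tf.transpose(x_nchw, [0, 2, 3, 1])   # [None, 8, 8, 512]
ok_nhwc = tf.nn.conv2d(x_nhwc, w, strides=[1, 1, 1, 1], padding='SAME',
                       data_format='NHWC')
```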
Thanks very much. You mean that if I use the CPU, I need to change the dimensions in many places, but if I use the GPU, I don't need to change them? If I retrain, should I change the dims? I think that if your code is set up for 32x32 input and I use LLD-logo.hdf5, I need to change the dims, but if I use LLD-icon.hdf5, I can use it as-is. I will verify this.