tmquan/RefineGAN

I have some questions about the images generated by the reconstruction.

Alxemade opened this issue · 9 comments

Hello, I ran your code from GitHub, but I have some questions about it. First, why do you set DIMZ = 2 in RefineGAN/Utils.py? Does that mean the network you create is complex-valued? Second, I followed your running steps and ran your code on my own dataset, but the reconstruction is confusing:

[screenshot]

The shape is (1, 2, 256, 3072). What do these numbers mean?

Also:

[screenshot]

The TIFF image looks abnormal and I can't understand it. I was hoping you could help me with this problem; I can't get it right.

Hi @Alxemade,

A 2-channel image is the data structure used to represent a complex-valued image. I provided handy functions to convert between the 2-channel and complex-valued representations, tf_complex and tf_channel in Utils.py:

def tf_complex(data, name='tf_complex'):
	# Merge a 2-channel (real, imag) tensor into a complex-valued tensor
	with tf.variable_scope(name+'_scope'):
		real  = data[:,0:1,...]
		imag  = data[:,1:2,...]
		data  = tf.complex(real, imag)
	data = tf.identity(data, name=name)
	return data

def tf_channel(data, name='tf_channel'):
	# Split a complex-valued tensor back into a 2-channel (real, imag) tensor
	with tf.variable_scope(name+'_scope'):
		real  = tf.real(data)
		imag  = tf.imag(data)
		real  = real[:,0:1,...]
		imag  = imag[:,0:1,...]
		data  = tf.concat([real, imag], axis=1)
	data = tf.identity(data, name=name)
	return data
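
For the numpy side (used as np_complex in the inference snippet later in this thread), here is a minimal sketch assuming an unbatched, channel-first (2, H, W) array; the repo's own helper may differ in detail:

import numpy as np

def np_complex(data):
	# numpy analogue of tf_complex: merge 2-channel (real, imag) into complex
	real = data[0:1, ...]
	imag = data[1:2, ...]
	return real + 1j * imag

def np_channel(data):
	# numpy analogue of tf_channel: split complex back into 2 channels
	return np.concatenate([np.real(data), np.imag(data)], axis=0)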

The log line stating that the shape is (1, 2, 256, 3072) refers to the size of the tensor I visualize on TensorBoard: batch 1, 2 channels, height 256, and a width of 3072 because several 256-pixel-wide images are concatenated side by side. Feel free to comment it out.
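
If it helps, here is a quick sketch of how that tensor decomposes, assuming the width is a row of 256-pixel-wide tiles as the slicing in the inference snippet below suggests:

import numpy as np

# The visualization tensor concatenates 256-wide tiles along the last axis:
# 3072 / 256 = 12 tiles, each of shape (1, 2, 256, 256)
viz = np.zeros((1, 2, 256, 3072), dtype=np.float32)  # stand-in for the logged tensor
tiles = np.split(viz, 3072 // 256, axis=-1)
print len(tiles), tiles[0].shape                     # 12 (1, 2, 256, 256)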

Did you run it with the provided data? If so, you can simply replace the directory with your own data. Please make sure the data range is between 0 and 255.
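
If your data is not already in that range, something like this would rescale it (normalize_to_255 is a hypothetical helper, not part of the repo):

import numpy as np

def normalize_to_255(image):
	# Hypothetical helper: linearly rescale an arbitrary-range image to 0..255
	image = image.astype(np.float32)
	lo, hi = image.min(), image.max()
	if hi > lo:
		image = (image - lo) / (hi - lo) * 255.0
	return image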

Hi @tmquan,
Thank you for your answer! You said the brain data is used for the magnitude-value experiment and the knee data for the complex-value experiment. I ran my own MRI dataset and already made sure the data range is between 0 and 255. The shape is (256, 256), i.e. a real-valued image, not a complex one. Do I still need to use a 2-channel image and set DIMZ = 2? Also, my training set has 15000 images, and I am worried the training time will be too long. Which code do I need to change to fit my dataset?

The code in Utils.py reads the image and converts it to a 2-channel image automatically (real = magnitude, imaginary = 0), so you don't need to do that yourself. Just leave everything at the defaults and only change the image directory accordingly:

# Promote 2D real-valued arrays to 2-channel images with a zero imaginary channel
if image.ndim == 2:
	image = np.stack((image, np.zeros_like(image)), axis=0)
if mask.ndim == 2:
	mask = np.stack((mask, np.zeros_like(mask)), axis=0)
if label.ndim == 2:
	label = np.stack((label, np.zeros_like(label)), axis=0)
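
For example, a (256, 256) real-valued slice becomes a (2, 256, 256) array with a zero imaginary channel:

>>> import numpy as np
>>> image = np.random.rand(256, 256).astype(np.float32)  # your real-valued slice
>>> image = np.stack((image, np.zeros_like(image)), axis=0)
>>> image.shape
(2, 256, 256)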

If your dataset size is 15000, please set EPOCH_SIZE=15000.
I am afraid you will have to train it yourself to fit your own dataset. I haven't released the multi-GPU training version yet; I will do that, but not at the moment.

Please let me know how it goes.

Hi, my training data looks like this:

[screenshot]

Is that OK?
My brain training data is PNG images, but the generated test images are TIFF, and I cannot display them correctly. Can you help me generate PNG images instead of TIFF?

import os
import numpy as np
import skimage.io

for idx, o in enumerate(pred.get_result()):
	print pred
	print len(o)
	print o[0].shape

	outA = o[0][:, :, :, :]

	colors0 = np.array(outA)  # np.float32; not cast to np.uint8
	head, tail = os.path.split(filenames[idx])
	tail = tail.replace('png', 'tif')
	print tail
	print colors0.shape
	print colors0.dtype

	# The output is a row of 256-pixel-wide tiles; slice out the ones we need
	skimage.io.imsave(resultDir + "/full_" + tail, np.squeeze(colors0[..., 256*1:256*2]))   # fully sampled
	skimage.io.imsave(resultDir + "/zfill_" + tail, np.squeeze(colors0[..., 256*2:256*3]))  # zero-filled
	skimage.io.imsave(resultDir + tail, np.squeeze(colors0[..., 256*4:256*5]))              # reconstruction

	skimage.io.imsave(resultDir + "mag/mag_" + tail, np.abs(np_complex(np.squeeze(colors0[..., 256*4:256*5]))))
	skimage.io.imsave(resultDir + "ang/ang_" + tail, np.angle(np_complex(np.squeeze(colors0[..., 256*4:256*5]))))

	skimage.io.imsave(resultDir + "/M/mag/mag_" + tail, np.abs(np_complex(np.squeeze(colors0[..., 256*2:256*3]))))
	skimage.io.imsave(resultDir + "/M/ang/ang_" + tail, np.angle(np_complex(np.squeeze(colors0[..., 256*2:256*3]))))

What should I change, and where?

In the above code, can you add this:

print tail
print colors0.shape
print colors0.dtype
print colors0.max()
print colors0.min()

And let me know the results?

In addition, can you show the TensorBoard results from the Images tab?

The scalars 👍

[screenshot]

The reconstructions:

[screenshots]

As you can see, the reconstruction looks fine on TensorBoard. Its type is np.float32.
To view the image in an ordinary viewer, you have to cast colors0 to np.uint8, or use Fiji/ImageJ to open the current np.float32 TIFF files.
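
For instance, a minimal sketch reusing colors0, resultDir, and tail from your snippet (and assuming the values already lie in 0..255):

import numpy as np
import skimage.io

# Quantize the float32 reconstruction to np.uint8 so ordinary viewers can open it
# (take the real channel only, since PNG cannot store a 2-channel image)
viewable = np.clip(colors0, 0, 255).astype(np.uint8)
skimage.io.imsave(resultDir + tail.replace('tif', 'png'), viewable[0, 0, :, 256*4:256*5])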

Thank you for answering my questions all along. I downloaded ImageJ and can now display the image correctly!

[screenshot]

Following your advice, I used colors0 = np.array(outA).astype(np.uint8), and the reconstructed image type is uint8. How can I save the image in PNG format? Is it wrong to remove the line tail = tail.replace('png', 'tif')? What is the role of this code?

Since I have to do quantitative evaluation, I save the images as np.float32 (rather than np.uint8, which is what PNG supports), and only the TIFF format in skimage.io preserves np.float32. That line of code just changes the file extension from png to tif when saving, nothing fancy.

If you quantize to np.uint8, you add a source of quantization error, which you do not want.
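
For example, a quantitative check can be run straight from the saved float32 TIFFs; a sketch assuming a 0..255 data range and reusing resultDir and tail from the snippet above:

import numpy as np
import skimage.io

# PSNR between the fully sampled reference and the reconstruction
full  = skimage.io.imread(resultDir + "/full_" + tail).astype(np.float64)
recon = skimage.io.imread(resultDir + tail).astype(np.float64)
mse   = np.mean((full - recon) ** 2)
print 10.0 * np.log10(255.0 ** 2 / mse)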

I will close this issue for now. Thank you for your feedback.