Question on FDK reconstruction
Hello, it's great work on real CBCT data! But I have a question about the FDK reconstruction code. In your code you use
proj_geom = astra.create_proj_geom('cone_vec', projs_rows, projs_cols, vecs)
to create the projection geometry in the ASTRA toolbox, but I use
proj_geom = astra.create_proj_geom('cone', detector_spacing_x, detector_spacing_y, projs_rows, projs_cols, angles, source_origin, origin_det)
and get quite different results. With v2 (the middle source position) I get a decent but somewhat worse result; for the other two positions it fails completely. Could you help me? My code is attached below. Many thanks!
import numpy as np
import astra
import os
import imageio.v2 as imageio
import time
import matplotlib.pyplot as plt
import SimpleITK as sitk
#### user defined settings #####################################################
# select the ID of the sample you want to reconstruct
walnut_id = 1
# select also the orbit you want to reconstruct the data from:
# 1 higher source position, 2 middle source position, 3 lower source position
orbit_id = 2
# define a sub-sampling factor in angular direction
# (all reference reconstructions are computed with full angular resolution)
angluar_sub_sampling = 1
# select the number of voxels per mm in one direction (higher = higher resolution)
# (all reference reconstructions are computed with 10)
# we chose a volume of 501^3 voxels of size 100µm^3
voxel_per_mm = 10
# we enter here some intrinsic details of the dataset needed for our reconstruction scripts
# set the variable "data_path" to the path where the dataset is stored on your own workstation
data_path = './Walnuts/'
# set the variable "recon_path" to the path where you would like to store the
# reconstructions you compute
recon_path = './WalnutsOwnReconstructions/'
#### load data #################################################################
t = time.time()
print('load data', flush=True)
# we add the info about walnut and orbit ID
data_path_full = os.path.join(data_path, 'Walnut{}'.format(walnut_id), 'Projections', 'tubeV{}'.format(orbit_id))
projs_name = 'scan_{:06}.tif'
dark_name = 'di000000.tif'
flat_name = ['io000000.tif', 'io000001.tif']
vecs_name = 'scan_geom_corrected.geom'
projs_rows = 972
projs_cols = 768
# load the numpy array describing the scan geometry from file
vecs = np.loadtxt(os.path.join(data_path_full, vecs_name))
# get the positions we need; there are in fact 1201, but the last and first one come from the same angle
vecs = vecs[range(0,1200, angluar_sub_sampling)]
# projection file indices; we need to read the projections in reverse order due to the portrait-mode acquisition
projs_idx = range(1200,0, -angluar_sub_sampling)
n_pro = vecs.shape[0]
# create the numpy array which will receive projection data from tiff files
projs = np.zeros((n_pro, projs_rows, projs_cols), dtype=np.float32)
# transformation to apply to each image: map the image from the way the
# scanner reads it out into the layout described in the projection geometry
trafo = lambda image: np.transpose(np.flipud(image))
# load flat-field and dark-fields
# there are two flat-field images (taken before and after acquisition), we simply average them
dark = trafo(imageio.imread(os.path.join(data_path_full, dark_name)))
flat = np.zeros((2, projs_rows, projs_cols), dtype=np.float32)
for i, fn in enumerate(flat_name):
    flat[i] = trafo(imageio.imread(os.path.join(data_path_full, fn)))
flat = np.mean(flat,axis=0)
# load projection data
for i in range(n_pro):
    projs[i] = trafo(imageio.imread(os.path.join(data_path_full, projs_name.format(projs_idx[i]))))
print(np.round(time.time() - t, 3), 'sec elapsed')
### pre-process data ###########################################################
t = time.time()
print('pre-process data', flush=True)
# subtract the dark field, divide by the flat field, and take the negative log to linearize the data according to the Beer-Lambert law
projs -= dark
projs /= (flat - dark)
np.log(projs, out=projs)
np.negative(projs, out=projs)
projs = np.clip(projs, 0, projs.max())
# permute data to ASTRA convention
projs = np.transpose(projs, (1,0,2))
projs = np.ascontiguousarray(projs)
print(np.round(time.time() - t, 3), 'sec elapsed')
### compute FDK reconstruction #################################################
t = time.time()
print('compute reconstruction', flush=True)
# size of the reconstruction volume in voxels
vol_sz = 3*(50 * voxel_per_mm + 1,) # volume size 501^3
# size of a cubic voxel in mm
vox_sz = 1/voxel_per_mm # voxel size 0.1 mm / voxel
# we need to specify the details of the reconstruction space to ASTRA
# this is done by a "volume geometry" type of structure, in the form of a Python dictionary
# by default, ASTRA assumes a voxel size of 1, we need to scale the reconstruction space here by the actual voxel size
vol_geom = astra.create_vol_geom(vol_sz)
# we need to specify the details of the projection space to ASTRA
# this is done by a "projection geometry" type of structure, in the form of a Python dictionary
# source-to-axis and source-to-detector distances (mm), expressed in voxel units
sad = 66 / vox_sz
sid = 199 / vox_sz
source_origin = sad
origin_det = sid - sad
# detector_spacing_x = vol_sz[0]/projs_cols * (sid/sad) # 1.5541
# detector_spacing_y = vol_sz[2]/projs_rows * (sid/sad) # 1.9669
detector_spacing_x = 0.1496 / vox_sz # 1.496
detector_spacing_y = 0.1496 / vox_sz
# projection angles in radians: 1200 views, 0.3 degrees apart
angles = np.deg2rad(0.3 * np.arange(1200))
proj_geom = astra.create_proj_geom('cone', detector_spacing_x, detector_spacing_y, projs_rows, projs_cols, angles, source_origin, origin_det)
# register both volume and projection geometries and arrays to ASTRA
vol_id = astra.data3d.create('-vol', vol_geom)
proj_id = astra.data3d.create('-proj3d', proj_geom, projs)
# finally, create an ASTRA configuration.
# this configuration dictionary sets up an algorithm, a projection and a volume
# geometry and returns an ASTRA algorithm, which can be run on its own
cfg_fdk = astra.astra_dict('FDK_CUDA')
cfg_fdk['ProjectionDataId'] = proj_id
cfg_fdk['ReconstructionDataId'] = vol_id
cfg_fdk['option'] = {}
cfg_fdk['option']['ShortScan'] = False
alg_id = astra.algorithm.create(cfg_fdk)
# run FDK algorithm
astra.algorithm.run(alg_id, 1)
# get data
vol_rec = astra.data3d.get(vol_id)
# release memory allocated by ASTRA structures
astra.algorithm.delete(alg_id)
astra.data3d.delete(proj_id)
astra.data3d.delete(vol_id)
print(np.round(time.time() - t, 3), 'sec elapsed')
### save reconstruction ########################################################
t = time.time()
print('save results', flush=True)
# construct full path for storing the results
recon_path_full = os.path.join(recon_path, 'Walnut{}_cone'.format(walnut_id))
# create the directory in case it doesn't exist yet
if not os.path.exists(recon_path_full):
    os.makedirs(recon_path_full)
# save every slice in the volume as a separate tiff file:
# for i in range(vol_sz[0]):
#     slice_path = os.path.join(recon_path_full, 'fdk_pos{}_ass{}_vmm{}_{:06}.tiff'.format(
#         orbit_id, angluar_sub_sampling, voxel_per_mm, i))
#     imageio.imwrite(slice_path, vol_rec[i, ...])
sitk.WriteImage(sitk.GetImageFromArray(vol_rec), os.path.join(recon_path_full, 'recon_pos{}.nii.gz'.format(orbit_id)))
print(np.round(time.time() - t, 3), 'sec elapsed')
BTW, do you know of any other real CBCT scans? I really need them!
The 'cone' geometry assumes an idealized CBCT setup: source and detector rotate around the object on two concentric circles lying in the same plane, the source-detector axis passes through the center of rotation, the detector normal points at the center, and the horizontal and vertical detector axes are aligned with the rotation plane and rotation axis, respectively. Real experimental CBCT setups do not satisfy these assumptions, and using the 'cone' geometry will not work. In particular, data sets V1 and V3 have a very different geometry. That is why we provide the calibrated geometric description of each projection as a 'cone_vec' geometry (cf. the sections "Projection geometry and acquisition parameters" and "Projection Data" in the paper).
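To make the difference concrete, here is a minimal sketch (my own illustration, not taken from the reference scripts; the function name and parameters are hypothetical) of the 'cone_vec' rows that correspond to the idealized circular orbit the 'cone' type assumes. Each of the 12 numbers per row encodes the source position, the detector center, and the detector u/v step vectors; in the Walnut data, the calibrated rows loaded from scan_geom_corrected.geom replace these idealized values with per-projection measured ones.

```python
import numpy as np

def ideal_cone_vecs(angles, source_origin, origin_det, du, dv):
    """Build 'cone_vec' rows for an IDEAL circular orbit, i.e. the geometry
    that ASTRA's 'cone' type assumes (hypothetical helper for illustration).

    Each row: source position (3), detector center (3),
    step between detector pixels in u (3) and in v (3)."""
    vecs = np.zeros((len(angles), 12))
    for i, a in enumerate(angles):
        vecs[i, 0:3] = (np.sin(a) * source_origin, -np.cos(a) * source_origin, 0.0)
        vecs[i, 3:6] = (-np.sin(a) * origin_det, np.cos(a) * origin_det, 0.0)
        vecs[i, 6:9] = (np.cos(a) * du, np.sin(a) * du, 0.0)
        vecs[i, 9:12] = (0.0, 0.0, dv)
    return vecs
```

Such an array (or, for real scans, the rows read with np.loadtxt from the corrected geometry file) is then passed straight to astra.create_proj_geom('cone_vec', projs_rows, projs_cols, vecs); with calibrated data there is no need to squeeze the scan into the handful of scalar parameters the 'cone' type accepts.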
More CBCT data from our lab can be found on Zenodo.
Many thanks for your reply. I have solved the problem; for real-captured CBCT data, cone_vec is indeed the more useful choice.
May I also ask about other large-scale datasets (dozens of samples) for deep learning model training? The remaining CBCT datasets on Zenodo are too small (only one or two samples each). Thanks in advance.
I don't know of any such data sets from other groups. From our group, there is also this one (131 scans).
Many thanks for your help and advice!