Spherical average doesn't interact well with baselines that have different times
After I've done incoherent time averaging on a UVPspec object, my blts axis is the size of the number of baselines (each of which has one "time"). However, if not all of those times are identical, spherical averaging doesn't combine the baselines correctly.
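For context, this is roughly the kind of call that leaves the object in that state (a sketch, not my exact pipeline; `uvp` here is a placeholder for the original UVPspec with several times per baseline-pair):

```python
# Sketch: incoherently average over times so each baseline-pair keeps a
# single "time" entry on the blts axis. `uvp` is a placeholder name.
uvp_tavg_1_to_5 = uvp.average_spectra(time_avg=True, error_weights='P_N',
                                      error_field=['P_SN'], inplace=False)
```

As an example of the problem, if I do this: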
```python
import copy

import numpy as np
from tqdm import tqdm
import hera_pspec as hp

uvp_tavg = copy.deepcopy(uvp_tavg_1_to_5)
sph_avgs = []
for spw in tqdm(uvp_tavg.spw_array):
    dk_multiplier = 2.0  # Delta k (spherical k-bin size) in units of the natural k_para spacing
    k_start_multiplier = 0.75  # center of the first spherical k bin, in units of Delta k
    dk = dk_multiplier * np.median(np.diff(uvp_tavg.get_kparas(spw)))
    kbins = np.arange(k_start_multiplier * dk, 2.5, dk)  # even spacing
    uvp_tavg_this_spw = uvp_tavg.select(spws=[spw], inplace=False)
    sph_avgs.append(hp.grouping.spherical_average(uvp_tavg_this_spw, kbins, dk, error_weights='P_N'))
    print(uvp_tavg.data_array[spw].shape, sph_avgs[-1].data_array[0].shape)
```
I get:

```
(801, 99, 2) (30, 45, 2)
(801, 91, 2) (30, 38, 2)
(801, 105, 2) (30, 40, 2)
(801, 135, 2) (30, 43, 2)
(801, 89, 2) (30, 27, 2)
(801, 81, 2) (30, 23, 2)
(801, 87, 2) (30, 24, 2)
(801, 130, 2) (30, 34, 2)
(801, 114, 2) (30, 29, 2)
(801, 139, 2) (30, 34, 2)
(801, 116, 2) (30, 27, 2)
(801, 61, 2) (30, 14, 2)
```
The input number of blts is the number of unique baselines (801), but after spherical averaging it doesn't reduce to 1... it reduces to 30. Probably related: np.unique(uvp_tavg_1_to_5.time_1_array).shape = (15,).
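To poke at where the 30 might come from, this is the kind of check I have in mind (just a diagnostic sketch; treating distinct (time_1, time_2) pairs as the grouping key is a guess on my part):

```python
import numpy as np

u = uvp_tavg_1_to_5
# How many distinct time stamps of each flavor sit on the blts axis
# after the incoherent time average?
print(np.unique(u.time_avg_array).size,
      np.unique(u.time_1_array).size,
      np.unique(u.time_2_array).size)
# Distinct (time_1, time_2) pairs; if this comes out to 30, it would line up
# with the blts size of the spherically averaged output above.
pairs = np.column_stack([u.time_1_array, u.time_2_array])
print(np.unique(pairs, axis=0).shape[0])
```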
However, if I force all the integrations onto the same time in a rather hacky way, by doing the following:

```python
# Hack: collapse every time/LST entry to a single (median) value so all
# baselines look simultaneous, then redo the (now trivial) time average.
uvp_tavg.time_avg_array[:] = np.median(uvp_tavg.time_avg_array)
uvp_tavg.time_1_array[:] = np.median(uvp_tavg.time_avg_array)
uvp_tavg.time_2_array[:] = np.median(uvp_tavg.time_avg_array)
uvp_tavg.lst_avg_array[:] = np.median(uvp_tavg.lst_avg_array)
uvp_tavg.lst_1_array[:] = np.median(uvp_tavg.lst_avg_array)
uvp_tavg.lst_2_array[:] = np.median(uvp_tavg.lst_avg_array)
uvp_tavg.average_spectra(time_avg=True, error_weights='P_N', error_field=['P_SN'], inplace=True)
```
and then rerun the spherical-averaging loop above, I get:
```
(801, 99, 2) (1, 45, 2)
(801, 91, 2) (1, 38, 2)
(801, 105, 2) (1, 40, 2)
(801, 135, 2) (1, 43, 2)
(801, 89, 2) (1, 27, 2)
(801, 81, 2) (1, 23, 2)
(801, 87, 2) (1, 24, 2)
(801, 130, 2) (1, 34, 2)
(801, 114, 2) (1, 29, 2)
(801, 139, 2) (1, 34, 2)
(801, 116, 2) (1, 27, 2)
(801, 61, 2) (1, 14, 2)
```
which is what I expect. It also goes much faster.
There should probably be an option on hp.grouping.spherical_average to handle this case, even if it's not the default behavior: some sort of assume_time_averaged=True...
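In the meantime, a helper along these lines captures what I'd want that option to do (a sketch only; spherical_average_assuming_tavg is a made-up name that just automates the time-flattening hack above before calling hp.grouping.spherical_average):

```python
import copy

import numpy as np
import hera_pspec as hp


def spherical_average_assuming_tavg(uvp, kbins, bin_widths,
                                    error_weights='P_N', error_field=None):
    """Hypothetical workaround, not part of hera_pspec: collapse all time/LST
    metadata to a single value (only sensible if `uvp` has already been
    incoherently time averaged), then spherically average."""
    uvp = copy.deepcopy(uvp)
    # Force every integration onto the same (median) time and LST.
    for attr in ('time_avg_array', 'time_1_array', 'time_2_array'):
        getattr(uvp, attr)[:] = np.median(uvp.time_avg_array)
    for attr in ('lst_avg_array', 'lst_1_array', 'lst_2_array'):
        getattr(uvp, attr)[:] = np.median(uvp.lst_avg_array)
    # Redo the (now trivial) time average, mirroring the hack above.
    uvp.average_spectra(time_avg=True, error_weights=error_weights,
                        error_field=error_field, inplace=True)
    return hp.grouping.spherical_average(uvp, kbins, bin_widths,
                                         error_weights=error_weights)
```

The per-spw loop above could then call this in place of hp.grouping.spherical_average.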