Handling of non-integer sparse time function coordinates with MPI
deckerla opened this issue · 1 comment
deckerla commented
Consider the following MFE:
import devito as dv
import numpy as np

nx = 12
ny = 12
nz = 12
# extent_x = nx-1
# extent_y = ny-1
# extent_z = nz-1

# First pass: extent nx-1, so the grid spacing is 1.
# Second pass: extent 1, so the grid spacing is 1/(nx-1).
for extent in (nx-1, 1):
    extent_x = extent
    extent_y = extent
    extent_z = extent
    g = dv.Grid(shape=(nx, ny, nz), extent=(extent_x, extent_y, extent_z))
    coords = np.zeros((nx*ny, 3))
    cx = np.zeros(nx*ny)
    cy = np.zeros(nx*ny)
    cz = np.zeros(nx*ny)
    dx = extent_x/(nx-1)
    dy = extent_y/(ny-1)
    dz = extent_z/(nz-1)
    # one receiver per (x, y) grid location, all at depth z = dz
    for ix in range(nx):
        my_x = ix*dx
        for iy in range(ny):
            indx = ix*ny + iy
            my_y = iy*dy
            cx[indx] = my_x
            cy[indx] = my_y
            cz[indx] = dz
    coords[:, 0] = cx
    coords[:, 1] = cy
    coords[:, 2] = cz
    (x, y, z) = g.dimensions
    f = dv.TimeFunction(name="f", grid=g)
    stf = dv.SparseTimeFunction(name="stf", npoint=nx*ny, nt=2, grid=g,
                                coordinates=coords)
    # f is 1 everywhere, so every interpolated value should be ~1
    op = dv.Operator([dv.Eq(f, 1), stf.interpolate(expr=f)])
    op.apply()
    print(stf.data[1, :])
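Since f is set to 1 everywhere and every receiver sits (mathematically) on a grid point, every interpolated value should come back as approximately 1 regardless of the spacing or the domain decomposition. A check along these lines (a hypothetical addition at the end of the loop body, not part of the original script) makes the failure explicit; it passes serially but trips on the rank owning the bad points under MPI:

    # hypothetical check: a constant field interpolated at on-grid receivers
    # must give 1 everywhere, up to float32 rounding
    assert np.allclose(stf.data[1, :], 1.0, atol=1e-5)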
Running without MPI produces this (output from both passes of the loop):
cvx@cbox-lukedecker-mpi:~/.julia/dev/Devito/test$ DEVITO_MPI=0 python testrecs.py
Operator `Kernel` ran in 0.03 s
[1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1.
1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1.
1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1.
1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1.
1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1.
1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1.]
Operator `Kernel` ran in 0.01 s
[1. 1. 1. 1. 1. 1. 1.
1. 1. 1. 1. 1.0000004 1. 1.
1. 1. 1. 1. 1. 1. 1.
1. 1. 1.0000004 1. 1. 1. 1.
1. 1. 1. 1. 1. 1. 1.
1.0000004 1. 1. 1. 1. 1. 1.
1. 1. 1. 1. 1. 1.0000004 1.
1. 1. 1. 1. 1. 1. 1.
1. 1. 1. 1.0000004 1. 1. 1.
1. 1. 1. 1. 1. 1. 1.
1. 1.0000004 1. 1. 1. 1. 1.
1. 1. 1. 1. 1. 1. 1.0000004
1. 1. 1. 1. 1. 1. 1.
1. 1. 1. 1. 1.0000004 1. 1.
1. 1. 1. 1. 1. 1. 1.
1. 1. 1.0000004 1. 1. 1. 1.
1. 1. 1. 1. 1. 1. 1.
1.0000004 1. 1. 1. 1. 1. 1.
1. 1. 1. 1. 1. 1.0000004 1.0000004
1.0000004 1.0000004 1.0000004 1.0000004 1.0000
Running with MPI, everything looks fine for the first pass, where each dimension's spacing is 1. But for the second pass, where the extent is 1 (the Grid default) so the spacing is 1/(nx-1), things get strange. The 1.0000004 entries also show up in the serial run and are just float32 rounding; the real problem is that one rank returns ~3e-07 or exactly 0 for a whole block of receivers:
cvx@cbox-lukedecker-mpi:~/.julia/dev/Devito/test$ DEVITO_MPI=1 mpirun -n 2 python testrecs.py
Operator `Kernel` ran in 1.65 s
Operator `Kernel` ran in 1.65 s
[1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1.
1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1.
1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1.]
[1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1.
1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1.
1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1.]
Operator `Kernel` ran in 2.28 s
Operator `Kernel` ran in 2.28 s
[1.0000000e+00 1.0000000e+00 1.0000000e+00 1.0000000e+00 1.0000000e+00
1.0000000e+00 1.0000000e+00 1.0000000e+00 1.0000000e+00 1.0000000e+00
1.0000000e+00 1.0000004e+00 2.9802322e-07 2.9802322e-07 2.9802322e-07
2.9802322e-07 2.9802322e-07 2.9802322e-07 2.9802322e-07 2.9802322e-07
2.9802322e-07 3.2782555e-07 2.9802322e-07 3.2782566e-07 0.0000000e+00
0.0000000e+00 0.0000000e+00 0.0000000e+00 0.0000000e+00 0.0000000e+00
0.0000000e+00 0.0000000e+00 0.0000000e+00 0.0000000e+00 0.0000000e+00
0.0000000e+00 0.0000000e+00 0.0000000e+00 0.0000000e+00 0.0000000e+00
0.0000000e+00 0.0000000e+00 0.0000000e+00 0.0000000e+00 0.0000000e+00
0.0000000e+00 0.0000000e+00 0.0000000e+00 0.0000000e+00 0.0000000e+00
0.0000000e+00 0.0000000e+00 0.0000000e+00 0.0000000e+00 0.0000000e+00
0.0000000e+00 0.0000000e+00 0.0000000e+00 0.0000000e+00 0.0000000e+00
1.0000000e+00 1.0000000e+00 1.0000000e+00 1.0000000e+00 1.0000000e+00
1.0000000e+00 1.0000000e+00 1.0000000e+00 1.0000000e+00 1.0000000e+00
1.0000000e+00 1.0000004e+00]
[1. 1. 1. 1. 1. 1. 1.
1. 1. 1. 1. 1.0000004 1. 1.
1. 1. 1. 1. 1. 1. 1.
1. 1. 1.0000004 1. 1. 1. 1.
1. 1. 1. 1. 1. 1. 1.
1.0000004 1. 1. 1. 1. 1. 1.
1. 1. 1. 1. 1. 1.0000004 1.
1. 1. 1. 1. 1. 1. 1.
1. 1. 1. 1.0000004 1. 1. 1.
1. 1. 1. 1. 1. 1. 1.
1. 1.0000004]
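My suspicion is that this comes down to how the physical coordinates are mapped back to grid indices (and hence to owning ranks) when the spacing 1/(nx-1) has no exact binary representation. Here is a minimal arithmetic sketch in plain numpy, not Devito internals, just the rounding involved (dx32 stands in for a single-precision spacing):

import numpy as np

nx = 12
dx64 = 1.0 / (nx - 1)                        # spacing used above to build the coordinates
dx32 = np.float32(1.0) / np.float32(nx - 1)  # the same spacing in single precision
ix = np.arange(nx)
coords = ix * dx64                           # the coordinates handed to Devito
recovered = coords.astype(np.float32) / dx32 # naive coordinate -> grid index recovery
print(recovered - ix)                        # small (mostly nonzero) residuals

If residuals like these get rounded the wrong way right at a rank boundary, a point would be looked up on the wrong rank, which would be consistent with one rank returning ~3e-07 or 0 for a contiguous block of receivers above.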
deckerla commented
Here's some environment info:
cvx@cbox-lukedecker-mpi:~/.julia/dev/Devito/test$ which mpirun
/opt/nvhpc/comm_libs/mpi/bin/mpirun
cvx@cbox-lukedecker-mpi:~/.julia/dev/Devito/test$ echo $DEVITO_ARCH
nvc
cvx@cbox-lukedecker-mpi:~/.julia/dev/Devito/test$ mpirun --version
mpirun (Open MPI) 3.1.5
Report bugs to http://www.open-mpi.org/community/help/