Initializing a DIMES Client in DataSpaces mode
File: `common_dataspaces.c`
Function: `int common_dspaces_init(int num_peers, int appid, void *comm, const char *parameters)`
Relevant code:
```c
#ifdef DS_HAVE_DIMES
        dimes_c = dimes_client_alloc(dcg);
        if (dimes_c == NULL) {
                uloga("%s(): failed to init DIMES.\n", __func__);
                return err;
        }
#endif
```
While the user is probably none the wiser, we are actually executing code from `dimes_client.c` (specifically the function `dimes_client_alloc`). Running DataSpaces in debug mode, one actually sees the messages:

```
'dc_alloc(3:2:45830:701)': init ok.
dimes_client_alloc(): OK.
```
While the logging is wrapped in a debug flag, the call to `dimes_client_alloc` happens either way: `DS_HAVE_DIMES` is universally set to 1 whenever DataSpaces is compiled with the `--enable-dimes` flag (see `config.log` after compiling: `#define DS_HAVE_DIMES 1`).
However, when the code is running in DataSpaces mode, we shouldn't need to be calling code for a DIMES client. As the code stands, it looks like we always allocate both the DataSpaces and DIMES clients when the user has compiled with DIMES, and then simply use `common_put_sync` for DIMES puts instead of plain `common_put`. In other words, if the build enables DIMES, DataSpaces and DIMES are always running together, with DIMES just waiting for local puts/updates.
Even if no code modifications are needed or desired, we should revisit the documentation of this behavior: do we want `--enable-dimes` to imply that any run of our code is running DataSpaces + DIMES together?