
v8+ to v7: porting remaining features

PMeira opened this issue

Right now v8 contains many features that neither I nor my group would ever use. The list of features in OpenDSS is extensive, so that's expected. Still, porting the remaining features will allow a proper comparison to decide whether we should completely drop v7 and use v8 exclusively.

If porting nearly everything is feasible, objective tests can be run to decide this. Ideally, porting the core PM code would allow using an existing, mature scheduler for distributed computing.

  • Explicitly enumerate remaining differences (mostly very new code and PM-related)
  • Assess the feasibility of porting the code to v7

This will be evaluated after the next official OpenDSS release, since there are still ongoing changes in the SVN repository.

Besides the GUI changes and the actor management, the most notable differences between v7 and v8 seem to be:

  • some small differences (mostly small corrections from older revisions) that I already mentioned somewhere else
  • Executive/Executive: ClearAll, AllActors
  • Executive/ExecOptions: ADiakoptics option
  • Executive/ExecCommands: Expose ADiakoptics and related commands
  • Meters/Monitor: ConcatenateReports
  • Meters/EnergyMeter: files
  • Common/DSSGlobals: files
  • Common/Solution: actual actor threads (TSolver)

To make it easier to compare the code of the two versions, I already ported some code from v8 to v7 today. To simplify the code comparison even further, ConcatenateReports and the memory mapping of files could be ported too, even if they are not useful right away.

The diakoptics algorithm itself can be ported too, but not the code that runs the actual solution. I don't like how METIS is integrated right now (by running an external executable), but that can easily be improved later. METIS can be built using CMake -- since we already use CMake to build DSS C-API's version of KLUSolve, it should be easy to integrate it into the build process.
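
As a rough illustration of what linking METIS directly could look like (instead of launching the external executable), a minimal Pascal binding to the METIS 5.x C API might be sketched as below. METIS_PartGraphKway is the real METIS entry point; the type mapping assumes the default 32-bit idx_t/real_t build, the library name differs by platform, and PartitionGraph is just a hypothetical wrapper, not existing DSS C-API or KLUSolveX code:

    type
        // Matches METIS idx_t/real_t for the default IDXTYPEWIDTH/REALTYPEWIDTH = 32 build
        TMetisIdx = Int32;
        TMetisReal = Single;
        PMetisIdx = ^TMetisIdx;
        PMetisReal = ^TMetisReal;

    // Direct binding to the METIS 5.x C API; 'metis' stands for the shared library
    // name, which varies by platform (libmetis.so, metis.dll, ...).
    function METIS_PartGraphKway(nvtxs, ncon, xadj, adjncy, vwgt, vsize, adjwgt,
        nparts: PMetisIdx; tpwgts, ubvec: PMetisReal; options: PMetisIdx;
        objval: PMetisIdx; part: PMetisIdx): Integer; cdecl; external 'metis';

    // Hypothetical wrapper: partitions a graph given in CSR form (XAdj, Adjncy)
    // into NumParts parts; Part receives the zero-based part index of each vertex.
    function PartitionGraph(var XAdj, Adjncy: array of TMetisIdx;
        NumParts: TMetisIdx; var Part: array of TMetisIdx): Boolean;
    var
        nvtxs, ncon, objval: TMetisIdx;
    begin
        nvtxs := Length(XAdj) - 1;  // number of vertices (e.g. buses)
        ncon := 1;                  // single balancing constraint
        Result := METIS_PartGraphKway(@nvtxs, @ncon, @XAdj[0], @Adjncy[0],
            nil, nil, nil, @NumParts, nil, nil, nil, @objval, @Part[0]) = 1; // METIS_OK = 1
    end;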

If anyone finds this comment by chance and wonders why I'm looking into this, there are two main reasons:

  • The v8 codebase is not terrible or anything, but it introduces a lot of cognitive overhead and potential traps that I'm not sure are necessary.
  • Since my target is distributed computing, not only running in a single machine, I have some ideas for retargeting OpenDSS to use processes instead of threads.

Besides migrating the actors to dedicated processes, using thread-local variables in select places might be an alternative. Creating processes on Windows has a higher overhead that should be considered but, since the use of parallel computing itself implies a heavy/long computation, it should not be a problem as long as the processes are reused instead of killed for every run.
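
As a sketch of the thread-local idea (TDSSCircuit and TSolutionObj are the existing OpenDSS classes, but the variable names and the simplified no-ActorID Solve call below are only illustrative, assuming a v7-style signature):

    threadvar
        LocalCircuit: TDSSCircuit;    // this thread's circuit; no ActorID-indexed lookup
        LocalSolution: TSolutionObj;  // this thread's solution object

    procedure SolveLocalSnapshot;
    begin
        // Each solver thread (or worker process) touches only its own instances,
        // so no ActorID bookkeeping or cross-actor indexing is needed here.
        LocalSolution := LocalCircuit.Solution;
        LocalSolution.Solve;
    end;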

This is on hold, waiting for the upstream diakoptics code to stabilize.

Recently introduced changes not yet ported:

  • 0d41190 - "UCF microgrid controller added in V8. Validation pending"
  • 550c1a2 - "New charging mode in storage controller implemented in V8. The new control mode provided by Valentin Rigoni"

EDIT: I partially reverted 0d41190, as it was causing severe issues. I'll merge the changes back when they're fixed/released upstream.

Diakoptics and the parallel features in v7 depend mostly on the changes described below.

Besides the new features in the official OpenDSS that require third-party or closed-source tools, 108f7f9 adds the base for the new implementation. We should enable the PM functions as soon as the v0.13 branch is created, which will toggle several features and move to KLUSolveX. EDIT: some features were moved back to v0.12.

PR with the general PM functions coming soon. The code related to diakoptics was rewritten recently in the official OpenDSS codebase:

  • document updated in Nov/2021: https://sourceforge.net/p/electricdss/code/3364/tree/trunk/Version8/Distrib/Doc/A-Diakoptics_Suite.pdf
  • for an overview:
    Function Solve_Diakoptics(): Integer;
    var
        i, myRow: Integer;
        Vpartial: TSparse_Complex;
    Begin
        {Space left empty to implement the simplified Diakoptics algorithm}
        With ActiveCircuit[1], ActiveCircuit[1].Solution do
        Begin
            // Solves the partial systems to find the voltages at the edges of the sub-systems
            SendCmd2Actors(SOLVE_AD1);
            Vpartial := TSparse_Complex.Create;
            Vpartial.sparse_matrix_Cmplx(Contours.NCols, 1);
            // Does the voltage diff calculation using the partial results
            myRow := 0;
            for i := 0 to (Contours.NCols - 1) do
            begin
                VPartial.insert(i, 0, csub(NodeV^[Contours.CData[myRow].Row + 1], NodeV^[Contours.CData[myRow + 1].Row + 1]));
                myRow := myRow + 2;
            end;
            // Loads the partial solution considering the previous iteration
            Vpartial := Y4.multiply(VPartial);
            Ic := Contours.multiply(VPartial); // Calculates the new Injecting Currents
            // Commands the actors to complement the solution
            SendCmd2Actors(SOLVE_AD2);
        End;
        ActiveCircuit[1].Issolved := True;
        ActiveCircuit[1].BusNameRedefined := False;
        if SolutionAbort then ActiveCircuit[1].Issolved := False;
        ActiveActor := 1; // Returns the control to Actor 1
        Result := 0;
        Vpartial.Free;
    End;

Diakoptics will be left disabled for a while, since there are a few points of the official implementation that don't fit well with our approach in DSS C-API, and there have been no explicit user requests about it. I believe we have some time to plan a different implementation to go with our different PM approach, even if this happens after we migrate to another programming language.

Some points to handle to finish porting:

  • The METIS EXE is used. This was already addressed here in 2019 through KLUSolveX, but it is still pending more testing (and now some updates).
  • The subcircuits are always written to disk -- I don't think this is reasonable for large scale usage. The rewritten property system (also coming in the PR) should allow us to copy or move the circuit elements without too much trouble, but even generating memory streams by default (with disk as an option) would already be a bit better; see the sketch after this list.
  • All actors access the voltage vector of the initial thread, and this requires updating the code in all models to do so (something similar is done with ActiveCircuit[1].Ic, but that one is more clearly isolated), e.g.
    if not ADiakoptics or (ActorID = 1) then
        Vterminal^[i] := NodeV^[NodeRef^[i]]
    else
        Vterminal^[i] := VoltInActor1(NodeRef^[i]);
  • Since even in the current codebase we've been pushing some code to the C++ level in KLUSolveX (and ultimately Eigen), I'd very much like to drop Sparse_Math.pas. Eigen and other libraries have far more developer resources, are better tested and more performant, and GraphBLAS implementations have been gaining ground.
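
For the subcircuit output mentioned above, a minimal sketch of the memory-stream idea could look like the following (ExportSubCircuit and its parameters are hypothetical names, not existing OpenDSS/DSS C-API code; the real export code currently writes text files directly):

    uses
        Classes;

    // Sketch only: keep the generated sub-circuit script in memory by default,
    // saving it to disk only when a file name is actually given.
    function ExportSubCircuit(ScriptLines: TStrings;
        const FileName: String = ''): TMemoryStream;
    begin
        Result := TMemoryStream.Create;
        ScriptLines.SaveToStream(Result);   // in-memory by default
        Result.Position := 0;
        if FileName <> '' then
            Result.SaveToFile(FileName);    // disk output kept as an option
    end;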

Better isolation and explicit points of synchronization should also allow using distributed computing and multi-processing instead of just multi-threading. It might be possible to implement the whole method as a plug-in, which would be great for maintainability.

If there are enough differences in the implementation, a document should be added later to the new docs repository detailing our approach.