sd.js is a modern, high-performance, open-source system dynamics engine for today's web and tomorrow's. It runs in all major browsers (Edge, Chrome, Firefox, Safari) and on the server under node.js.
Running a model, displaying a stock and flow diagram, and visualizing results on that diagram are all straightforward:
```javascript
var drawing, sim;

sd.load('/predator_prey.xmile', function(model) {
  // create a drawing in an existing SVG on the current page. We
  // can have multiple diagrams for the same model, with different
  // views of the same underlying data.
  drawing = model.drawing('#diagram');
  // create a new simulation. There can be multiple independent
  // simulations of the same model, running in parallel.
  sim = model.sim();
  // change the size of the initial predator population
  sim.setValue('lynx', 10000);
  sim.runToEnd().then(function() {
    // after completing a full run of the model, visualize the
    // results of this simulation in our stock and flow diagram.
    drawing.visualize(sim);
  }).done();
});
```
JavaScript code is dynamically generated and evaluated using modern web browsers' native just-in-time (JIT) compilers. This means simulation calculations run as native machine code rather than in an interpreter, with industry-leading speed as a result.
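As a rough illustration of the approach (a sketch, not sd.js's actual generated code; the equation and variable names are hypothetical), an equation can be compiled to a JavaScript source string and turned into a function with `new Function`, which the engine's JIT then optimizes like any hand-written code:

```javascript
// Sketch: compile a model equation to a function at runtime. The
// equation, variable names, and data layout here are hypothetical.
var src = 'return vars.birth_fraction * vars.hares;';
var births = new Function('vars', src);

births({ birth_fraction: 0.5, hares: 1000 }); // 500
```

Because the generated source is an ordinary string, the resulting function is indistinguishable from statically written code as far as the JIT is concerned.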
sd.js code is offered under the MIT license; see LICENSE for details. sd.js is built on Snap.svg (Apache licensed) and mustache.js (MIT licensed). These permissive licenses mean you can use and build upon sd.js without concern for royalties.
sd.js is built on XMILE, the emerging open standard for representing system dynamics models. When the standard is complete, sd.js will aim for full conformance, making it possible to use models created in desktop software platforms on the web, and models created or modified in sd.js on the desktop.
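For illustration, a minimal XMILE document might look like the following (a hand-written sketch: the tag names follow the draft standard, but the model itself is hypothetical):

```xml
<xmile version="1.0" xmlns="http://docs.oasis-open.org/xmile/ns/XMILE/v1.0">
  <header>
    <name>hares</name>
    <vendor>example</vendor>
  </header>
  <sim_specs>
    <start>0</start>
    <stop>100</stop>
    <dt>0.25</dt>
  </sim_specs>
  <model>
    <variables>
      <stock name="hares">
        <eqn>1000</eqn>
        <inflow>births</inflow>
      </stock>
      <flow name="births">
        <eqn>hares * birth_fraction</eqn>
      </flow>
      <aux name="birth_fraction">
        <eqn>0.04</eqn>
      </aux>
    </variables>
  </model>
</xmile>
```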
GNU Make, node.js, and yarn are required to build sd.js, as are some standard Unix utilities. The build should work on Windows with yarn. Once those are installed on your system, you can simply run `make` (which ensures `yarn install` has run and wraps `yarn build`) to build the library for node and the browser:
```
[bpowers@vyse sd.js]$ make
  YARN
yarn install v1.1.0
[1/5] Validating package.json...
[2/5] Resolving packages...
success Already up-to-date.
Done in 0.36s.
  YARN  sd.js
yarn run v1.1.0
$ npm-run-all build:pre build:runtime0 build:runtime1 -p build:lib build:build -s build:browser
build/sd.js → sd.js...
created sd.js in 1.6s
Done in 7.35s.
```
Run `make test` to run unit tests, and `make rtest` to run regression tests against the XMILE models in the SDXOrg/test-models repository:
```
[bpowers@vyse sd.js]$ make test rtest
  TS    test
  TEST
  lex
    ✓ should lex a
[...]
    ✓ should lex "hares" * "birth fraction"
  32 passing (16ms)
  RTEST test/test-models/tests/number_handling/test_number_handling.xmile
  RTEST test/test-models/tests/lookups/test_lookups_no-indirect.xmile
  RTEST test/test-models/tests/logicals/test_logicals.xmile
  RTEST test/test-models/tests/if_stmt/if_stmt.xmile
  RTEST test/test-models/tests/exponentiation/exponentiation.xmile
  RTEST test/test-models/tests/eval_order/eval_order.xmile
  RTEST test/test-models/tests/comparisons/comparisons.xmile
  RTEST test/test-models/tests/builtin_min/builtin_min.xmile
  RTEST test/test-models/tests/builtin_max/builtin_max.xmile
  RTEST test/test-models/samples/teacup/teacup.xmile
  RTEST test/test-models/samples/teacup/teacup_w_diagram.xmile
  RTEST test/test-models/samples/bpowers-hares_and_lynxes_modules/model.xmile
  RTEST test/test-models/samples/SIR/SIR.xmile
```
The standalone sd.js library for use in the browser is available at sd.js and includes all required dependencies (Snap.svg and Mustache). For use under node, `require('sd.js')` simply uses the CommonJS modules built in the `lib/` directory from the original TypeScript sources.
- ability to save XMILE docs
- ignore dt 'reciprocal' on v10 and < v1.1b2 STELLA models
- intersection of arc w/ rectangle for takeoff from stock
- intersection of arc w/ rounded-rect for takeoff from module
- logging framework
- parse equations - should be pretty similar to XMILE, except logical
  ops are `:NOT:`, `:OR:`, etc.
- determine types (stocks and flows aren't explicitly defined as
  such; stocks can be determined by use of the `INTEG` function, and
  flows are variables that are referenced inside of `INTEG` functions)
- read display section
  - read style
  - convert elements to XMILE display concepts
- diagram changes (minimal)
- figure out if it makes sense to do single-dimensional first, or multi-dimensional from the start
- parser:
  - array reference/slicing
  - transpose operator
- semantic analysis/validation:
  - validate indexing
  - validate slicing
  - transpose using dimension names
  - transpose using positions
  - array slicing (`A[1, *]`)
- optimization?
  - the simple thing to do is have a nest of for loops for each individual variable. This is simple, but I worry it will be slow for large models (which are very common users of arrays). If we're doing operations on multiple variables with the same dimensions in a row, we can merge them into a single loop. This is straightforward logically, but there is no optimization framework in place yet, so one would need to be added.
- codegen:
  - apply-to-all equations
    - nested for loop
  - non-A2A equations
  - non-A2A graphical functions
  - array slicing (`A[1, *]`)
  - transpose using dimension names
  - transpose using positions
  - apply-to-all equations
- runtime:
  - know about defined dimensions + their subscripts
  - allocate correct amount of space for arrayed variables
  - array builtins: `MIN`, `MEAN`, `MAX`, `RANK`, `SIZE`, `STDDEV`, `SUM`
  - be able to enumerate all subscripted values for CSV output
  - be able to return results for `arrayed_variable[int_or_named_dimension]`
  - right now, the runtime is pretty dead-simple. Every builtin
    function expects one or more numbers as input. With the array
    builtins, this is no longer the case.
  - index into arrays with non-constant offsets:
    `constants[INT(RANDOM(1, SIZE(foods)))]`
  - create slices of arrays: `SUM(array[chosen_dim, *])`, where
    `chosen_dim` is an auxiliary variable.
  - this means runtime type checking, and I think runtime memory
    allocation (right now memory is allocated once, in one chunk, when
    a simulation is created, which is fast and optimal).
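To make the apply-to-all case above concrete, here is a hedged sketch of what the nested-loop codegen might produce for a one-dimensional equation (the variable names, offsets, and flat memory layout are all hypothetical, not sd.js's actual generated code):

```javascript
// Sketch of generated code for an apply-to-all flow over one
// dimension with 3 subscripts. Offsets into the flat data array
// and all names are hypothetical.
var OFF_HARES = 0;    // hares[1..3]
var OFF_BIRTHS = 3;   // births[1..3]
var OFF_FRACTION = 6; // birth_fraction[1..3]
var N_REGIONS = 3;

function calcFlows(data) {
  // births[region] = hares[region] * birth_fraction[region]
  for (var i = 0; i < N_REGIONS; i++) {
    data[OFF_BIRTHS + i] = data[OFF_HARES + i] * data[OFF_FRACTION + i];
  }
}
```

For multi-dimensional variables the generated code would nest one loop per dimension; merging adjacent loops over the same dimensions is the optimization question raised above.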
After updating `package.json` and installing all dependencies using `pnpm`, I can build the library using the provided npm scripts:

- `npm run build:pre` to create the folders
- `npm run build:runtime0`
- `npm run build:runtime1`
- `npm run build:lib`
- `npm run build:build`
On Windows, I don't have `make` installed, so I've used WSL v2 (Windows Subsystem for Linux) to run the regression tests. After installing nodejs on WSL using:

```
curl -sL https://deb.nodesource.com/setup_15.x | sudo -E bash -
sudo apt-get install -y nodejs
```

the tests can be run as follows (NOTE: I needed to change the line endings of `./bin/mdl.js` to Unix. One way is to open the file in vscode, open settings, search for `files.eol`, set it to `\n`, and save the file; afterwards, set it back to `auto` again):

```
make test rtest
```