ipfs-inactive/interface-js-ipfs-core

Testing in the JS IPFS Project

travisperson opened this issue · 7 comments

I want to start a discussion around testing, particularly around some of the JavaScript projects (js-ipfs, interface-ipfs-core, js-ipfs-api): really anything that requires a running IPFS daemon (i.e. consumes js-ipfsd-ctl).

I'm curious if there are any particular testing strategies someone is trying to drive in the js-ipfs project at the moment, or if it's more ad hoc and we can start to discuss the way we want to approach testing here in this issue.

In the js-ipfs project there are four main testing suites that cover the four main interfaces of the project.

  • core
  • http
  • gateway
  • cli

I want to ultimately help contributors know when they should write tests in each suite, and provide documentation and tools to help everyone write great tests that provide value to the community and project.

All of the test suites currently require running a full node, either by instantiating the core module or by starting a daemon and talking to it through js-ipfs-api.

A great depiction of this can be found in the js-ipfs readme.

I took some time and benchmarked the node:* test commands for the js-ipfs project.

| Test Run | core   | http    | gateway | cli     | Total   |
|----------|--------|---------|---------|---------|---------|
| 1        | 24.40s | 113.33s | 3.88s   | 491.06s | 632.67s |
| 2        | 23.52s | 114.36s | 5.13s   | 501.95s | 644.96s |
| 3        | 24.39s | 114.07s | 6.98s   | 491.91s | 637.35s |
| 4        | 24.98s | 112.91s | 5.07s   | 491.04s | 634.00s |
| 5        | 23.86s | 113.06s | 5.59s   | 490.48s | 632.99s |
| 6        | 23.86s | 114.57s | 5.29s   | 492.69s | 636.41s |
| 7        | 22.40s | 113.37s | 8.79s   | 493.52s | 638.08s |
| 8        | 23.83s | 113.34s | 9.97s   | 498.46s | 645.60s |
| 9        | 21.07s | 114.43s | 5.66s   | 482.90s | 624.06s |
| 10       | 22.11s | 114.49s | 4.60s   | 506.10s | 647.30s |
| Avg      | 23.44s | 113.80s | 6.10s   | 494.01s | 637.34s |

From the table above we can see that the cli tests account for the large majority of the time (though about 180s of that is consumed by just three tests; more info @ ipfs/jenkins#93).

The high test time makes sense given some details about what the cli tests are doing.

  • Most are executed twice: once online (against a daemon, through the http-api) and once offline (directly against core)
  • Commands are executed through a shell, spawning a new process for each command

Some questions

The js-ipfs project has some sharness tests, though they haven't really been touched for years. It would appear that running the CLI tests through node is the standard for the project. Should we remove the sharness tests?

Both the http-api and core are primarily tested through the interface-ipfs-core project. There are also some independent tests written out in the core and http-api folders. Should we strive to migrate these tests to the interface-ipfs-core project over time as features settle?

Do we see value in the cli tests as they are written at the moment, and do we believe we want to keep moving forward with the general approach currently loosely laid out in the tests?

/cc @diasdavid @victorbjelkholm

I will be responding shortly to this issue with some of my own thoughts.

Thanks for writing this analysis, @travisperson! I'm really excited to have some help on enhancing our testing game to boost dev productivity! (Please save my days from release dances!)

The js-ipfs project has some sharness tests, though they haven't really been touched for years. It would appear that running cli test through node is the standard for the project. Should we remove the sharness tests?

The original goal was to extract the sharness tests from go-ipfs into a separate package, so that those tests become part of the "compliance and hardness tests" of an IPFS implementation, testing it from the CLI API.

@chriscool started that work and @victorbjelkholm had an OKR last quarter with some developments (0.4). @victorbjelkholm mind sharing what was achieved so that no duplicated work is done?

Both the http-api and core are primarily tested through the interface-ipfs-core project. There are also some independent tests written out in the core and http-api folders. Should we strive to migrate these tests to the interface-ipfs-core project over time as features settle?

It depends on "what gets done first". Let me explain:

Context:

  • The interface-* tests are heavily inspired by the pattern set by https://github.com/maxogden/abstract-blob-store, where the lack of interfaces in JavaScript is mitigated by a battery of tests that can be run against multiple implementations to assert that the implementation behaves as expected.
  • This was super useful in the beginning, as we have a goal (which we will maintain) of keeping js-ipfs core and js-ipfs-api on the same API
  • Eventually it evolved to become the SPEC of the Core API and now we have all the spec files and examples.md
  • However, it is clear that mixing tests and interface definition is a lot of work, and it shouldn't require this much time to be spent manually polishing and updating a spec + tests + 2x implementations.

What I mean by "what gets done first":

  • With the Flow adoption -- ipfs/js-ipfs#1260 -- we will be able to generate docs + interface definitions automatically (bliss, if implemented correctly). It will supersede ipfs/js-ipfs#651
  • Until the above is finished, diluting the interface-ipfs-core tests is kind of premature

What I see happening in the future:

  • interface-ipfs-core/js/src tests will become the core-api testing suite
  • interface-ipfs-core/spec will be auto generated
  • both js-ipfs and js-ipfs-api will still need custom tests, as there are some nuances (for example, js-ipfs-api checking that the HTTP headers are set correctly, because it is an HTTP client, or testing certain features that are only exposed in go-ipfs).

However, here is a simple goal that can give us a lot of bang for little buck: parallelize the test runs. Right now, CI runs the CLI + HTTP + Core + Browser tests all in one worker for each runtime. This matrix should be expanded: the CLI/HTTP/Core/Browser tests don't depend on each other, so they can all run in parallel, reducing total test time to that of the slowest suite. Also, as you point out, the tests that take 180s just check daemon kill behavior; these should run in a separate container, so that users don't have to wait for them before getting feedback from all the other tests.
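
The claim that parallel runs cut total time down to the slowest suite can be modelled with a toy runner. The suite names and durations below loosely mirror the benchmark table (scaled to milliseconds); this is not the actual CI configuration.

```javascript
// Toy model: independent suites run serially vs concurrently.
const delay = ms => new Promise(resolve => setTimeout(resolve, ms))

// Stand-ins for the four independent suites, with toy durations in ms.
const suites = { core: 30, http: 110, gateway: 10, cli: 490 }

// Serial: wall-clock time is the SUM of all suite durations.
async function runSerial () {
  const start = Date.now()
  for (const ms of Object.values(suites)) await delay(ms)
  return Date.now() - start
}

// Parallel: wall-clock time is ~ the SLOWEST suite (cli here).
async function runParallel () {
  const start = Date.now()
  await Promise.all(Object.values(suites).map(delay))
  return Date.now() - start
}

runSerial().then(serial =>
  runParallel().then(parallel =>
    console.log({ serial, parallel })))
```

With the real numbers, that is the difference between ~637s and ~494s per run, before even touching the 180s daemon-kill outliers.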

Do we see value in the cli tests as they are written at the moment, and do we believe we want to keep moving forward with the general approach currently loosely laid out in the tests?

Not really; what I want is for the sharness tests to be shared among implementations.

Yeah the sharness tests have been extracted from go-ipfs and put in this repo:

https://github.com/chriscool/ipfs-sharness-tests

The goal is to run those tests in this repo each time a commit is made in either the go-ipfs or the js-ipfs repo.

A few WIP branches for making tests in js-ipfs faster:

What's the state of:

  • Having a repo for sharness testing?
  • Running CLI, HTTP and Core tests in parallel in CI?

Per @chriscool, there is a repo on his account for the extracted sharness tests. We will need to move / fork the project into the IPFS organization though.

I started to work pretty extensively on the sharness tests and got everything working last week, but it looks like @chriscool also did quite a bit of work towards that at the same time.

@chriscool it would probably be good for us to sync up sometime this week if you have time.

Re parallel tests, being worked on here: ipfs-inactive/jenkins#93 (comment)

This issue is quite stale, our testing regime has altered significantly since it was opened. Please open an issue on ipfs/js-ipfs if further exploration is needed.