scala/scala-dev

Scala unit testing library proposal

lrytz opened this issue Β· 64 comments

lrytz commented

Authors: @dwijnand, @eed3si9n

This proposal seeks to work with the community, particularly the authors and maintainers of testing libraries, in order to introduce a basic, zero-dependency unit testing library to the Scala project. Such a library would live in the scala/scala repo and be a part of the Scala distribution, sharing the same Maven organisation id (org.scala-lang), version, release cadence, and bidirectional binary compatibility as the rest of the Scala distribution.

This library could therefore be used in projects that currently must fall back on JUnit, such as the Scala project itself and the Scala.js project.

Additionally, we hope that by working with the community we may find a common path forward to overcome some of the fragmentation in testing styles and libraries that is currently present in the Scala community.

Background

When Scala started gaining popularity there was a heavy emphasis on its extensibility via DSLs. As such, it attracted a cultural import from other language communities, including the notion of behavior-driven development (BDD). The two most used Scala test frameworks today, ScalaTest and specs2, were created under this trend, creating English-like DSLs.

As another cultural import, the notion of property-based testing has been gaining traction over the years, with ScalaCheck as the front runner. Most projects that use ScalaCheck use it through ScalaTest or specs2, possibly via Discipline, with a small number of projects using ScalaCheck alone or ScalaCheck with Claimant. There are also scalaprops and Hedgehog which are alternatives to ScalaCheck, but not as popular.

In recent years, Scala has started to shed some of these older influences and begun to form its own culture, and there's been a swing-back movement towards minimalistic unit test frameworks that do not employ "should"-DSLs. This demand is partly driven by our diverse need to cross-build across multiple Scala versions as well as multiple platforms (JVM, Scala.js, and Scala Native). Some projects (particularly the Scala, Scala.js and Scala Native projects themselves) use JUnit for this purpose. uTest and Minitest, as well as the revival of Expecty for power asserts, are examples from this post-BDD, neo-unit-test era; however, these aren't as popular.

Unify how Scala code is unit tested

One of the hopes of this proposal is that a standard unit testing library would unify how Scala code is tested, both in documentation (in the official docs, books, blogs/articles/forums/websites) and in projects.

Where books (or other forms of education) teaching Scala previously had to either (a) choose which testing library to document and/or use, (b) document and/or use multiple options, or (c) avoid the topic altogether, with this proposal they could choose to document and/or use the official unit testing library, and perhaps just mention the other options in the ecosystem.

Also, projects that previously had limited choices (e.g. only JUnit) would be able to use the same library the rest of the ecosystem uses.

Provide a unit testing library for zero-dependency libraries and for every Scala version

When bootstrapping the Scala ecosystem there is a problem of inter-dependencies between core Scala projects and testing libraries, with ScalaCheck requiring Scala.js and Scala Native, and ScalaTest and specs2 requiring ScalaCheck. This puts a few projects into the critical path (the ecosystem depends on the release of Scala.js, Scala Native, and ScalaCheck before it can get its first non-JUnit unit testing library) and in awkward positions (for example, scala-xml cannot test itself with ScalaTest because it's a dependency of ScalaTest). So, to avoid cyclical dependencies when bootstrapping, some projects choose to use JUnit.

But the use of JUnit for unit testing isn't free, as both Scala.js and Scala Native had to provide special support for it in order for it to actually work, support that will need maintenance as JUnit continues to evolve.

Some libraries, often to avoid problems when bootstrapping, choose to (or must) have zero dependencies, which means they must either use JUnit or create their own testing library. The Scala, Scala.js and Scala Native projects, for example, use JUnit, which isn't what most of the rest of the ecosystem uses.

The inter-dependency between Scala projects also means that some may consider every non-release, published version of Scala for upcoming major versions to be effectively unusable, as there are no non-JUnit unit testing libraries available for it. Specifically, this includes every artifact published from branch builds and pull request validation. This is why the Scala, Scala.js, and Scala Native (and Dotty) projects use JUnit (and partest) for their testing needs.

Remove any remaining excuses for not testing Scala code

The availability of such a library would also remove any remaining excuses a user might have for not testing their Scala code. These include:

  • reduced concern about the continued availability of the testing library in new major versions of Scala
  • build tools (such as sbt) would be preconfigured to use this library for unit testing
  • the Scala project seeds, templates and guides (as well as the seeds, templates and guides of other projects) would more likely contain a unit testing example

Design Decisions

Here are a few design decisions that I think should be discussed and actioned:

  • should test definitions execute in the constructor or in a method?
  • should the library require extending a class or trait? use an annotation? manually register? Scala.js </3 annotations and wants extends
  • what is the shared glossary of terms? ("test", "example", "test suite", etc)
  • matchers or no matchers? how many matchers?
  • 1 style, or multiple styles?
  • tests as syntax or with values?
  • test results with exceptions or with values?
  • artifactId? scala-testkit (kind of taken now), testkit, scala-testlib, scala-unit-test?
  • package name? scala.testlib?
  • no property-based testing? (let's fix unit testing first)
  • support in build tools, such as current versions of sbt
  • target Scala: 2.14? 3.0? 2.13.1? 3.1?
    • create a backport library for past versions of Scala (2.12, maybe 2.11), like threetenbp?
  • stable in 2.14? I think so, with it published pre-2.14 as non-stable (0.x) for experimentation
  • let's not repeat history: i.e. scala.testing.UnitTest and scala.testing.SUnit, a short history:
  • (stretch) support in-file unit tests, using the Scala pre-processor? E.g. Rust's #[cfg(test)]
  • Future support? Scala.js needs it to support asynchronous tests, i.e., tests returning a Future that will decide whether or not they succeeded
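As one illustration of how several of these choices could combine (extending a trait, explicit registration, tests and results as plain values), here is a hypothetical sketch; the names `Suite`, `Test`, `Result`, `Passed` and `Failed` are invented for this example and are not part of any proposal:

```scala
// Hypothetical sketch only; Suite, Test, Result, Passed and Failed are
// invented names, not a proposed API. It combines "extends a trait",
// explicit registration, and "tests and results as values".
sealed trait Result
case object Passed extends Result
final case class Failed(message: String) extends Result

final case class Test(name: String, run: () => Result)

trait Suite {
  private val registered = scala.collection.mutable.ListBuffer.empty[Test]

  // Tests are plain values; registration is explicit, no annotations.
  def test(name: String)(body: => Unit): Unit =
    registered += Test(name, () =>
      try { body; Passed }
      catch { case e: AssertionError => Failed(String.valueOf(e.getMessage)) }
    )

  def tests: List[Test] = registered.toList
}

object ExampleSuite extends Suite {
  test("addition")(assert(1 + 1 == 2))
  test("reverse")(assert("ab".reverse == "ba"))
}

val results = ExampleSuite.tests.map(t => t.name -> t.run())
```

Because tests are values, a runner (or a user) can filter, reorder, or aggregate them with ordinary collection operations, which speaks to several of the extensibility questions above.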

Participants

I propose we involve the creators and maintainers of existing testing libraries in the community, so we can get buy-in and consensus on this proposal and its design decisions. So, a working group.

Lead: Eugene Yokota (Scala Team, sbt maintainer, and maintainer of the revived Expecty)

Here are the people I think it would be lovely to have participate:

I really like the idea of a standard testing framework for Scala.

Though I think such a tool must be really extensible and flexible.

My team has some specific needs which cannot be easily satisfied by any existing tool. Our flow is:

  1. First we need to collect all the tests in all the suites in a test scope
  2. Then we need to build testplans (essentially DAGs) considering test signatures
  3. Then we merge and rewrite testplans applying memoization rules
  4. Then we apply garbage collection, eliminating all the unnecessary dependencies from the graphs.
  5. Then we start executing the plans

Most test frameworks do not allow hooking into discovery and initialization logic. For example, it's hardcoded in ScalaTest and uTest: the methods are final and there are no extensibility points.
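To illustrate the kind of extensibility being asked for here: if a framework exposed its tests as plain data, a custom runner could build its own test plan. The sketch below, with the invented names `PlannedTest` and `plan`, orders a small dependency DAG with a naive topological sort:

```scala
// Sketch: "PlannedTest" and "plan" are invented names. If tests are
// exposed as plain data, a custom runner can build its own test plan;
// here, a naive topological ordering of a dependency DAG (no cycle
// detection, for brevity).
final case class PlannedTest(name: String, dependsOn: Set[String])

def plan(tests: List[PlannedTest]): List[String] = {
  val byName  = tests.map(t => t.name -> t).toMap
  val ordered = scala.collection.mutable.LinkedHashSet.empty[String]
  def visit(name: String): Unit =
    if (!ordered.contains(name)) {
      byName(name).dependsOn.foreach(visit) // prerequisites first
      ordered += name
    }
  tests.foreach(t => visit(t.name))
  ordered.toList
}

val order = plan(List(
  PlannedTest("integration", Set("db", "http")),
  PlannedTest("db", Set.empty),
  PlannedTest("http", Set("db"))
))
```

The memoization and garbage-collection passes described above would be further transformations over the same data, which is only possible if discovery isn't sealed behind final methods.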

And in fact we are forced to choose between implementing our own toolkit or rewriting a lot of an existing framework (most likely without a chance of the patch being accepted).

Please consider this problem. My team will be happy to help design and implement.

sjrd commented

If you want this to help Scala.js, an absolute requirement is support for asynchronous tests, i.e., tests returning a Future that will decide whether or not they succeeded.

Also it will be necessary that test classes extend a given class or trait. Annotations won't cut it.


How are you going to test the testing framework on JS and Native?

Since we're talking about minimal testing kits, I think it's fair to mention
test-state too

I'm just going to chime in here on the recent popularity of minimalist testing frameworks. I think these micro test frameworks work well with specific types of tests, different from those suited to the more fully fledged test frameworks (i.e. the BDD-style ones).

Micro test frameworks seem to work best when you are testing libraries and compilers: you want minimal bootstrap, and most of the time you are just testing assertions. On the other hand, when testing applications, the micro testing frameworks tend to be sub-optimal. You basically end up re-implementing the functionality of matchers, and all of this "bloat" which people complain about is stuff that is actually required (e.g. I need to check that in some specific circumstance a specific exception is thrown). Having diffs is another: when you are comparing one case class to another, you only care about the values that are different, and you don't want to figure this out manually.

I guess my personal stance is that I have zero issue with creating a minimalist, dependency-free test framework to deal with the chicken-and-egg dependency problems we have with bootstrapping Scala as well as critical libraries; however, I would be wary about making it the "de facto" testing framework. I think these larger test frameworks have their merit, it's just in different areas which aren't always visible to other people.

If you want this to help Scala.js, an absolute requirement is support for asynchronous tests, i.e., tests returning a Future that will decide whether or not they succeeded.

Not just Future, but any F[_, ...]. I think such a tool must not depend on a specific monad and should provide a way to integrate any monad with its specific runtime. It should be easy enough; for example, a small trait is enough to support it in ScalaTest.
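One hypothetical shape for such a monad-agnostic integration point follows; the `TestEffect` name and its signature are invented here and are not ScalaTest's actual extension trait:

```scala
// Hypothetical sketch: "TestEffect" is an invented name, not a real
// extension point. The framework asks only for a way to run "some
// effect" F to completion; each effect library supplies its own instance.
import scala.util.Try

trait TestEffect[F[_]] {
  def runToEither[A](fa: F[A]): Either[Throwable, A]
}

// An instance for Try; a Future instance would await the result, and an
// IO instance would use that library's own runtime.
val tryEffect: TestEffect[Try] = new TestEffect[Try] {
  def runToEither[A](fa: Try[A]): Either[Throwable, A] = fa.toEither
}

def runTest[F[_], A](eff: TestEffect[F])(body: => F[A]): Either[Throwable, A] =
  eff.runToEither(body)

val ok  = runTest(tryEffect)(Try(1 + 1))
val bad = runTest(tryEffect)(Try[Int](sys.error("boom")))
```

This way the core library never names Future, cats-effect, ZIO, or any other runtime; it only consumes the capability.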

Before we consider the minutiae of technical questions listed under Design Decisions, it's worth taking a step back and considering the broader picture.

Consider the 245 commits that have gone into minitest and 471 commits that have gone into uTest, the two smallest widely-used testing libraries in the ecosystem: is the core Scala team willing to put in similar amounts of effort (i.e. more than a summer!) to bring their library up to a similar level of quality? Or do we have reason to believe it will take less effort than it has taken in the past?

How much effort are we thinking about investing anyway? A week of full-time work? A month? A quarter? A year? Someone's ongoing job forever? Each of these constraints has a vastly different solution space!

Do we have any consensus on what testing library the core Scala folks like? Do we have numbers on what testing libraries are popular in the community, i.e. what everyone else likes? It honestly doesn't make sense to write your own testing library unless you are a testing-library connoisseur who has tried several libraries and decided none of them are satisfactory, and you really (really (really)) know what you want.

Have we considered simply touching-up and upstreaming uTest or minitest? Or, if we would like to keep them independent and able to evolve, vendoring one of them the same way OpenJDK vendors org.ow2.asm? These libraries already support literally all the technical requirements listed above.

@lihaoyi-databricks it's an off-topic question, but would you accept a patch into uTest addressing the rigidity of test initialization logic? I like uTest, but as I said in the comment above, it doesn't allow us to do what we need.

@pshirshov that is off topic and something that can be discussed elsewhere

@lihaoyi-databricks off-topic, but not completely. I would vote against uTest in its current state: it's not flexible enough, and that could be an issue if it becomes part of the Scala standard library and its currently existing APIs freeze for years.

The same applies to all the other test frameworks, unfortunately. They are too restrictive and too opinionated and it really kills productivity in some use cases.

lrytz commented

is the core Scala team willing to put in similar amounts of effort (i.e. more than a summer!) to bring their library up to a similar level of quality?

The proposal says "work with the community to introduce a basic unit testing library to the Scala project". Of course it will build on / profit from all the existing work.

Have we considered simply touching-up and upstreaming uTest or minitest?

That's indeed one of the possible outcomes!

Do we have any consensus on what testing library the core Scala folks like?

We really hope to find a common ground. There will always be special needs and therefore reasons to build/use other libraries. But in many cases, a solid standard will do.

The proposal says "work with the community to introduce a basic unit testing library to the Scala project". Of course it will build on / profit from all the existing work.

Haha I thought I was going to profit from all the existing work, and that's probably what alex thought he was doing too, and yet hundreds of commits later here we are still work-in-progress πŸ˜›

I'm not asking time/effort questions rhetorically just to be a downer, but to really try and bring attention to them: "time", "effort", "build vs buy" and other project-management-y things are the high order bit here. The answers to these questions quickly limit the scope of what is technically possible, and constraints like "our budget is 1/5 of a person on this for the next 3 months" or "our strategy is to provide a good testing experience for projects {X,Y,Z} before attempting to branch out" really helps focus a discussion that otherwise can easily get lost in the weeds of matchers and monads and DAGs.

This is sort of a public works project, not unlike building a small public transportation system in a city. Initially there might be some time, attention, discussion, resources, etc. required of the people involved, but we hope it will have a good return for Scala development, as well as for library authors etc., in the long run.

Once it's part of scala/scala repo, we are hoping that there will be many contributors who can chip in their time to keep it up to date, and improve it over time.

that otherwise can easily get lost in the weeds of matchers and monads and DAGs.

Don't say that "monads and DAGs" are not important.

As well, our team would be happy to invest time and money into working on a tool which will save us a lot in the future, so the answer to these budget questions really depends on the outcome of the discussion.

For now the lack of a test framework which may support our flow with "monads, DAGs" and GC is one of the biggest productivity killers, so we would be happy to invest up to several months to finally close the issue :)

I'm not asking time/effort questions

It's not clear yet how much time/effort will be able to go into this, partially because we don't know who is happy and willing to work on this.

So let's take uTest. You maintain uTest, and let's say the consensus is that uTest is what we want. How would you feel about upstreaming uTest into scala/scala and maintaining it there (where you'd probably receive other maintenance help)? (The idea being that uTest users would migrate, and the external uTest wouldn't evolve and therefore wouldn't also need maintenance.)

I don't have a horse in this race (aside from a personal longstanding preference for just "the way that specs works"), but it's important to understand that the "minimal" in "minimal test framework" is an immensely complicated thing, and it means very different things to different people. All of the questions to resolve in the OP are a good example of this, but there are so many more that weren't raised (such as exceptional-vs-value assertions, meta-initialization structure, extensibility and transparent abstraction mechanisms, etc).

Hell, just bikeshedding "must" vs "should" vs "assert" vs "===" is going to be a massive time sink.

Let's also remember that a relatively minimal test framework is not going to meet the needs of a massive chunk of the ecosystem. Anything in the Cats ecosystem is going to need something like Discipline, which is pretty dependent on ScalaCheck (or something equivalent), and its framework integration takes advantage of some of the "not very minimal" features in both ScalaTest and Specs2. A lot of commercial projects I've seen depend on complex initialization staging logic which is only available in richer frameworks. I think it's fair to say that OP's goal of standardizing testing in Scala is very much out of reach for any such minimal framework, even leaving aside the subjective preferences issues.

There's also the problem of solving the bootstrap, as @sjrd mentioned, and that problem doesn't just go away by upstreaming the library. I feel like this issue is solvable, but only with tooling similar to what bootstraps scalac itself. This in turn implies that upstreaming a testing library might be necessary, but it can't be part of scala-library itself, since it needs to be snapshotted and bootstrapped independently.

exceptional-vs-value assertions

(it's there as "test results with exceptions or with values?" :P)

I purposely descoped property-based testing (though it's there in the notes), because I felt like it was another large, opinionated debate that I didn't want to tackle immediately. But, of course, it's very important.

but it can't be part of scala-library itself

The proposal is absolutely NOT to add this to scala-library, but to add another library/jar. Like scala-reflect is a library/jar that's co-versioned and co-released with scala-library and compiler.

I think overall there are two main approaches we can take:

  1. Build something from scratch
  2. Inherit something, forking it and taking over maintenance/development of the fork

The limitation of (1.) is that whatever you build would by necessity be very rudimentary: usable, but perhaps without a lot of the fancier features people may be used to in uTest or Scalatest and others. This may be enough, depending on what the goals are. Basically JUnit, but scalafied, and free of needing to chase the JUnit upstream.

I don't think building something both sophisticated and novel is feasible: too much uncertainty, ambiguity, and it's doubtful we'd reach an acceptable level of generality/quality to be worthy of inclusion in the standard library. The solution space is too big and people's opinions and styles are too heterogeneous.

(2.) gives you a solid base of a known quantity to build upon: something we already know works, and we know what people like or do not like about it. We'd be taking in a library with well-known limitations and well-known flaws, and accepting them.

It also gives you a chance to fix/cleanup things, and maybe sand off the more idiosyncratic parts of the library to try and appeal to a broader consensus, while still maintaining the core essence that made the library popular in the first place.

There's then the question of whether development will continue externally or not. I don't have an answer to that.


From what I can see, the big choice is between novel-and-simple, or unoriginal-but-sophisticated. In either case, I don't think there's much room for exploratory work: IMO anything going into the standard library should be a known quantity, either due to simplicity or due to age. If we want experimentation, that can happen outside the std lib.

I don't have a strong opinion on which one I prefer, and both seem like they could conceivably satisfy the stated goals, but maybe others have opinions on this framing.

Everything that goes into the stdlib defines workflows for thousands of people for years. I think it's a very good idea to take into consideration the state of the modern Scala ecosystem and current developer needs. Otherwise we may as well just incorporate JUnit: it's definitely simple and mature :)

I think it's a very good idea to take into consideration the state of modern scala ecosystem and current developer needs.

That's absolutely the intent.

I think this proposal might have gone off the rails from the very first comment. I think that anything that aims at being a general purpose, extensible, blah blah, testing framework for the entire ecosystem is doomed to failure and will waste a lot of people's time and energy.

A cross-platform, zero-dependency, minimal, testing framework for bootstrapping the toolchain and a small handful of foundational third-party libraries has a chance of success however. I don't think such a library should aspire to being attractive to anyone outside that very small set of people.

I'm strongly aligned with @milessabin on this issue.

Extensibility and flexibility are fine, but it should be extremely minimal and focused on what the core tools need, while providing as much value and flexibility as possible for extension and use by others, without actually providing all the rich and varied features found in even the more minimal unit testing frameworks. If something this limited isn't actually useful, then we should observe that Scala already has a good variety of unit testing frameworks, and the status quo should be seriously considered as perhaps the best alternative.

@milessabin

A cross-platform, zero-dependency, minimal, testing framework for bootstrapping the toolchain and a small handful of foundational third-party libraries has a chance of success however. I don't think such a library should aspire to being attractive to anyone outside that very small set of people.

I agree as well on the design direction. We should focus on what we can remove instead of what more features we can add.

  • should the library require extending a class or trait? use an annotation? manually register?

The way tests are detected and run by the test framework has a high impact on the friendliness/usability of the library. I'm not only talking about the fact that Scala.js does not support annotations at runtime, but about how painful it can be for developers to just ask the test runner to do what they want. The problem often comes from the fact that the test framework has one way of running things (e.g., in the case of JUnit, any parameterless method annotated with @Test in a parameterless top-level class can be run by instantiating the class and calling the method), with a couple of ways to customize the default (e.g., an @Before-annotated parameterless method will be invoked before each test), but that may not accommodate every use case. Most of the time, this requires users to bend their tests to make them fit into the expected structure of the framework. This leads to awkward, repetitive and error-prone code. Simply put, tests are not first-class citizens.

The solution to this problem consists in removing the inversion of control. Instead of having the environment call the tests, let the user run the tests and report their execution to the environment. (We can still provide boilerplate-free helpers to handle the happy path, so that no syntactic overhead is added in such a case.) If you do that, suddenly a lot of features of the test framework are no longer needed (e.g., no need for an @Before thing if you can just choose to execute something before you execute your test).

The lihaoyi/utest library looks really good to me in terms of ergonomics and features, except that it does not address this problem. The semantics of this test suite deeply confuse me, because that is not how variables work in normal programs. Similarly, the fact that it is not possible to define a collection of tests by iterating on a collection of values strikes me. I'm scared by this implicit TestPath parameter that comes from nowhere. Sorry, my goal is not to rant on uTest, which, in my opinion, is the best testing solution in the ecosystem, but rather to point out an important aspect of the way we run tests (or, should I say, "the way tests are run", because of the inversion of control).
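The "no inversion of control" idea can be sketched concretely: with tests as ordinary values run by user code, setup needs no @Before hook, and a collection of inputs maps directly to a collection of tests. All names below (`check`, `Outcome`) are invented for illustration:

```scala
// Sketch only: "check" and "Outcome" are invented names, not a real API.
final case class Outcome(name: String, passed: Boolean)

// The user calls the test; nothing calls the user.
def check(name: String)(cond: => Boolean): Outcome =
  Outcome(name, try cond catch { case _: Throwable => false })

// No @Before hook needed: setup is just ordinary code that runs first.
val fixture = List(1, 2, 3)

// Defining a collection of tests by iterating on a collection of values.
val outcomes = fixture.map(n => check(s"square of $n is non-negative")(n * n >= 0))
```

Since tests are plain values, loops, conditionals and abstraction over test construction all work the way they do in any other Scala program.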

The solution to this problem consists in removing the inversion of control. Instead of having the environment call the tests, let the user run the tests and report their execution to the environment.

Yes, precisely.

But the use of JUnit for unit testing isn't free, as both Scala.js and Scala Native had to provide special support for it in order for it to actually work, a support that will need maintenance as JUnit continues to evolve.

It would be nice if the compiler projects could get off of JUnit and use a simple Scala test framework instead, but I wonder how much work that would take compared to how much work it will take to keep using JUnit from Scala.js and Scala Native. To get to something actually useful might be more work than you are imagining. I certainly didn't realize how much work it would take until it was too late to go back!

If the focus of the project is to create a simple Scala test framework primarily for compiler projects and other upstream projects like scala-xml, that could be worth exploring. I wonder if one could make a Scala-friendly extension of JUnit for this purpose.

Also I'm curious about this background:

When Scala started gaining popularity there was a heavy emphasis on its extensibility via DSL. As such, it attracted a cultural import from other language communities, including the notion of behavior-driven development (BDD). The two most used Scala test frameworks today, ScalaTest and specs2, were created under this trend creating English-like DSLs.

As another cultural import, the notion of property-based testing has been gaining traction over the years, with ScalaCheck as the front runner. Most projects that use ScalaCheck use it through ScalaTest or specs2, possibly via Discipline, with a small number of projects using ScalaCheck alone or ScalaCheck with Claimant. There are also scalaprops and Hedgehog which are alternatives to ScalaCheck, but not as popular.

In recent years, Scala has started to shed some of these older influences and begun to form its own culture, and there's been a swing-back movement towards minimalistic unit test frameworks that do not employ "should"-DSLs. This demand is partly driven by our diverse need to cross-build across multiple Scala versions as well as multiple platforms (JVM, Scala.js, and Scala Native). Some projects (particularly the Scala, Scala.js and Scala Native projects themselves) use JUnit for this purpose. uTest and Minitest, as well as the revival of Expecty for power asserts, are examples from this post-BDD, neo-unit-test era; however, these aren't as popular.

I'm curious where the conclusion that "there's been a swing-back movement towards minimalistic test frameworks that do not employ should-DSLs" comes from. My experience is that throughout our history some Scala users have preferred should/must DSL-like test code whereas others have preferred traditional test structures and assertions.

I'm curious where the conclusion that "there's been a swing-back movement towards minimalistic test frameworks that do not employ should-DSLs" comes from. My experience is that throughout our history some Scala users have preferred should/must DSL-like test code whereas others have preferred traditional test structures and assertions.

Subjectively, I'm very much in the "should/must" camp. To the point where I find it immensely awkward to write tests without them. I know it's psychological and I know it's mostly pointless, but at the same time I think that writing good tests is at least partially about comfort, so I've made no effort to coldly strip away my preferences.

I'm also not a particularly representative sample of the broader Scala community.

I'm curious where the conclusion that "there's been a swing-back movement towards minimalistic test frameworks that do not employ should-DSLs" comes from. My experience is that throughout our history some Scala users have preferred should/must DSL-like test code whereas others have preferred traditional test structures and assertions.

I wrote the background section. I tried to phrase things as neutrally as possible, but I'll admit it's totally subjective. From my perspective there was a period in Scala, roughly 2.7 to 2.10, when the ecosystem (language designers, library authors, etc.) was exploring the various powers the language provides. And since around Scala 2.10, when people started moving away from symbolic methods and the SIP-18 discussions were happening, I'd say the swing back towards simplicity started to happen overall.

In the testing arena, uTest and Minitest were both announced in late 2017, so it took quite a bit of time for the effect to take hold. And I feel like I've been seeing an upward trend of support for minimalism among core users.

But to clarify, I am not claiming that it's diminishing the importance of BDD or property-based testing, but more like adding a third wave of testing libraries. I didn't mention there, but there's also a bunch of other things happening like linting, benchmark / perf tools, coverage, and integration testing related to this area.

In the testing arena, uTest and Minitest were both announced in late 2017,

Not sure where that number is coming from, but to clarify, uTest has been in heavy usage for more than 5 years now since the beginning of 2014. It's not exactly some new kid on the block

commit c81ec0b0f243485b6ad4a6a8fbd48aa74c9c7fc2
Author: Li Haoyi <haoyi@dropbox.com>
Date:   Wed Jan 29 23:01:21 2014 +0800

    squash

@lihaoyi-databricks

Not sure where that number is coming from, but to clarify, uTest has been in heavy usage for more than 5 years now since the beginning of 2014. It's not exactly some new kid on the block

I stand corrected. I went by http://www.lihaoyi.com/post/uTesttheEssentialTestFrameworkforScala.html, but I should've looked at the repo.

Subjectively, I'm very much in the "should/must" camp. To the point where I find it immensely awkward to write tests without them. I know it's psychological and I know it's mostly pointless, but at the same time I think that writing good tests is at least partially about comfort, so I've made no effort to coldly strip away my preferences.

I am in the same boat, although it's less about the "should/must" and more about having a comprehensive set of matchers plus diffing of expected vs. actual results when a test fails.

Anyways, my final tl;dr here is that I think the best course is for the Scala team to simply build a testing library that suits their needs, and if it's useful for other people then they can use it. I don't think you are going to find "one style of testing" that everyone agrees with.

I haven't read the whole tl;dr yet, but I will say that I wrote a version of expecty when I was learning macros, and I can't believe that something like that isn't the Scala way today.

I had my own preferences for what the assert macro transforms, but the xunit recommendation has always been that you write your own framework, in a way that serves your codebase or community.

What problem do we want a test framework to solve, anyway? Personally, I usually just want the thing to run stuff--my own stuff, thank you--and tell me answers. So I wrote my own test framework in 96 lines of code: https://github.com/Ichoran/kse/blob/master/src/test/scala/Test_Kse.scala

(I used jUnit to aggregate tests only because I couldn't figure out an easier way to hook into sbt. jUnit is doing approximately nothing.)

My thing is reflection-based, but a slight change could make it based off a Vector of functions (and probably shorten it by ten lines). So for a very, very minimal framework, not much work needs to be done.

What else is absolutely essential? Is there a clear dividing line between this and all the features you get in the full-fledged test frameworks? Or is it just a series of additional features of high value, then highish-medium value, then medium, then medium-low, etc., with no real way to know when to stop?

If there are a very clear set of requirements it makes a lot more sense to proceed than if success is determined by some vague sense that "okay, we've done enough now, I guess".

My personal answers to the design questions would be:

  1. Tests execute in methods. Constructors are weird.
  2. No annotations, no reflection. No opinion on registration vs inheritance.
  3. Avoid jargon. "test" and "result", maybe.
  4. No matchers.
  5. One style.
  6. Tests as Function0[Result].
  7. Results are Try[Result], obtained via Try{ f() }. If you muck up the test, you get failure.
  8. artifactId--no opinion
  9. namespace--no opinion
  10. No property testing.
  11. Absolute minimum hooks needed to support build tools in the least fancy possible way (run a thing + print results + exit code reflects success/failure).
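
A rough sketch of how small this could be under the answers above (points 6, 7, and 11: tests as `Function0`, results via `Try`, the least fancy possible build-tool hook). All names here, like `MiniRunner`, are made up for illustration:

```scala
import scala.util.{Failure, Success, Try}

// Hypothetical sketch: a test is a named Function0, a result is a Try,
// and the runner just runs things and reports answers.
final case class MiniTest(name: String, body: () => Unit)

object MiniRunner {
  // Run every test; any thrown exception (including a failed assert) is a failure.
  def run(tests: Vector[MiniTest]): Vector[(String, Try[Unit])] =
    tests.map(t => t.name -> Try(t.body()))

  // The "least fancy possible" build-tool hook: true iff every test passed.
  def allPassed(results: Vector[(String, Try[Unit])]): Boolean =
    results.forall(_._2.isSuccess)
}

object RunnerDemo {
  val tests = Vector(
    MiniTest("addition", () => assert(1 + 1 == 2)),
    MiniTest("broken",   () => assert(1 + 1 == 3))
  )

  def main(args: Array[String]): Unit = {
    val results = MiniRunner.run(tests)
    results.foreach {
      case (name, Success(_)) => println(s"ok   $name")
      case (name, Failure(e)) => println(s"FAIL $name: ${e.getClass.getSimpleName}")
    }
    // Exit code reflects success/failure, as in point 11.
    if (!MiniRunner.allPassed(results)) sys.exit(1)
  }
}
```

Selecting a subset of tests is then just an ordinary `tests.filter(...)` on the vector.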

If you need a way to aggregate results, well, that's a common problem. That's not the test framework's job; get a nice validation library instead, or roll your own validation with what the language provides. If it's not good enough for tests, it's not good enough for other use either.

If you need a way to express conditions, well, that's a common problem too. It's also not the test framework's job; get a nice logical expression library instead, or roll your own logic with what the language provides.

If you need a way to run things concurrently, that too is a common problem. Use common solutions. The one caveat is that you might need a hook in the test framework to request (the right type of) concurrency.

If you need a way to select tests, consider whether it's enough to actually change the code. If your tests are a vector of functions, you can filter on them, for example.

The main pain-point that I see in doing this at the library level is that the compiler doesn't let you tell it that something should fail to compile, but sometimes that's a really important thing to test. If we get a compiler hook of some sort (macro, whatever) then I think everything else can be very minimal and leverage the strength of the Scala language and standard library and not need a ton of other stuff.

What problem do we want a test framework to solve, anyway?

I need to be able to build complex test plans after discovering test suites. Then report these plans back to my test runner. Then run. Test plans include resource allocation and deallocation and are essential for correct sharing of heavyweight dependencies.

Tests execute in methods. Constructors are weird.

I need a lot more than just methods. I need to extract test methods' signatures and then provide correct instances taken from a sequence of inherited service locators.

You may find a scalatest example here (and more here). It works, but unfortunately it's fatally flawed because I don't have single entry/exit points and can't control test initialization/dispatching/scheduling. So I have to use dirty hacks to memoize my shared instances, but these hacks are very unreliable.

@pshirshov - Right, but the target audience for this test suite is not, primarily, you; it's for core language/library developers. What problems need to be solved for that?

Test requirements can be almost arbitrarily complex because projects can be almost arbitrarily complex. It's unlikely that the maximally portable test framework used to test core tools would be the same one as used in the most elaborate scenarios.

Okay, in case you are discussing just something no one apart from core developers will see, I'll retire from this topic. Though in case usual users (those dirty guys who write business logic) are going to be exposed to this, I'll continue messing around, saying that such a framework should support monadic code and allow us to write tests polymorphic in the monad they run in. And it must support custom discovery, initialization and scheduling.

I know @sjrd says he doesn't want annotations, and JUnit in Scala.js requires a compiler plugin for this, as I understand it.

There are a few upsides to JUnit.

  • JUnit has @Ignore which could be handled perhaps by extends Test or extends NoTest
  • The other thing about a JUnit clone is that if you port a library from Java like the one I did sconfig, there is a good chance it uses JUnit which minimizes the changes needed for testing.
  • Very little learning is needed for people moving from Java to Scala.

Scala Native uses its own home grown testing library but I'm not sure of the origin. I was considering porting JUnit from Scala.js to Scala Native but will probably hold off given this proposal.

But the use of JUnit for unit testing isn't free, as both Scala.js and Scala Native had to provide special support for it in order for it to actually work, support that will need maintenance as JUnit continues to evolve.

What all is needed to support JUnit on Scala.js and Scala Native? @sjrd mentioned Scala.js needs async (Future-returning) tests and can't deal with annotations. Are these projects currently using JUnit? If so, is it JUnit 4 or 5?

The reason I ask is that it is a huge amount of work to create a test framework, so if the bootstrap problem could be addressed by building on top of JUnit, that would reduce the scope considerably.

I haven't read the whole tl;dr yet, but I will say that I wrote a version of expecty when I was learning macros, and I can't believe that something like that isn't the Scala way today.

I don't have actual data, which would be interesting, but ScalaTest has long offered 1) plain old assertions, 2) Expecty/Spock-style assertions, and 3) matchers, and my observation is most people use either plain-old assertions or matchers. Few people use the Expecty-style assertions, which we call DiagrammedAssertions, though I suspect that might partly be due to them being buried in all the other features. They are used, certainly, but they seem firmly stuck in third place after plain-old assertions and matchers.

For a bootstrap test framework, I'd go with plain-old assertions. They are the simplest to write and maintain. I wouldn't bother with macros either, probably, just to keep it simpler.

@pshirshov - I just think that "the easiest existing solutions are still too much to support on every platform" and "this doesn't do everything I need even on the best platform" are two different discussions.

I'm not saying that nobody else should see the core test framework; ideally lots of people would. But they'd presumably have similar needs to the core lib developers. Your use cases sound reasonable given your situation and it would be great to have something supporting them. But they don't sound best-served by a "basic zero-dependency" solution. Custom discovery, initialization, and scheduling are not basic features, are they?

You can discover, initialize, and schedule your own stuff however you want with a basic solution that is sufficiently flexible; and you can to some extent monadify things that aren't monadic by adding generic implementations. But if the "basic" solution has all this stuff built in from the start, is it really basic any longer? Or is it a new best-in-class advanced testing solution?

If you're expressing a hope that the basic solution would admit this more elaborate and sophisticated layer on top, I share it: ideally it would, and we should explore whether there exists a design that would make that possible. However, if it's a central requirement for you, maybe you could say a few more words about why you think this effort is the right one to provide the features needed for your project as opposed to modifications of something else (or a separate new testing framework that works the way you'd like).

non commented

There are two competing impulses in this proposal. If we are able to decouple them I think we're more likely to have a productive conversation.

This library would, therefore, be able to be used in projects that currently must fallback on using JUnit, such as the Scala project itself and the Scala.js project.

I think the idea of including an incredibly tiny (nanotest? picotest?) testing library with scala/scala to replace JUnit is a good idea. I would support taking minitest or utest (depending on which seems closer to the requirements of scala.js, scala native, etc.) paring them down to the absolute bare minimum, and putting them somewhere like scala.picotest or similar.

For this proposal, I would specifically not want this library to try to grow into a "standard" testing library or to compete with its upstream parent. In fact, if its extreme simplicity makes it slightly clunky to use that would be fine with me (maybe even preferable).

Additionally, we hope that by working with the community we may find a common path forward to overcome some of the fragmentation in testing styles and libraries that is currently present in the Scala community.

This sounds like a much bigger and different proposal. To the degree that the community might want to standardize on a testing library/approach/etc. I would caution against putting it into the standard library.

If we are going to try to chart a future path for "standardizing" testing approaches then a lot of things that were listed as out-of-scope of this ticket could not easily be left out.

@Ichoran wrote:

  1. Tests execute in methods. Constructors are weird.

This is so we can write

  def testSomething: Unit = assert(...)

as opposed to

  test("something") { assert(...) }

?

The former has the nice property that you can invoke a test example via the REPL etc.
The question, though, is how would you enumerate them in the test runner? For the JVM, we can do this.getClass.getMethods.filter(_.getName.startsWith("test")). Is that possible with JS or ScalaNative?
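
On the JVM, that reflective enumeration works along these lines (a hypothetical sketch; it relies on java.lang.reflect, which is not available on Scala.js or Scala Native, where frameworks typically generate registration code at compile time instead):

```scala
// Hypothetical JVM-only discovery: enumerate public zero-arg methods whose
// names start with "test" and invoke them reflectively.
class ArithmeticTests {
  def testAddition(): Unit = assert(1 + 1 == 2)
  def testNegation(): Unit = assert(-(-1) == 1)
  def helper(): Unit = ()   // ignored: name does not start with "test"
}

object ReflectiveRunner {
  def testNamesOf(suite: AnyRef): List[String] =
    suite.getClass.getMethods.toList
      .filter(m => m.getName.startsWith("test") && m.getParameterCount == 0)
      .map(_.getName)
      .sorted

  def main(args: Array[String]): Unit = {
    val suite = new ArithmeticTests
    testNamesOf(suite).foreach { name =>
      // A failing assert surfaces as an InvocationTargetException here.
      suite.getClass.getMethod(name).invoke(suite)
      println(s"ok $name")
    }
  }
}
```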

There are two competing impulses in this proposal. If we are able to decouple them I think we're more likely to have a productive conversation.

Indeed, there are two distinct needs that this was hoping to fix both, but if that's not possible then maybe it should just fix one, or split into two.

I would caution against putting it into the standard library.

Nothing is going in the standard library, not even your scala.picotest idea. This proposal is for adding another library to the Scala project, so picotest would be scala-picotest v2.14.0.

Nothing is going in the standard library, not even your scala.picotest idea. This proposal is for adding another library to the Scala project

I would like it to be built as part of the scala/dotty distribution though, so that when I do a publishLocal of the compiler that gives me everything I need for the next layer up.

The question is though, how would you enumerate them in the test runner? For JVM, we can do this.getClass.getMethods.filter(_.getName.startsWith("test")). Is that possible with JS or ScalaNative?

RefSpec is the ScalaTest style where tests are methods, and it is only available on the JVM, I believe because the reflection we needed was not available in JS or Native. I can't remember the details anymore, but RefSpec is the only ScalaTest style that's not supported on JS and Native, because we couldn't support it. So you probably wouldn't want tests as methods for this.

By the way, at least in ScalaTest, tests register during construction via a side effect (in styles other than RefSpec), but don't execute until later when run is invoked on the instance. One way to avoid the registration side effect is to have a val tests = ... in there, but that would still initialize during construction. The downside of val tests is a bit of boilerplate and indentation. Another way is to pass what you'd initialize val tests with to a constructor. That needs to be done in parens not curly braces in Scala, and if you want to do something like beforeEach/afterEach, that would need to be in curly braces after the parens. The registration side effect lets you put everything between curly braces.
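
A stripped-down illustration of that registration-by-side-effect pattern (hypothetical names, not ScalaTest's actual implementation): the test call records the body while the class body runs during construction, and nothing executes until run() is invoked later.

```scala
import scala.collection.mutable.ListBuffer

// Hypothetical sketch of registration via a constructor side effect.
abstract class MiniFunSuite {
  private val registered = ListBuffer.empty[(String, () => Unit)]

  // Called from the subclass body: record the test, don't run it.
  protected def test(name: String)(body: => Unit): Unit =
    registered += ((name, () => body))

  // Run all registered tests, returning the names of those that passed.
  def run(): List[String] =
    registered.toList.map { case (name, body) => body(); name }
}

class MySuite extends MiniFunSuite {
  test("something") { assert(1 + 1 == 2) }  // registered now, executed on run()
}
```

Running `(new MySuite).run()` then yields `List("something")`: registration happened during construction, execution only on `run()`.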

Instead of having the environment call the tests, let the user run the tests and report their execution to the environment.

In case you're not aware, this is how ScalaTest works. ScalaTest runs a Suite by invoking run on it, and the Suite takes care of deciding how to run. But Suite isn't even required because the Reporter, which is how you "report their execution to the environment," doesn't depend on Suite. This is how ScalaTest supports different built-in styles, but its lifecycle methods (run and several others) allow users to make further customizations when they need to.

maybe it should just fix one, or split into two.

πŸ’―

There are two competing impulses in this proposal. If we are able to decouple them I think we're more likely to have a productive conversation.

Indeed, there are two distinct needs that this was hoping to fix both, but if that's not possible then maybe it should just fix one, or split into two.

For this Summer of Usability campaign, let's focus on the first part and leave out any suggestion about "common path" or "standard unit testing library".

@Ichoran :

You can discover, initialize, and schedule your own stuff however you want with a basic solution that is sufficiently flexible; and you can to some extent monadify things that aren't monadic by adding generic implementations. But if the "basic" solution has all this stuff built in from the start, is it really basic any longer? Or is it a new best-in-class advanced testing solution?

Actually, a basic solution can provide a couple of extension points and become extensible. In the case of uTest, for example, just removing final modifiers from several methods may make half of my requirements implementable.

If you're expressing a hope that the basic solution would admit this more elaborate and sophisticated layer on top, I share it: ideally it would, and we should explore whether there exists a design that would make that possible.

Yeah, I hope that such a basic solution may be designed in a way that makes it possible to alter discovery/init/scheduling and to easily write polymorphic tests.

However, if it's a central requirement for you, maybe you could say a few more words about why you think this effort is the right one to provide the features needed for your project as opposed to modifications of something else (or a separate new testing framework that works the way you'd like).

It's a lot easier to account for previous mistakes when designing something from scratch. I'm considering modifying uTest or specs2 (ScalaTest is too huge, and I'm choosing between frameworks supported by IDEA; we may write our own IDEA plugin, but again, we are too small to maintain too many tools). Though I don't really want to support a fork, and I feel it may be hard to push a patch into upstream. At the same time, whatever comes into the standard library or is published under the Scala umbrella will inevitably be supported by JetBrains.

Instead of having the environment call the tests, let the user run the tests and report their execution to the environment.

In case you're not aware, this is how ScalaTest works. ScalaTest runs a Suite by invoking run on it, and the Suite takes care of deciding how to run.

There is that 400-line method with the funny name private[scalatest] def doRunRunRunDaDoRunRun. It does a lot before the suite gets invoked, and there are no extension points at all.

But Suite isn't even required because the Reporter, which is how you "report their execution to the environment," doesn't depend on Suite.

In my case, the rigid discovery and scheduling code hardcoded in that method is a fatal flaw. I need global knowledge, but I can't do anything. doRunRunRunDaDoRunRun.

Hi all,

A few thoughts on this thread. It is still hard for me to find the best way to structure them so bear with me!

I see (like others) at least 2 issues we need to address:

  1. the "dependencies" problem where each new Scala release needs to trigger a progressive wave of libraries releases and this slows down the availability of the whole ecosystem
  2. the need for a "standard" testing library, referenced in blog posts, books and tools, which would help newcomers to get started and easily write their first tests without having to think too much about it

About issue n.1

I think it is ok to have many testing libraries as long as they are zero-dependency. This is what I did with specs2-core (which was fatter in the past and depended on Scalaz). I generally wait until the dependencies for all the specs2 modules are available (like ScalaCheck or Shapeless) to publish all specs2 modules, but I can release specs2-core much sooner in the future.

Other "foundational" libraries like scala, Scala.js, cats or scalaz would benefit from being zero-dependencies in terms of testing. For those libraries having a common, tiny, test library would be a good idea. This also means that such a library should support property-based testing because it is indispensable to check laws. I doubt that more features like a sophisticated execution model, before/after, evolved matchers,... are necessary. We need:

  • auto-discovery of tests from other Scala code
  • grouping of tests per class (no other nesting / tree / ...)
  • filtering of tests per class name / regular expression on test names
  • simple assertions showing how actual != expected
  • concurrent execution
  • Future assertions for Scala.js
  • data generation (for laws)
  • support for compilation errors
  • annotations for custom reporting (cf Hedgehog annotations)

(Damn, that's not so tiny)
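
For instance, the "simple assertions showing how actual != expected" item can be tiny on its own. A hypothetical sketch (names made up for illustration):

```scala
// Hypothetical minimal assertion: on mismatch, fail with both values shown.
final case class AssertionFailed(message: String) extends Exception(message)

object MiniAsserts {
  def assertEquals[A](actual: A, expected: A): Unit =
    if (actual != expected)
      throw AssertionFailed(s"expected: $expected, actual: $actual")
}
```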

One interesting design choice is to decide between pure assertions vs exceptions. Exceptions are probably the simplest mental model for the user, but not necessarily for the implementer. I would be tempted to explore a monadic design like in hedgehog. On that note, I want to report on my current experience of using hedgehog in Haskell-land. It is quite good; there is minimal support for assertions, but I find that being able to use generators and properties is much more important to me. The lack of a DSL for assertions is compensated by different ways to annotate the tests, and reporting where the test code is inlined with the generated values and results with diffs.

We can also consider something entirely different! Instead of creating a common test library with support for assertions, data generation, etc., we can take some inspiration from tasty. While I find this library a bit hard to explore because of its terminology (ingredients, providers, runners), the main idea is interesting. It is only an infrastructure to structure and run tests. It does not contain any assertions, only an IsTest typeclass executing something. In a way, it is the testing support that sbt always wanted to implement. Based on this infrastructure it is possible to plug in many different ways to declare tests, express assertions, and provide detailed reporting. So maybe this is the thing that should go into a standard library, plus basic assertions and reporters to get started. Then cats and scalaz could add their own laws checking on top. tasty is also a bit more complex since there is some support for:

  • managing dependencies between tests
  • managing resources

which I don't think we need for foundational libraries but this is crucial for application testing. If those 2 aspects could be left out and delegated to plugins/extensions that would be great (actually I don't know that much about testing Scala itself, maybe dealing with resources is quite important there too).

Coming back to the "dependencies problem", it feels weird to build a whole testing library to be faster at releasing just the first layer of the ecosystem. Many other libraries depend on that first layer, and applications depend on them. So maybe we need to get better at releasing cascading dependencies in general; optimising the first layer at the expense of having to maintain a new testing library might not be worth it.

About issue n.2

Now let's say there is a testing library for the foundational libraries, including scala. Since it needs to be property-based, I would find it a fantastic opportunity to step up our game as a community (nowadays even when I just run a property once, I call it a "test", and I love being able to generate values for it). Can it become the "Standard Scala Testing Library"? I don't think so, because more features (before/after support, for example) are necessary, people like many "styles" of testing, some dislike properties, and experience proves that we can't please everyone. However, it could be enough to get started with Scala and solve those 3 points:

  1. a greater reduction in concern about the continuing availability of the testing library in new major versions of Scala
  2. build tools (such as sbt) would be preconfigured to use this library for unit testing
  3. the Scala project seeds, templates and guides (as well as the seeds, templates and guides of other projects) would more likely contain a unit testing example

Then new users can decide to switch to other libraries once they know Scala and the ecosystem well enough. We would need to make it clear that this is just a starting point and resist the temptation to add features (maybe call it test-minimal, test-foundations or something showing that it is not supposed to grow)

In conclusion

I am not opposed to exploring:

  • an "as small as possible" testing library
  • aiming only at testing the foundational libraries
  • having a tasty-like simple data model for what it means to be a test suite
  • keep discovery/execution/reporting extensible and provide default instances for the foundational testing library adding support for concurrent execution, data generation, ...
  • the extensions points should mostly be there to accommodate differences between testing cats and Scala.js for example
  • use this library in templates, books etc...
  • keep other people, not the Scala team, free to extend it if they can, and switch to something entirely different they fancy more if they can't

In my case rigid code of discovery and scheduling which is hardcoded in that method is a fatal flaw. I need global knowledge, but I can't do anything. doRunRunRunDaDoRunRun. Perfect design.

It has been interesting how difficult it is to communicate what the design of ScalaTest actually is. The GitHub codebase is not the interface of ScalaTest. We very carefully designed a public interface that provides extension points, mainly the lifecycle methods of trait Suite. They are:

  • run - override this method to define custom ways to run suites of tests.
  • runNestedSuites - override this method to define custom ways to run nested suites.
  • runTests - override this method to define custom ways to run a suite's tests.
  • runTest - override this method to define custom ways to run a single named test.
  • testNames - override this method to specify the Suite's test names in a custom way.
  • tags - override this method to specify the Suite's test tags in a custom way.
  • nestedSuites - override this method to specify the Suite's nested Suites in a custom way.
  • suiteId - a string ID for this Suite that is intended to be unique among all suites reported during a run.
  • suiteName - override this method to specify the Suite's name in a custom way.
  • testDataFor - provides a TestData instance for the passed test name, given the passed config map.
  • expectedTestCount - override this method to count this Suite's expected tests in a custom way.

You can see their signatures by looking at SuiteMixin. These lifecycle methods are public and designed to be overridden by users. ScalaTest uses them itself to enable different testing styles and to alter how tests are run when you mix in ParallelTestExecution or BeforeAndAfterEach, etc., and many other features---but you can override these methods too. This is the core of ScalaTest's design.

Moreover, as I mentioned earlier, ScalaTest does not force you to use Suite. In ScalaTest, a "test" can be anything with a name that can be started and will later complete; a suite is a collection of one or more tests. That's it. That's why ScalaTest can serve as a runner for tests written in other test frameworks. Suite is not required.

Now, it sounds like you (@pshirshov) want to do discovery in a custom manner. That's interesting. I have not heard that from a user before, and I'm curious what your use case is. You are correct that ScalaTest does not offer an extension point for discovery. I'd be very interested in hearing what your underlying need and goal is.

Lastly, the doRunRunRunDaDoRunRun method is intended to be private and will stay that way. I couldn't name it run because we already had a public method called run. This was to be the private run method. I started to call it doRun, but felt that was a silly name, so I kept going and made it even more silly (calling it doRunRunRunDaDoRunRun), since it was private and I wasn't getting paid for this work! But it ended up being a stack-trace joke that most people enjoy when they run across it:

https://twitter.com/search?q=dorunrunrundadorunrun%20scalatest&src=typd

Testing can be a bit tedious, so I figured why not lighten it up a bit here and there?

To summarize, ScalaTest is not primarily a BDD framework, as it was portrayed earlier in this issue. It is primarily about extensibility. The intent is to address the reality that different people want and need different things. You can see that reality in just the short discussion here: the handful of people in this discussion want different things. The ScalaTest approach is not to try to shoehorn everyone into one way, but to make it easy for different people and teams to mix together a few traits to get what they are after.

Actually, I didn't realise I was talking with the author.

So let me try to explain why I'm so frustrated.

Also, I should say that the public interface of ScalaTest is fine. And the assert macro is very convenient and useful (though I think it could be improved to provide better diagnostics for monadic code).

But ScalaTest is not flexible enough, and its private code is very complicated and hard to read/patch/maintain. The code we test has high variance and a lot of configurable aspects (for example, we may wish to run our business logic against Postgres repositories or in-memory repositories, real payment providers or mocks, etc.).

We have a big integration test suite. The following things are essential for us:

  1. Test execution time
  2. Setup and cleanup logic which should be triggered before/after all the test suites start/stop
  3. Because of high variance we are doing our best to avoid any kind of manual initialization of the code we test.

We have our own DI framework (distage) which plans the job first, then executes. It allows us to trace which components are required (reachable), we call it "garbage collection".

So, essentially our tests work the following way:

  1. A test is a method with an arbitrary signature. This signature is injected by our framework. This is the first reason why we need custom discovery: we need to extract signature data and register our test as a white box where parameter types are known, not just () => Unit / (Fixture) => Unit. Right now we have a wrapper for ScalaTest which converts our tests into something ScalaTest expects.
  2. We have a registry of introspectable components, and our framework knows their dependencies and how to instantiate them. So, first we build a full graph describing the relationships between all the components, then trace a subgraph using our test signatures as garbage collection roots. Once we've thrown out all the unreachable things from the graph, we may execute it.
  3. Our planner supports resources with brackets, so initialization and cleanup are planned as well. Once we have a plan, we execute it and can be sure that all the operations happened, and happened in the correct order.

This is very convenient, but it is not efficient when you have even hundreds, let alone thousands, of integration tests.

So, we decided to reuse some heavyweight/stateful components which may be easily shared, like thread pools, database drivers, etc. Now the process should look this way:

  1. First we collect all the individual tests
  2. We build a plan for each individual test (T_{i})
  3. We find all the components which marked as shared in each T_{i} and build the graph of shared dependencies S.
  4. S starts executing and common dependencies initialize
  5. When initialization of S finishes, we take each of T_{i} and replace the nodes already existing in S with corresponding values which are already available, getting intermediate IT_{i}
  6. We apply GC to each of IT_{i} getting final testplans FT_{i}
  7. We execute all the FT_{i}. They may be scheduled for parallel or sequential execution.
  8. Finalizing part of S executes.
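
The sharing in steps 3 to 8 can be illustrated with a toy model (all names here are made up for illustration; this is not distage): compute the shared set S once, initialize it, give each test plan only its non-shared remainder, and finalize S last.

```scala
// A plan names its test and the components it needs.
final case class Plan(testName: String, components: Set[String])

object SharedPlanner {
  // Steps 3-4: S = the components marked as shared that appear in any plan.
  def sharedOf(plans: Vector[Plan], sharedMarkers: Set[String]): Set[String] =
    plans.flatMap(_.components).toSet.intersect(sharedMarkers)

  // Steps 5-6: a test's remaining (non-shared) components.
  def localOf(plan: Plan, shared: Set[String]): Set[String] =
    plan.components.diff(shared)
}

object PlanDemo {
  def main(args: Array[String]): Unit = {
    val plans = Vector(
      Plan("t1", Set("db", "pool", "repoA")),
      Plan("t2", Set("db", "pool", "repoB"))
    )
    val shared = SharedPlanner.sharedOf(plans, sharedMarkers = Set("db", "pool"))
    shared.toList.sorted.foreach(c => println(s"init shared: $c"))      // step 4
    plans.foreach { p =>                                                // step 7
      println(s"run ${p.testName} with ${SharedPlanner.localOf(p, shared).mkString(",")}")
    }
    shared.toList.sorted.foreach(c => println(s"finalize shared: $c"))  // step 8
  }
}
```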

You may find an example of our application entrypoint which executes a very similar flow here and in the code around. Also you may find our test logic here and around.

Maybe I'm stupid, but I don't see any way to fit this flow into ScalaTest or any other framework. None of the frameworks allow us to perform custom discovery and store tests as introspectable white boxes. Most of the frameworks do not give us global knowledge about all the tests discovered; we may operate only at suite level. And we can't alter the flow the way we need.

Right now we are using ScalaTest but our implementation is flawed because:

  1. Our memoization logic is based on a global singleton
  2. Finalization happens in a shutdown hook, and shutdown hooks were deprecated in sbt
  3. When we use ScalaTest to run tests in parallel, all the planning happens concurrently, so there are unavoidable issues because of (1). So we have to execute tests sequentially.

The other reason why we need custom discovery is that we don't always want to scan the classpath. The delays may also be annoying, and in many cases it may be easier to register all the suites manually.

Testing can be a bit tedious, so I figured why not lighten it up a bit here and there?

It's funny the first time, though this joke gets a bit annoying when it's repeated dozens of times every day over years :) Also, it's very frustrating when you try to find a workaround to fit your needs into ScalaTest's protocols and always fail because the code there is too rigid.

ScalaTest is great, and thank you for it. I've been using it for years (ten years, I guess?). But there are a lot of things which could be significantly improved.

@pshirshov That's a very interesting use case. We should move this to scalatest-users, probably, but in short I would hope you could implement one subclass of Suite whose lifecycle methods do that. If there's a natural hierarchy to the FT_{i} tests you end up with, you could model them as nested suites. That's "suite" with a little s. You'd probably just have one class that extends Suite at the top, though that depends on the details. Alternatively, you could model them as one big flat Suite of tests with no nested suites.

Given your process:

  1. First we collect all the individual tests
  2. We build a plan for each individual test (T_{i})
  3. We find all the components which marked as shared in each T_{i} and build the graph of shared dependencies S.
  4. S starts executing and common dependencies initialize
  5. When initialization of S finishes, we take each of T_{i} and replace the nodes already existing in S with corresponding values which are already available, getting intermediate IT_{i}
  6. We apply GC to each of IT_{i} getting final testplans FT_{i}
  7. We execute all the FT_{i}. They may be scheduled for parallel or sequential execution.
  8. Finalizing part of S executes.
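To make the shape of steps 3 through 6 concrete, here is a much-simplified sketch. The `Test` case class, component names, and the representation of a "plan" as a plain `Set[String]` are all invented for illustration; the real workflow builds a dependency graph, not flat sets.

```scala
// Hypothetical sketch: tests declare the components they need; some
// components are designated shared.
final case class Test(name: String, components: Set[String])

// Step 3: the shared components actually used by any test form S.
def sharedGraph(tests: Seq[Test], shared: Set[String]): Set[String] =
  tests.flatMap(_.components).toSet.intersect(shared)

// Steps 5-6 (crudely): drop nodes already provided by S from each test's
// plan, leaving the final per-test plan FT_i.
def finalPlan(test: Test, s: Set[String]): Set[String] =
  test.components.diff(s)

val tests = Seq(
  Test("t1", Set("db", "logic")),
  Test("t2", Set("db", "http"))
)
val s = sharedGraph(tests, shared = Set("db"))
val plans = tests.map(t => t.name -> finalPlan(t, s)).toMap
assert(s == Set("db"))               // "db" initialized once, up front
assert(plans("t1") == Set("logic"))  // FT_1 no longer mentions "db"
assert(plans("t2") == Set("http"))
```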

ScalaTest does prefer if a Suite can give an accurate count of how many tests are expected once it is constructed, which is a bit of an impedance mismatch for your use case I think. For you to give an accurate test count, you'd need to do 1 to 6 before run is invoked. (Because the number of FT_{i} you end up with is what expectedTestCount should return.) That sounds like a lot to do in a constructor, so I might do that lazily the first time one of the lifecycle methods is invoked. In other words, if someone invokes expectedTestCount and you haven't yet initialized, at that time I'd do 1 through 6. Then you can return the total number of FT_{i} from expectedTestCount.

Then when run is invoked on your Suite, I'd do 7 and 8. By the way, I think 1 through 6 can be done sequentially this way, whereas the actual running of your tests could be done in parallel if you want. I suspect that would solve your memoization-in-a-singleton problem (or better yet, maybe you can see a way to get rid of the singleton). If, in addition to run, you can support testNames, runTest, runTests, tags, etc., you could get parallel test execution by mixing in ParallelTestExecution.
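The lazy-initialization pattern suggested here can be sketched without any ScalaTest dependency. `PlannedSuite`, `discover`, and the plan representation are hypothetical stand-ins; the point is only that the expensive planning phase (steps 1 to 6) is memoized and triggered by whichever lifecycle method is called first.

```scala
// Hypothetical sketch: defer planning until first use, run it at most once.
final class PlannedSuite(discover: () => Seq[String]) {
  // Steps 1-6, memoized: evaluated on first access, never re-run.
  lazy val plans: Seq[String] = discover()

  def expectedTestCount: Int = plans.size           // number of FT_i
  def run(): Seq[String] = plans.map(n => s"ran $n") // steps 7-8
}

var planningRuns = 0
val suite = new PlannedSuite(() => { planningRuns += 1; Seq("ft1", "ft2") })
assert(planningRuns == 0)            // constructor did no planning
assert(suite.expectedTestCount == 2) // first access triggers steps 1-6
suite.run()
assert(planningRuns == 1)            // planning happened exactly once
```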

Anyway, I'm not sure I fully understand your use case, but we should take this to scalatest-users if you want to discuss further. I would like to. It is an interesting use case that sounds like the kind of thing that ScalaTest's extensibility points were intended to support.

Here are some of the highlights I picked out from the zero-dependency, minimalist line of discussion.

Target audience should be core library authors

@mdedetrich

I guess my personal stance on this is that I have zero issue with creating a minimalist, dependency-free test framework to deal with the chicken-and-egg dependency problems we have with bootstrapping Scala as well as critical libraries; however, I would be wary about making it a "de facto" testing framework. I think that these larger test frameworks have their merit, it's just in different areas which aren't always visible to other people.

@milessabin

A cross-platform, zero-dependency, minimal testing framework for bootstrapping the toolchain and a small handful of foundational third-party libraries, however, has a chance of success. I don't think such a library should aspire to being attractive to anyone outside that very small set of people.

use uTest or Minitest as a starting point

@lihaoyi-databricks

Have we considered simply touching-up and upstreaming uTest or minitest? Or, if we would like to keep them independent and able to evolve, vendoring one of them the same way OpenJDK vendors org.ow2.asm? These libraries already support literally all the technical requirements listed above.

@non

I think the idea of including an incredibly tiny (nanotest? picotest?) testing library with scala/scala to replace JUnit is a good idea. I would support taking minitest or utest (depending on which seems closer to the requirements of scala.js, scala native, etc.) paring them down to the absolute bare minimum, and putting them somewhere like scala.picotest or similar.

power assertions?

@som-snytt

I will say that I wrote a version of expecty when I was learning macros, and I can't believe that something like that isn't the Scala way today.

Scala.js

@sjrd

If you want this to help Scala.js, an absolute requirement is support for asynchronous tests, i.e., tests returning a Future that will decide whether or not they succeeded.

Also it will be necessary that test classes extend a given class or trait. Annotations won't cut it.
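The Future-returning test shape described above can be sketched in a few lines. The `AsyncTest` case class and `runAll` runner are invented for illustration; they are not Scala.js or sbt API. A test body returning a failed `Future` counts as a failed test.

```scala
import scala.concurrent.{Await, Future}
import scala.concurrent.duration._
import scala.concurrent.ExecutionContext.Implicits.global

// Hypothetical sketch: a test is a name plus a body returning Future[Unit].
final case class AsyncTest(name: String, body: () => Future[Unit])

// Run every test; map each outcome to (name, passed).
def runAll(tests: Seq[AsyncTest]): Future[Seq[(String, Boolean)]] =
  Future.traverse(tests) { t =>
    t.body().map(_ => t.name -> true).recover { case _ => t.name -> false }
  }

val tests = Seq(
  AsyncTest("passes", () => Future(assert(1 + 1 == 2))),
  AsyncTest("fails",  () => Future(assert(1 + 1 == 3)))
)
// Blocking here is for JVM demonstration only; on Scala.js the framework
// would hand the aggregated Future back to the test runner instead.
val results = Await.result(runAll(tests), 5.seconds).toMap
assert(results("passes"))
assert(!results("fails"))
```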

name

For the name, what do you think about resurrecting the name SUnit?

next step

I think we can take these as general direction, and maybe move to another forum to focus on technical details. (Maybe a new repo + GitHub issues?)
I'd also be open to having meetings, if willing contributors to this project want to discuss things semi-face-to-face.

For the name, what do you think about resurrecting the name SUnit?

I think we should stay as far away from the word unit as possible in case novice users encounter this. Unit is very unlike SUnit; one is the canonical content-free return type, while the other would be a testing framework. This can only provoke confusion.

I think we should remove the friendliness tag since this no longer has anything to do with new users.

On power assertions, I would think we'd want to avoid macros to keep it simple, just have plain old assertions, at least until only Scala 3 is supported. Until then you'll need to write it twice, once for Scala 2 and again for Scala 3.

What build tool or tools do all the compiler projects use when they want to run their tests? Do they use sbt, maven, something else?

@bvenners

On power assertions, I would think we'd want to avoid macros to keep it simple, just have plain old assertions, at least until only Scala 3 is supported. Until then you'll need to write it twice, once for Scala 2 and again for Scala 3.

Yea. This is something that can be omitted in v1.

What build tool or tools do all the compiler projects use when they want to run their tests? Do they use sbt, maven, something else?

As far as I know, the compiler and Scala modules all use sbt. So integration with sbt/test-interface is a must. So it's one-dependency, not zero-.

@eed3si9n Ok, good. sbt makes it simpler, actually, especially if that's the only build tool that this would need to work with.

sbt = supported build tool.

I believe testz follows every single requirement mentioned thus far. From what I remember it's smaller than minitest or uTest, zero dependency, and fully inverts control. You can copy the core of it if you don't want the scalaz association; you should have enough to get up and running in less than 300 lines.
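To illustrate what "fully inverts control" means here: in this style a suite is a function of a harness, so the runner, not the suite, chooses how tests are represented and executed. The `Harness` trait and collector below are a simplified sketch in that spirit, not testz's actual API.

```scala
// Hypothetical sketch of an inversion-of-control test API: the suite never
// names a concrete result type; the harness supplies it.
trait Harness[T] {
  def test(name: String)(body: () => Boolean): T
  def section(name: String)(children: T*): T
}

// A suite is just a polymorphic function of the harness.
def mySuite[T](h: Harness[T]): T =
  h.section("math")(
    h.test("addition")(() => 1 + 1 == 2),
    h.test("subtraction")(() => 2 - 1 == 1)
  )

// One possible runner: collect (qualified name, passed) pairs eagerly.
val collector = new Harness[List[(String, Boolean)]] {
  def test(name: String)(body: () => Boolean) = List(name -> body())
  def section(name: String)(children: List[(String, Boolean)]*) =
    children.toList.flatten.map { case (n, p) => (s"$name/$n", p) }
}

val results = mySuite(collector)
assert(results.map(_._1) == List("math/addition", "math/subtraction"))
assert(results.forall(_._2))
```

Because the suite is abstract over `T`, a different harness could run the same tests asynchronously or in parallel without the suite changing at all.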

I'm still around with my issues. Had a nice chat with @bvenners, though it seems like I cannot fit my needs into ScalaTest properly. It seems like I would have to write my own tool anyway, and in case you're interested I may try to make a lightweight prototype which will work for my needs but will not depend on our workflows.

I think we should acknowledge that this initiative has stalled. The reasons for that include:

  • 2.14 was canceled, removing one of the primary motivations for making a new zero-dependency testing library / module
  • the long-term plan for addressing the rebuild-the-world problem is TASTy, and although that hasn't happened yet, it isn't so terribly far in the future
  • MUnit came along and has begun to de facto occupy a role in the community that scala-verify might have played

The Scala team is not currently motivated to really drive this forward, so we're closing this ticket in our own tracker.