
OLD API for Running and Reporting Tests


Overview

Please note that there is considerable overlap between this page and Extending JUnit's Standard Behaviour.

Should support:

  • Starting all tests in a given project / folder / class (a rough discovery/filtering sketch follows after this list)
    • I think allowing third parties to provide test discovery strategies could be an interesting solution to this - @kcooney
    • I totally agree. The IDE / tools do know better which tests to run and where to find them. - @bechte
    • I don't agree. It works pretty well with JUnit by now, but in the past I have seen multiple instances where the build tool did not execute the same tests as the IDE due to differences in the discovery process. Since I at least would like a discovery process from arbitrary sources (class files, text files, URLs ...), JUnit should provide the API for this. - @schauder
  • Starting a subset of tests given some filtering criteria
    • including subset of parameterized tests
    • Filtering should be very flexible, not only by grouping with @Category. I could think of things like an easy query DSL. But we should not dive into details here; rather, we should provide a mechanism that is flexible and can provide details of filtering later on. - @bechte
  • Watching the progress of running tests
    • For me, the framework should notify about all kinds of lifecycle events. Clearly, we need an SPI that can be implemented to fulfill this requirement. - @bechte
  • Collecting the results of a test run with all necessary details (e.g. stack trace, source position)
    • capture stdout/stderr per test - @ttddyy
      • given that a) tests can start threads and b) tests can run in parallel, doing this will be slow and inaccurate. Instead, we could provide a way for the code starting the test run to attach a listener and add additional data to the test run report, so third-party developers could add this - @kcooney
      • Interesting idea. Probably, not all data is required for all tests. Maybe we should not explicitly include this functionality in the framework, but provide an additional library that will take care of it? Therefore, the framework should provide an extension point such that this requirement can be fulfilled by a third-party library. - @bechte
  • Navigation from test result to source (which might or might not be Java code)
    • A clear must-have. It should be possible to be directed to both the test code and the production code - @bechte
    • Don't agree: What does navigation even mean for a library? How are we supposed to identify the production code? Heck, in many cases I as a developer can't tell what is getting tested. While JUnit should provide some reference to the source of a test (the Java class + method signature or line number, a text file + specification name), finding the source code (for a Java class) or identifying the likely relevant production code (by stripping Test from the class name) should be left to the IDE. - @schauder
    • Maybe I should make my point a little bit clearer: I think it is very important to have such a feature, because when a test fails, one clearly wants to get to the code quickly. I don't think we will implement the navigation part in the core, but we need to be aware of this feature and provide all kinds of information required for the IDE to actually perform the navigation. And this concern should be handled independently. So, maybe we can find a common way to identify a "location". That would be great. Otherwise we should be able to allow the IDE to plug into the reporting and provide the required meta information for a test, which it will use later on for the navigation. - @bechte (+1 @schauder)
  • Controlling test execution from a different process or JVM
  • Stopping test execution midway
  • The registration of more than one SPI (test execution engine)
    • I'm not quite sure about this one. What is actually meant by "test execution engine"? Are we talking about different "runners" or completely different tools (like Cucumber)? In the first place, I would like JUnit to focus on itself and on running Java tests. We should keep in mind that there might be some kind of test proxies, allowing other tools to hook into the process. - @bechte
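
The discovery and filtering items above could translate into a small SPI along the following lines. This is only a rough sketch; all names (TestDiscoveryStrategy, DiscoveryRequest, TestDescriptor, TestSourceLocation) are hypothetical placeholders and not part of any existing or planned JUnit API.

```java
import java.util.List;
import java.util.function.Predicate;

/** Identifies one discovered test (class, method, data row, ...). Placeholder type. */
interface TestDescriptor {
    String uniqueId();           // stable id, usable for re-runs and filtering
    String displayName();
    TestSourceLocation source(); // e.g. class + method, or file + line
}

/** Where the test was defined, so tools can navigate back to it. Placeholder type. */
interface TestSourceLocation {
    String description();
}

/** What to discover (projects, folders, classes) plus composable filter criteria. */
interface DiscoveryRequest {
    List<String> roots();
    List<Predicate<TestDescriptor>> filters();
}

/** SPI a build tool or IDE could implement to contribute its own discovery strategy. */
interface TestDiscoveryStrategy {
    List<TestDescriptor> discover(DiscoveryRequest request);
}
```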

Might Support

  • Running tests in parallel (where possible)
    • This sounds more important than some things on the "must support" list - @kcooney (+1 @ttddyy)
    • configurable in many details (e.g.: ParallelComputer & ParallelComputerBuilder in surefire, not the one in org.junit.experimental) - @ttddyy
    • Running a test suite in parallel is really a booster. But we need to be aware of contexts and application state. Parallelism forces us to isolate each test even more. Introducing some kind of "application context" could introduce side effects. We should handle those in the framework. - @bechte
  • Choose test order so that the latest changes will be tested first (like in JUnit max)
    • I don't like the idea of ordering tests. There is no such thing as a natural order. Furthermore, I would like the framework to find the tests dynamically at run-time, such that third-party providers might extend test cases (like param/data providers). This clearly interferes with the idea of an order. - @bechte
    • I don't like the idea of ordering tests either, but many people have asked for this feature (see recent bugs and pull requests for randomizing execution order). We also already have support for sorting. I think we should allow extensions to randomize the execution order of a test, suite or test run, but provide no default implementation for choosing the random order. See my pull request for Ordering for one approach. - @kcooney
  • Run test continuously (triggered by changes in production code or test code)
    • To me, this seems like something an IDE should handle for us!? - @bechte
  • Connect test run data with other data like coverage information
  • Parameters for a test being determined during a test run (aka Data Providers); see the resolver sketch after this list
  • New tests being dynamically added during the test run (allowing tests to "queue up" new tests etc) - @kcooney
    • I'm skeptical about this one. I haven't seen the need to do this so far and it makes it impossible to have setup run before and after an arbitrary group of tests defined by some tag, because at any time a test with that tag might get added. Therefore I think registration of tests should be limited to a phase before tests get executed. Although I think it should be possible to start executing tests in a different thread while still working on the registration. - @schauder
    • The main example of this I've seen is wanting to generate tests from rows in a data file or dynamically computed collection. Current requirements mean that either the data file has to be opened before any tests run (holding on to resources much longer than necessary), or the whole data file has to be treated as a single test. Jens, do you have an example of what "an arbitrary group of tests defined by some tag" means more specifically? I think there may be a way to get both here.
    • What I mean by 'tags' is a generalization of JUnit 4's categories. Tags could be things like Needs-database or uses-interface-y, and it should be possible to wrap some kind of modifiers similar to Rules around tests with specific tags, including, but not limited to, running only tests with/without a tag. I think a way to solve the conflict would be to make the registration work like a pipeline, so that already registered tests can get executed while new tests are still being added. But code in the execution phase should not be able to add more tests. At least I'm still afraid that this would introduce problems later on. - @schauder
    • In the future there should be no need to build the whole test tree upfront. IMO this is a downside of the current JUnit implementation. The framework should be flexible, allowing tests to execute several times and to report more than one result. I agree that it will be dangerous to have some kind of queue which can be manipulated. But if a test is executed, the test itself should have the possibility to execute several tests in the context it is in. - @bechte
  • Since Jenkins is the major CI server, it would be nice to have a report generator in the standard package - @ttddyy
  • Allow multiple parameters (data providers) and consumer tests in a single class (e.g. junit-dataprovider) - @ttddyy
    • I think we should design an extension mechanism that allows third-party libraries to design their own APIs for providing parameters (without defining a new runner). We can either provide our own implementation, or work with an existing open source project (like JUnit-params) to update their library to work without a custom runner (either via meta-annotations or an annotation like @MethodParameterizer(JUnitParams.class)) - @kcooney
    • I agree with @kcooney that we need to provide generic support for method argument resolution via a new extension model. Extensions should be able to register themselves for both instance-level dependency injection and method-level dependency injection (i.e., by resolving method arguments from their own custom source, whether it be from a data set for parameterized tests, from a Spring ApplicationContext, etc.). This functionality is analogous to Spring MVC's generic support for HandlerMethodArgumentResolver. - @sbrannen
    • Great idea (@kcooney / @sbrannen): I would rather have some kind of resolver magic than a tightly coupled mechanism for data injection. The resolver should be flexible enough so that it is not required to specify the test data in the test class. I'm thinking of some kind of configuration (like Spring configuration beans) - @bechte
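
As a rough illustration of the resolver idea discussed in the last few comments (generic method argument resolution in the spirit of Spring MVC's HandlerMethodArgumentResolver), an extension point might look like the sketch below. MethodParameterResolver and TestContext are made-up names used only for illustration, not an agreed-upon API.

```java
import java.lang.reflect.Parameter;

/** Minimal context a resolver might need. Placeholder type. */
interface TestContext {
    Class<?> testClass();
    Object testInstance();
}

/** Implemented by third-party libraries to supply arguments for test methods. */
interface MethodParameterResolver {

    /** Can this resolver provide a value for the given parameter? */
    boolean supports(Parameter parameter, TestContext context);

    /** Resolve the actual argument, e.g. from a data set or a Spring ApplicationContext. */
    Object resolve(Parameter parameter, TestContext context);
}
```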

Reporting / Listener API

There should be a separate API for informing interested parties (IDEs, CI servers, console ...) about what is going on inside JUnit. A rough sketch of such a listener API follows the event list below.

  • start/end of JUnit
  • start/end of setup/teardown of a test class
  • start/end of setup/teardown of a test method
  • start/end of a test method (including test result)
  • failed assert and maybe even successful assert - @jlink
  • failed assumption at the test class level
  • failed assumption at the test method level
  • skipped/ignored test class
  • skipped/ignored test method
  • custom info events that can be used by a TestSource / TestEngine to further structure a test. Examples could be something like JBehave, which might use the events below. (Events like this are also generated by [Tumbler-Glass](https://tumbler-glass.googlecode.com/hg/apidocs/tumbler/Tumbler.html); it would be really helpful to make event data available via standard reports, integrated using tools such as annotations, lambdas and static injection. Easily customizable reporting that can be integrated with CI / build tools is something that I think belongs on the "should support" list.) - @mlschechter
    • info: Scenario
    • info: Given
    • info: When
    • info: Then
    • I could think of some kind of event system that allows multiple senders / receivers to communicate without even knowing of each other. This allows both reacting to and logging events, as well as third-party libraries introducing new events on their side without touching the core code base. - @bechte
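
A listener SPI covering the events above might look roughly like this. The interface and method names are placeholders chosen to mirror the list, not a committed API; default methods are used so that a listener only needs to override the events it cares about.

```java
/** Placeholder result carrier. */
interface TestResult {
    boolean successful();
    Throwable failure(); // null if successful
}

/** Hypothetical listener SPI mirroring the event list above. */
interface RunListener {
    default void runStarted() {}
    default void runFinished() {}

    default void testClassSetupStarted(String className) {}
    default void testClassSetupFinished(String className) {}
    default void testClassTeardownStarted(String className) {}
    default void testClassTeardownFinished(String className) {}

    default void testStarted(String testId) {}
    default void testFinished(String testId, TestResult result) {}

    default void testSkipped(String testId, String reason) {}
    default void assumptionFailed(String testId, Throwable cause) {}

    /** Custom info events, e.g. "Scenario", "Given", "When", "Then" from a BDD engine. */
    default void info(String testId, String key, String value) {}
}
```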

Test Result Format

  • The exchange format of the test report (currently an XML format) is also important; a small sketch of a streaming report writer follows at the end of this section.
  • As far as I know, there is no easy API to handle the files and no specification (DTD, XSD ...) for this XML format. The files are consumed by a lot of tools (IDEs or CI tools like Jenkins). Merging multiple report files or producing a dashboard is also an important feature. I only know of an Ant task to achieve this. - @jmini
  • The XML format was created by the Apache Ant team and is produced by various build tools, not JUnit. There is an XSD specification. Most (if not all) IDEs use a custom format, because you cannot stream the XML report format during the test run, and IDE users want immediate feedback. We could provide a supported API for the XML format, but I think we should consider designing a streaming test result API (and possibly have a listener implementation that produces the Ant XML format from that API). I personally think we should leave building dashboards to other projects - @kcooney
  • I agree with @kcooney that there are essentially two separate issues here: 1) report generation (after completion of an entire test run) which could be XML, JSON, etc. and 2) real-time updates on the status of the test run (likely best served via a streaming API or events, though potentially implemented via polling of the API). - @sbrannen
  • +1 here from my side: I would also like to split these things up. The executor should define whether it would like to get updates through some kind of listener interface (streaming, events, etc.), or a complete list of results at the end of the run, or even both. I would like JUnit to provide the API but no concrete implementation (or only a very simple one). - @bechte
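
Building on the hypothetical RunListener / TestResult interfaces sketched in the previous section, a report generator could be layered on top of the streaming events and only serialize its output once the run has finished. The XML emitted below merely gestures at the Ant/Surefire-style format (no escaping, no timings) and is not a specification.

```java
import java.io.IOException;
import java.io.Writer;
import java.util.ArrayList;
import java.util.List;

/** Collects results during the run and writes a simple XML report at the end. */
class XmlReportWriter implements RunListener {

    private final List<String> testCases = new ArrayList<>();
    private final Writer out;

    XmlReportWriter(Writer out) {
        this.out = out;
    }

    @Override
    public void testFinished(String testId, TestResult result) {
        // One <testcase> element per finished test (no XML escaping for brevity).
        testCases.add(result.successful()
                ? "  <testcase name=\"" + testId + "\"/>"
                : "  <testcase name=\"" + testId + "\"><failure/></testcase>");
    }

    @Override
    public void runFinished() {
        try {
            out.write("<testsuite>\n");
            for (String testCase : testCases) {
                out.write(testCase + "\n");
            }
            out.write("</testsuite>\n");
            out.flush();
        } catch (IOException e) {
            throw new RuntimeException("Could not write report", e);
        }
    }
}
```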