Possibilities to estimate the total test duration before starting the first test? #4696
Comments
Can you elaborate on what your specific ask here is? Do you want to see all attributes from a test class/method in the output stream, including your own custom attribute? Besides that, NUnit doesn't have any mechanism for such an estimation out of the box. |
Potential approach:
Upon starting a test run:
Maybe such an approach is far from today's possibilities, or maybe most or even all of it is already there. Any input and thoughts are appreciated. |
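The reflection-based estimate discussed in this thread could be sketched roughly as below. Note this is a minimal sketch under assumptions: `DurationCategoryAttribute` here is a hypothetical re-creation of the poster's custom attribute (its real shape may differ), and `Demo` is an illustrative fixture, not from the issue.

```csharp
using System;
using System.Linq;
using System.Reflection;

// Hypothetical re-creation of the custom attribute mentioned in the issue;
// the real attribute in the poster's code base may look different.
[AttributeUsage(AttributeTargets.Method | AttributeTargets.Class)]
public class DurationCategoryAttribute : Attribute
{
    public int Seconds { get; }
    public DurationCategoryAttribute(int seconds) => Seconds = seconds;
}

public static class DurationEstimator
{
    // Sum the declared durations of all annotated methods in the given
    // assemblies. Any cross-assembly runner could call this for each
    // loaded test assembly before the first test starts.
    public static int EstimateSeconds(params Assembly[] assemblies) =>
        assemblies
            .SelectMany(a => a.GetTypes())
            .SelectMany(t => t.GetMethods())
            .SelectMany(m => m.GetCustomAttributes<DurationCategoryAttribute>())
            .Sum(attr => attr.Seconds);
}

// Illustrative fixture carrying the hypothetical attribute:
public class Demo
{
    [DurationCategory(60)] public void SlowTest() {}
    [DurationCategory(30)] public void AnotherSlowTest() {}
}
```

With the `Demo` class above, `DurationEstimator.EstimateSeconds(typeof(Demo).Assembly)` returns 90.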
Perhaps you can use the NUnitLite runner (because it is simple and direct) and just use the Explore command. That way you would get that loaded list of test assemblies and the tests they contain. You can modify the code there to dump the list you require. The only way to add a cross-assembly setupfixture that would work, would be to use either a custom runner, or NUnitLite. You could possibly use the NUnit.Console, but that adds another layer. I believe that modifying a runner would be easier than trying the Setupfixture road. |
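The NUnitLite route described above could look roughly like this. This is a sketch, not a definitive implementation: it assumes the test project references the NUnitLite NuGet package, and `tests.xml` is an arbitrary output file name.

```csharp
using NUnitLite;

public static class Program
{
    public static int Main(string[] args)
        // --explore dumps the discovered test tree to a file
        // instead of executing the tests.
        => new AutoRun().Execute(new[] { "--explore=tests.xml" });
}
```

From there, the generated file (or a modified copy of the NUnitLite explore code) could be post-processed to build the list of assemblies, fixtures and tests.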
I see, I may give that a try. Does the NUnitLite runner support categories? https://docs.nunit.org/articles/nunit/running-tests/NUnitLite-Options.html doesn't mention such an option anymore, while the legacy wiki at https://github.com/nunit/docs/wiki/NUnitLite-Options/ec67ba644536af8c82c0c537d2f2d84bc08312b3 still mentions it. |
Yes, it supports categories. Look at the Where clause. |
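For illustration, a category filter with the `--where` option could look like this; `MyTests.exe` is a placeholder for a self-executing NUnitLite test assembly, and `Slow` is an assumed category name:

```shell
# Run only tests tagged with the "Slow" category,
# using NUnit's test selection language:
./MyTests.exe --where "cat == Slow"
```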
Based on the output of an experimental run with
...in #2124 supports the finding. However, @OsirisTerje you stated...
...which indicates this might be available somehow. But I haven't figured out how. Any further thoughts are highly appreciated. |
Digging a bit further, #3202 contains some additional input but also indicates that neither the NUnit framework nor any of the runners (incl. NUnitLite) supports iterating over multiple assemblies before starting the first test. Looking further at the NUnitLite options, I guess one approach could be to add a new option there. I conclude there currently is no feasible way to achieve what I would like. Feel free to decide how to move ahead with this issue:
|
I assume you are aware that NUnitLite turns the test assembly into a self-executing test. At start, it calls into the NUnitLite runner, so it is the test assembly that is calling the runner. It then doesn't make sense to call other assemblies, besides letting one test assembly be a parent and adding references to the others. The other option is to create a small shell script that calls the different NUnitLite-enabled test assemblies and executes them. |
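The shell-script option mentioned above could be sketched as follows. The assembly names are placeholders for your own projects; each self-executing NUnitLite assembly exits non-zero when any of its tests fail.

```shell
#!/bin/sh
# Sketch: run each NUnitLite-enabled test assembly in sequence
# and report overall success/failure.
run_all() {
    rc=0
    for exe in "$@"; do
        "$exe" || rc=1   # remember any failing run
    done
    return $rc
}

# Replace with your own self-executing test assemblies, e.g.:
# run_all ./TestProjectA ./TestProjectB ./TestProjectC
```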
Exactly. As initially stated, "my project spans across multiple assemblies": there indeed is a "main" test assembly at the top of the chain, just as the application's "main" assembly directly or indirectly references all underlying assemblies. That's the reason behind the third proposal to rename this issue to "obtain a list of all test assemblies, fixtures and tests before starting a test run". Whether the approach is based on the NUnitLite runner, a "global" (i.e. "process-wide" and "cross-assembly") SetUpFixture, or something else doesn't matter to my use case. But as stated this morning, it's fully OK if NUnit decides not to support this. |
If your structure is like that, you don't need any list of assemblies. It will execute all your tests, since you reference them. |
That's interesting; then either I'm doing something wrong, or NUnitLite or NUnit doesn't behave as expected. Here's a repro project: Each of the 3 projects contains 1 test. The
|
Hi! Sorry about not being clear here. I moved your repro into the nunit.issues repo, here: https://github.com/nunit/nunit.issues/tree/main/Issue4696 The point is that you need to address the test fixtures. That is all that is required. The way it is done is to just inherit the test classes (aka test fixtures). These inherited classes don't need to do anything, so they could also be auto-generated. They will provide the necessary binding and sort of "move" the tests into the correct scope. The code added is then:

```csharp
public class T1 : Issue4696.TestProject1.Tests
{}

public class T2 : Issue4696.TestProject2.Tests
{}
``` |
I see, thanks for clarifying. Unfortunately this isn't really practical with ~150 test fixtures in ~15 test assemblies. Still, the information provided could help others with just a few assemblies and fixtures to achieve something similar. |
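Since the comment above notes the binding classes "could also be auto-generated", here is a minimal sketch of such a generator. It is hypothetical and simplified: a real version would filter on `[TestFixture]` or on classes actually containing test methods, and would have to skip sealed classes, generic types, and classes without an accessible constructor.

```csharp
using System.Linq;
using System.Reflection;
using System.Text;

// Hypothetical sketch: emit one empty subclass per public, non-abstract
// class in the given assemblies, producing source text for the
// "binding" classes described above.
public static class FixtureBindingGenerator
{
    public static string Generate(params Assembly[] assemblies)
    {
        var sb = new StringBuilder();
        int i = 0;
        foreach (var type in assemblies.SelectMany(a => a.GetTypes())
                                       .Where(t => t.IsClass && t.IsPublic && !t.IsAbstract))
        {
            sb.AppendLine($"public class Binding{i++} : {type.FullName} {{ }}");
        }
        return sb.ToString();
    }
}

// Illustrative stand-in for one of the ~150 fixtures:
public class SampleFixture {}
```

The emitted text could then be compiled into the "main" test assembly as part of the build, avoiding the hand-written wrappers.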
I thought somebody else had already tried this, but I found nothing.
I have a bunch of tests communicating with speed-limited embedded systems, resulting in a total test duration of several hours. To better handle this, I have marked the tests with a `DurationCategoryAttribute` that states the approximate duration of all long tests. To handle this even better, I have tried to reflect on these attributes to sum up an estimate of the total test duration, considering input from #3627. However, as my project spans multiple assemblies, and NUnit loads one assembly after the other, it seems impossible to estimate the duration across all test assemblies before the first test gets started. #4106 doesn't seem to offer any alternative approach to get this information either.
I am aware of the limitations of estimating the total test duration, e.g. `Parallelizable` vs. sequential test execution. However, the available connected devices limit the level of parallelism, and filtering the tests for a given device will for sure result in sequential test execution, which would enable calculating the estimate for each set, rather than just running it each night and then seeing how long it took. Is there anything I have missed, such as an NUnit feature, an issue discussion, or a piece of documentation that could be used to achieve such an estimation?