
If a performance test fails in sanity or earlier no perflog entry is created #3186

Closed
vkarak opened this issue May 7, 2024 · 3 comments · Fixed by #3189

Comments

@vkarak
Contributor

vkarak commented May 7, 2024

Since version 4.0, performance logging happens after the test finishes, and only if the test is a performance test. We need to understand the exact conditions under which this issue occurs and what triggers this behaviour.

This is also related to #2853.

@vkarak vkarak added this to the ReFrame 4.7 milestone May 7, 2024
@vkarak vkarak self-assigned this May 8, 2024
@vkarak
Contributor Author

vkarak commented May 8, 2024

The problem is that although performance logging happens when the test task finishes (on success or failure), the performance logger is set up only during the performance stage. Thus, if a test fails before that stage, it logs its performance to the null logger, which does nothing:

```python
self._perflogger = logging.getperflogger(self.check)
```

@vkarak
Contributor Author

vkarak commented May 8, 2024

Fixing this is a bit tricky. Moving the assignment of the performance logger to an earlier stage is not a solution, although it does produce a log record. The problem is that the check_perfvalues placeholder is empty, so nothing is logged regarding the performance values. This is not bad per se, but since the perflog handler does not know the actual performance variables, it creates a new log file where it dumps the entry for the failed test, instead of appending to the existing file (because the header of the perflog file has changed). We could generate check_perfvalues (the perfvalues attribute of the test) in an earlier stage, but this is not sufficient either, as performance variables are often set during the performance stage, which will not be executed at all if the test fails in a previous stage.
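The file-rotation behaviour described above (a changed header forces a new log file rather than appending) can be sketched roughly like this. The function name and rotation logic are assumptions for illustration only, not ReFrame's actual perflog handler:

```python
import os


def open_perflog(path, header):
    """Append to ``path`` if its header matches; otherwise start fresh.

    Simplified sketch: when the set of performance variables (and hence
    the header line) changes, the handler cannot append to the existing
    file, so it rotates the old file aside and creates a new one.
    """
    if os.path.exists(path):
        with open(path) as fp:
            existing = fp.readline().rstrip('\n')

        if existing != header:
            # Header changed: rotate the old file and start a new one.
            os.replace(path, path + '.old')

    if not os.path.exists(path):
        with open(path, 'w') as fp:
            fp.write(header + '\n')

    return open(path, 'a')
```

A test that fails before the performance stage would log with an empty set of performance variables, producing a different header and hence a rotated file, which matches the behaviour described above.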

Maybe the best solution would be to continue the test in dry-run mode once it has failed a stage so that the performance stage gets executed up to the point of evaluating the performance variables.
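The proposed workaround could look roughly like the sketch below: once a stage fails, the remaining stages run in dry-run mode, so the performance stage still executes far enough to resolve the performance variables. All names here are made up for illustration; this is not ReFrame's pipeline code.

```python
def run_pipeline(stages, dry_run_stage):
    """Run each stage; after the first failure, continue in dry-run mode.

    ``stages`` is a list of (name, callable) pairs; ``dry_run_stage``
    is invoked with the stage name instead of the real stage once a
    failure has occurred. Returns the (name, error) of the first
    failure, or None on full success.
    """
    failed = None
    for name, stage in stages:
        if failed is None:
            try:
                stage()
            except Exception as err:
                failed = (name, err)
        else:
            # A previous stage failed: execute subsequent stages in
            # dry-run mode so that the performance stage can still set
            # up its logger and perfvalues before the entry is logged.
            dry_run_stage(name)

    return failed
```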

@vkarak
Contributor Author

vkarak commented May 13, 2024

I think this is not a bug, but rather a limitation of the current implementation that should be documented. Therefore, I am marking this issue as an "enhancement".
