Revisiting stderr flush in CLIRunner #2682

Open
rsyring opened this issue Feb 29, 2024 · 0 comments

rsyring commented Feb 29, 2024

This bug is described in detail in #2647 and #2636. I've read through those issues and am not convinced the current behavior is correct or beneficial.

From: #2636 (comment)

This isn't a bug. When using print in general, you have to flush to be sure all output is sent to stderr. That's one of the reasons Click provides echo instead.

From: #2647 (comment)

Another way to put it is: you need to explain why your suggested behavior is correct. It's certainly different than current, but that doesn't mean it's more correct.

What I know is that stdout is flushed, and stderr isn't. I don't know why that's the case, but I do know that this is the behavior I've encountered all over the place, and if I want stderr to show up at a specific time I need to flush manually. This is what click.echo does automatically. If what I know and click's test runner's current behavior is incorrect, then you need to show me sources that explain why. Otherwise, it would just mean we're hiding the actual behavior of the calls you're making.

I believe the current behavior of not flushing stderr is wrong because it results in semantics during testing that aren't present during a normal run of the command. Take:

import sys

import click
from click.testing import CliRunner


@click.command()
def cli():
    print("to stdout")
    print("to stderr", file=sys.stderr)


def test_cli():
    runner = CliRunner(mix_stderr=False)
    result = runner.invoke(cli, (), catch_exceptions=False)

    assert result.stdout.strip() == "to stdout"
    assert result.stderr.strip() == "to stderr"


if __name__ == "__main__":
    cli()

If you run this through a normal Python execution, you will always get "to stdout" and "to stderr" on the console. If there were times when "to stderr" were not printed to the console, it would be a bug in Python. If I don't care when the output comes through, only that it shows up by the time the process has exited, then I don't need to take any extra steps.
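That guarantee is easy to confirm directly (a standalone sketch using only the standard library, not code from this issue): run a child interpreter that writes to stderr without ever flushing, and check the captured output after the process has exited.

```python
import subprocess
import sys

# Child program: prints to stderr and exits without ever calling flush().
child_code = "import sys; print('to stderr', file=sys.stderr)"

result = subprocess.run(
    [sys.executable, "-c", child_code],
    capture_output=True,
    text=True,
)

# The interpreter flushes the standard streams during shutdown, so the
# unflushed text is guaranteed to be present once the process has exited.
assert "to stderr" in result.stderr
```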

However, if I run the test, it's possible the second assert fails even though the first assert will always pass. To make it reliable, I'd have to add a sys.stderr.flush() at the end of cli(), but only to get the tests to pass; it would never be needed when running the program in a normal Python process. Furthermore, it's basically impossible to add the flush only during testing because of where the flush() is needed in the current code.
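Concretely, the only workaround available today is to flush unconditionally inside the command itself. A sketch of that workaround (the try/except around the runner is my addition, to cover both the mix_stderr parameter from click 8.1 and its removal in later releases):

```python
import sys

import click
from click.testing import CliRunner


@click.command()
def cli():
    print("to stdout")
    print("to stderr", file=sys.stderr)
    # Needed only so CliRunner's captured stderr buffer sees the line;
    # a normal process would flush this automatically at interpreter exit.
    sys.stderr.flush()


def test_cli():
    try:
        runner = CliRunner(mix_stderr=False)  # click < 8.2
    except TypeError:
        runner = CliRunner()  # click >= 8.2 always separates stderr

    result = runner.invoke(cli, (), catch_exceptions=False)

    assert result.stdout.strip() == "to stdout"
    assert result.stderr.strip() == "to stderr"
```

Note the flush has to live in production code even though it only exists to satisfy the test runner, which is exactly the objection above.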

From: #2647 (comment)

You're welcome to use print to stderr, you just have to remember to flush in that case. Stdout is flushed, stderr isn't. That's true regardless of if you're using click.

If we flush for you during tests, that's hiding the bug, not fixing it.

As I demonstrated in the code above, my program works just fine when run normally. There is no bug. The tests only fail because invoke() does not match the expected semantics of a normal execution of a click command in a Python process.

It would only be flushing at the end of an invoke, which matches what Python does at the end of program execution. Therefore, I'm not sure I grok the resistance to adding this flush. It's not a performance issue. It's not a correctness issue. It's not a backwards compatibility issue. Three issues have now been raised because of this unexpected behavior, and countless other people have likely run into it without commenting or opening a new issue. So why not add it? What's the downside?
