
Issue with Logging to stdout in Laravel 10.21 using Monolog v3 EXCEPTION#Cannot log request: fwrite(): Write of 378 bytes failed with errno=32 Broken pipe #1860

xuandung38 opened this issue Dec 8, 2023 · 32 comments

@xuandung38

Monolog version 3

Hello,

I'm currently working with Laravel 10.21 and have encountered an issue when attempting to write logs to stdout using Monolog version 3. My configuration is as follows:

'stdout' => [
    'driver' => 'monolog',
    'handler' => StreamHandler::class,
    'formatter' => '\Monolog\Formatter\JsonFormatter',
    'with' => [
        'stream' => 'php://stdout',
    ],
],

However, this setup results in an exception:

Exception: /var/www/html/vendor/monolog/monolog/src/Monolog/Handler/StreamHandler.php::162 -> EXCEPTION#Cannot log request: fwrite(): Write of 378 bytes failed with errno=32 Broken pipe

I noticed a similar issue previously reported as #1634, but it seems the problem persists. Any guidance or suggestions to resolve this would be greatly appreciated.

Thank you for your support.

Best regards,

xuandung38 added the Bug label Dec 8, 2023
msobkowiak-olx commented Dec 11, 2023

It is because of this:

https://github.com/Seldaek/monolog/blob/main/src/Monolog/Handler/StreamHandler.php#L118

is_resource is not enough to detect that the other end of an already closed socket is gone.

[EDITED - the first solution was not the proper one]

The library should handle recreation of $this->stream on fwrite() error here:
https://github.com/Seldaek/monolog/blob/main/src/Monolog/Handler/StreamHandler.php#L149

The only reliable way to tell that the pipe is broken is to try writing to it.

Also assuming fwrite will always succeed is a bit overoptimistic: https://github.com/Seldaek/monolog/blob/main/src/Monolog/Handler/StreamHandler.php#L160C24-L160C24

Sample code for this (since fwrite does not have to write everything in a single call, we should also check the number of bytes written):

    protected function streamWrite($stream, LogRecord $record): bool
    {
        $message = (string) $record->formatted;
        $written = fwrite($stream, $message);
        if ($written === false) {
            return false;
        }

        // fwrite may write fewer bytes than requested; treat a partial
        // write as a failure so the caller can retry or reopen the stream
        return $written === strlen($message);
    }

Locally, fwrite should usually manage to write all the bytes to stdout in one call, though I would not rely on that for network sockets.
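
The partial-write concern above can be sketched outside PHP too; here is a minimal Python illustration (a hypothetical helper, not Monolog code) of a loop that keeps writing until the buffer is drained:

```python
import os

def write_all(fd: int, data: bytes) -> None:
    """Write data fully; os.write may accept only part of the buffer."""
    view = memoryview(data)
    while view:
        written = os.write(fd, view)  # raises OSError (e.g. EPIPE) on failure
        view = view[written:]

# Usage: write into a pipe, then read everything back
r, w = os.pipe()
payload = b"x" * 1000
write_all(w, payload)
os.close(w)

chunks = []
while True:
    chunk = os.read(r, 4096)
    if not chunk:
        break
    chunks.append(chunk)
os.close(r)
print(len(b"".join(chunks)))  # 1000
```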

@ls-tyler-roach

When can we anticipate a fix to this bug?

Seldaek (Owner) commented Dec 18, 2023

See also comments on #1634 - I'm still waiting for a way to narrow this down.

@msobkowiak-olx

@Seldaek thanks.

This is really the same class of problem as programming over a network. Although the pipe is a local object, the OS can close it at any time, and a close on the "remote" side (the side external to the library's execution context) is not detectable until you try to write to the pipe. There is not much we can do about it, as this behaviour is implemented in the kernel; the background is explained here:

https://stackoverflow.com/questions/8369506/why-does-sigpipe-exist
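
A self-contained demonstration of the point (Python, POSIX assumed): a write into a pipe whose read end has been closed fails with errno 32, the same errno reported in the exception above:

```python
import errno
import os

r, w = os.pipe()
os.close(r)  # the "remote" side closes; the writer has no way to know yet

try:
    os.write(w, b"log line\n")  # only now does the breakage surface
except OSError as e:
    # Python raises BrokenPipeError here; errno 32 is "Broken pipe"
    print(e.errno == errno.EPIPE)  # True
finally:
    os.close(w)
```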

Seldaek (Owner) commented Dec 19, 2023

Yeah, fair enough. Maybe we need to check before writing, or at least handle that failure and reopen the stream if possible. I am still curious why this happens now and didn't before, though.

msobkowiak-olx commented Dec 19, 2023

Well, possibly due to a file descriptor leak; I think that is the obvious scenario that can happen locally.

@ls-tyler-roach

Is it possible something with the server images changed, outside of any PHP changes, that could have caused this?

msobkowiak-olx commented Dec 21, 2023

Very unlikely, unless PHP 8 implements is_resource or write() differently. Writing to an already closed socket will always behave like that.

The other option is lower sysctl limits for file handles (so the bug has always been there but had less chance of occurring).
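
For anyone wanting to check the limits theory, a few standard commands (Linux paths assumed) show the relevant numbers:

```shell
# Soft per-process descriptor limit for the current shell
ulimit -n

# Descriptors currently open by a given process (here: this shell)
ls /proc/$$/fd | wc -l

# System-wide ceiling on open file handles
cat /proc/sys/fs/file-max
```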

#1862

There is also this interesting claim, which would explain the issue.

Rydgel commented Jan 24, 2024

We have the same issue in production; it occurred after upgrading to PHP 8.2 and Monolog 3.

EDIT: so far it seems that file descriptors build up until they reach the kernel limit, preventing any new fwrite. Rebooting our servers fixes the issue for a while. It seems the stream isn't closed properly, or at least not in all cases. I've noticed that this should happen during the handler's __destruct() phase, but are there edge cases where it isn't called (a php-fpm process crash or something else)?
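
A descriptor buildup of this kind can be confirmed without rebooting; something like the following (Linux, pgrep assumed available) prints the open fd count per worker, and a steadily growing number would confirm the leak:

```shell
# Print "<pid> <open fd count>" for every php-fpm worker
for pid in $(pgrep php-fpm); do
  printf '%s %s\n' "$pid" "$(ls "/proc/$pid/fd" 2>/dev/null | wc -l)"
done
```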

@ls-tyler-roach

Any updates on if/when we may see a resolution to this issue?

TheLevti commented Mar 6, 2024

Hello! Encountering the same issue on many of my projects as I log to stdout with json formatter.

smepti commented Apr 5, 2024

+1 for the issue

@stevenjweirstreet

+1 Also having this issue with Laravel on PHP 8.2 currently.

Seldaek (Owner) commented Apr 12, 2024

Just to grasp this better, as it seems nobody feels like digging into it (+1s are nice buuut they don't solve problems..). Does it happen only in long running worker processes, or also when using fpm just processing web requests?

Rydgel commented Apr 12, 2024

For me, it only happened in fpm web requests. Our jobs never had this issue.

Seldaek (Owner) commented Apr 15, 2024

Ok thanks, this is odd though..

[@Rydgel] so far it seems that file descriptors build up until reaching the limit of the kernel, preventing any new fwrite. Rebooting our servers fix the issue for a while. It seems like the stream isn't closed properly, or at least not in all cases. I've noticed that it should be done during the __destruct() phase of the handler, but is there some edge cases where it isn't called? (php-fpm process crash or something else)

The way PHP works, resources should be closed and file handles released at the end of a request, even if you never call fclose() explicitly. It just garbage collects everything, so if it started happening with PHP 8.2 I am wondering if it is some bug in PHP itself.

Per #1634 (comment) maybe PHP 8.1 is the cause?

I still don't really see any clear pattern here, except for everyone using Laravel.. So maybe it's something Laravel does?

Closing the handle every time seems to cause a 15-20% slowdown in log writes vs opening it once, if you log a lot. If you log only one line it is obviously the same, so I am not sure whether this is acceptable. Perhaps if we ensure batch writes at least happen within one stream open, the impact would be small enough.

Seldaek (Owner) commented Apr 15, 2024

If you want to try the fix, you can require monolog/monolog dev-fclose2 as 2.99 (2.x) or dev-fclose3 as 3.99 (3.x). I'd be happy to get confirmation that this fixes the problem before merging, as it is strictly speaking a performance-degrading patch; I don't want to do it if it's not worth it.

See #1882 (2.x) and #1883 (3.x) for the patches.

@msobkowiak-olx

Hey @Seldaek, my team experiences it on a custom PHP application. I can confirm that the issue started with the 8.x branch; however, I would not assume that resources are always "garbage-collected". We use PHP-FPM, and the worker processes live across multiple requests.

@msobkowiak-olx

@Seldaek and perhaps others - can you try out this quick patch I composed? Sadly I don't have time to test it.

#1884

TheLevti commented Apr 15, 2024

As already mentioned, this only happens in the fpm context, not for long-running processes/cronjobs.

I will check if we can run the patch to see if it resolves the issue; going to try it out now.

smepti commented Apr 15, 2024

@Seldaek you wrote "except for everyone using Laravel", but we don't use Laravel. We have a Symfony project, PHP 8.1.27, Monolog 2.9.1, and this issue happens in fpm requests.

Seldaek (Owner) commented Apr 15, 2024

Ok, then Laravel is excluded too. I really don't know what is causing it, except perhaps having too many parallel requests/workers and hitting system file handle limits. Anyway, hopefully my patch helps.

smepti commented Apr 15, 2024

[@Seldaek] Ok then Laravel is excluded too. I really don't know what is causing it except for having too many parallel requests/workers doing stuff perhaps and reaching system file handle limits. Anyway hopefully my patch does help.

Our application runs in Docker, and there seems to be no limit on the number of open files there (systemctl show docker | grep LimitNOFILE= outputs LimitNOFILE=infinity). Also pm.max_children is set to 15, so it is unrealistic that we hit a hard limit on open files. If only someone could set up a demo with load testing on a clean application to try to reproduce this issue, perhaps with pm.max_requests set to 1 to make sure the cause is in php-fpm.
If no one takes it, I can do it later.
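
For the proposed repro, the relevant pool settings would look roughly like this (values are illustrative, taken from the numbers mentioned above):

```ini
; php-fpm pool sketch for reproducing: recycle each worker after one
; request, so any descriptor cleanup bug surfaces immediately
pm = dynamic
pm.max_children = 15
pm.max_requests = 1
```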

@TheLevti

Running composer require monolog/monolog:"dev-fclose3" gives me the following; any quick idea how to resolve it?

  Problem 1
    - Root composer.json requires monolog/monolog dev-fclose3, found monolog/monolog[dev-fclose3] but these were not loaded, likely because it conflicts with another require.
  Problem 2
    - launchdarkly/launchdarkly-php is locked to version 4.3.0 and an update of this package was not requested.
    - launchdarkly/launchdarkly-php 4.3.0 requires monolog/monolog ^1.6|^2.0|^3.0 -> found monolog/monolog[dev-main, 1.6.0, ..., 1.x-dev, 2.0.0-beta1, ..., 2.x-dev, 3.0.0-RC1, ..., 3.x-dev (alias of dev-main)] but it conflicts with your root composer.json require (dev-fclose3).
  Problem 3
    - illuminate/log v10.48.8 requires monolog/monolog ^3.0 -> found monolog/monolog[dev-main, 3.0.0-RC1, ..., 3.x-dev (alias of dev-main)] but it conflicts with your root composer.json require (dev-fclose3).
    - laravel/lumen-framework v10.0.3 requires illuminate/log ^10.0 -> satisfiable by illuminate/log[v10.48.8].
    - laravel/lumen-framework is locked to version v10.0.3 and an update of this package was not requested.

I guess composer.json should have an entry in:

"extra": {
    "branch-alias": {
        "dev-main": "3.x-dev"
    }
},

Seldaek (Owner) commented Apr 16, 2024

@TheLevti that's why I said to require "dev-fclose3 as 3.99", so that Composer aliases it and it passes the ^3.0 requirements other packages may have on it.
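
Concretely, the command with the inline alias (as given in the earlier comment; the quoting matters so the shell passes "as 3.99" through to Composer) would be:

```shell
# Alias the test branch so other packages' ^3.0 constraints still match
composer require monolog/monolog:"dev-fclose3 as 3.99"
```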

TheLevti commented Apr 17, 2024

Worked; running it now on our test system. This might take some time to validate, so I guess in a few days I will let you know if the issue still occurs.

It would be nice if someone else with the same issue could do the same.

Seldaek (Owner) commented Apr 17, 2024

Great, thanks for trying, I'll merge and release in a couple weeks maybe once we have some real world feedback :)

TheLevti commented Apr 19, 2024

Instead of closing the file after each log, could we do that on shutdown? Like registering a shutdown function to close the resources, or, if it's during a framework lifecycle, as part of a request/response cleanup step.

Seldaek (Owner) commented Apr 19, 2024

But that is done already in close(), yet it doesn't seem to work..

@TheLevti

🤔

Ok, let's see where the patch leads us. We are going to deploy this change further into production next week and monitor it there. Unfortunately it's hard to reproduce reliably, which is why I think it is important that more people deploy it in their applications and report back their results.

HaizeW commented Apr 30, 2024

Hello,
We've been running into the same problem in our testing and develop environments.
We use Monolog 2.9.2 on Magento 2.4.6 and PHP 8.1, and we encountered the problem when trying to write the logs to both stdout and stderr.
Unfortunately, the fclose2 patch didn't help much.
The only thing I found that fixed the problem is this, in src/Monolog/Handler/StreamHandler.php:

protected function streamWrite($stream, array $record): void
{
    try {
        fwrite($stream, (string) $record['formatted']);
    } catch (\Exception $e) {
        // Flush the stream's pending data, then retry the write once
        fdatasync($stream);
        fwrite($stream, (string) $record['formatted']);
    }
}

For some reason, executing fdatasync before retrying fwrite fixed the problem; fsync also did the job.
I don't really know why this fixed it, or whether this is the cleanest way to do it, but hopefully it will help you patch the module.

@TheLevti

If the patch does not help, then we should not go forward with it, as it's definitely a performance hit to constantly open/close the file.

9 participants