"Closing the transport connection timed out." caused by race condition #1975

Open
robin-mader-bis opened this issue Feb 22, 2024 · 2 comments
Labels
agent-python, bug, community

Comments

@robin-mader-bis

Describe the bug: Occasionally, when using elasticapm.Client (without a framework), the transport thread blocks forever during process shutdown (in the atexit handler) while trying to send data to the APM server, and is subsequently killed by the thread manager once the configured timeout is reached. This causes "Closing the transport connection timed out." to be printed to the command line, and the messages remaining in the buffer are lost.

This seems to be caused by a race condition involving the atexit handler of elasticapm.Client and the weakref.finalize of urllib3.connectionpool.HTTPConnectionPool (which uses an atexit handler under the hood) that calls _close_pool_connections. A timeline causing this bug looks like this (a minimal sketch of the pool behavior follows the list):

  1. The process is about to shutdown. atexit handlers are called.
  2. _close_pool_connections is called while all connections are in the pool. All existing connections are disposed.
  3. The elasticapm.Client atexit handler is called, sending the "close" event to the transport thread.
  4. The transport thread handles the "close" event, flushing the buffer and trying to send remaining data to the APM server.
  5. urlopen will block the transport thread forever while waiting to get a connection from the connection pool (the pool manager uses block=True and no pool timeout is configured, so the only way for the call to return is for another thread to put a connection back into the pool).
  6. The thread manager kills the thread after the configured timeout is reached, printing the error message and losing all data in the buffer.
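
The following minimal sketch (not the agent's code) illustrates steps 2 and 5 in isolation. It assumes urllib3 2.x internals, where the idle connections live on the private pool.pool queue and the weakref finalizer runs the private helper _close_pool_connections at interpreter shutdown; example.com is just a placeholder host.

import urllib3
from urllib3.connectionpool import _close_pool_connections  # private helper run by the finalizer

pool = urllib3.HTTPConnectionPool("example.com", maxsize=1, block=True)

# Step 2: complete one request so the single connection sits idle in the pool,
# then close and drain the idle connections the way the weakref finalizer does.
pool.urlopen("GET", "/", retries=False)
_close_pool_connections(pool.pool)

# Step 5: with block=True and no pool timeout, this waits forever for a
# connection that nobody will ever put back into the pool.
pool.urlopen("GET", "/", retries=False)  # hangs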

The reason why this does not occur consistently is that _close_pool_connections does not clean up connections which are currently in use (e.g. connections being used in another thread). If a request is in progress when _close_pool_connections is called, the associated connection "survives" the cleanup, is added back to the pool afterwards, and can be reused by the transport thread (which may be a bug/unintended behavior of urllib3, since it claims HTTPConnectionPool is thread safe).

To Reproduce

The following minimal example reproduces the issue:

import time

import elasticapm

# NOTE: You should be able to remove the "config" argument in your environment
client = elasticapm.Client(
    service_name="<SERVICE>",
    server_url="<APM_SERVER_URL>",
    secret_token="<SECRET_TOKEN>",
    config={
        "SERVER_CA_CERT_FILE": "<INTERNAL_CA_FILE_PATH>",
        "GLOBAL_LABELS": {"Tenant": "<TENANT>"},
    },
)

client.capture_message("Test")

# Give the client time to resolve all internal network requests, ensuring
# that all urllib connections are in the pool when the atexit handlers are called
time.sleep(10)

As is the case with race conditions, you might have to fiddle with the sleep timing a little. 10 seconds works quite reliably in my environment, but you may need a few seconds more or less, depending on yours.

Environment

  • OS: Windows 10
  • Python version: 3.11.7
  • package versions: urllib3==2.2.1
  • APM Server version: 8.11.3
  • Agent version: elastic-apm==6.20.0

Additional context

A workaround for my use case is to use a custom Transport class that uses a non-blocking pool. I don't know the elastic-apm code base well enough to say whether this causes problems in other parts of the package, but it resolves the issue for me without any other noticeable side effects.

from elasticapm.transport.http import Transport  # the agent's urllib3-based HTTP transport (defines _pool_kwargs)


def get_import_string(cls) -> str:
    module = cls.__module__
    if module == "builtins":
        # avoid outputs like 'builtins.str'
        return cls.__qualname__
    return module + "." + cls.__qualname__

class NonBlockingTransport(Transport):
    def __init__(self, *args, **kwargs) -> None:
        super(NonBlockingTransport, self).__init__(*args, **kwargs)
        # With block=False, urllib3 opens a new connection when the pool is
        # empty instead of waiting forever for one to be returned.
        self._pool_kwargs["block"] = False

# Use like this:
client = elasticapm.Client(
    ...,
    config={
        ...,
        "TRANSPORT_CLASS": get_import_string(NonBlockingTransport),
    },
)
github-actions bot added the agent-python, community, and triage labels on Feb 22, 2024
basepi added the bug label and removed the triage label on Feb 22, 2024

basepi commented Feb 22, 2024

Thanks for the report! This isn't very high priority since it only happens on shutdown and emits an error. But we should definitely get this fixed.


ngocmac commented May 7, 2024

Hello,

I encountered the same problem when trying to monitor a Python script with Elastic APM.
Here is my code:

import elasticapm
from elasticapm import Client

if __name__ == "__main__":
    client = Client(config=ES_APM_CONFIGURATION)
    elasticapm.instrument()
    client.capture_message("Start job do_something")
    client.begin_transaction(transaction_type="script")
    do_something()
    client.end_transaction(name=__name__, result="success")

In ES_APM_CONFIGURATION, I have: SERVICE_NAME, SECRET_TOKEN, SERVER_URL, SERVICE_VERSION, ENABLED, ENVIRONMENT

I tried to add the get_import_string function and NonBlockingTransport class as suggested by @robin-mader-bis, but I got an error:
AttributeError: 'NonBlockingTransport' object has no attribute '_pool_kwargs'

I only received the message "Start job do_something" and nothing else. I don't know how to resolve this problem.

Thanks,
