
Dangling jobs once active and pending limits hit #99

Open
s-maj opened this issue Apr 8, 2019 · 1 comment

Comments

s-maj commented Apr 8, 2019

Goal:

Gracefully drain jobs from the scheduler when aiohttp is shutting down.

Repro:

  1. Execute:
import asyncio
import logging

import aiojobs
from aiohttp import web


async def coro(app):
    for i in range(0, 1000):
        job = await app["scheduler"].spawn(dummy())
        print(job)


async def dummy():
    await asyncio.sleep(5)


async def start_scheduler(app):
    app["scheduler"] = await aiojobs.create_scheduler(
        limit=5, pending_limit=5
    )


async def stop_scheduler(app):
    for i in range(1, 100):
        print(f"Tasks active: {app['scheduler'].active_count}")
        print(f"Tasks pending: {app['scheduler'].pending_count}")
        if len(app["scheduler"]) > 0:
            await asyncio.sleep(1)
            for job in app['scheduler']:
                print(job, job.active, job.pending)
        else:
            break

    await app["scheduler"].close()


async def start_jobs(app):
    asyncio.create_task(coro(app))


async def init_app():
    app = web.Application()
    app.on_startup.append(start_scheduler)
    app.on_startup.append(start_jobs)
    app.on_shutdown.append(stop_scheduler)

    return app



def main():
    logging.basicConfig(level=logging.DEBUG,
                        format="[%(asctime)s] %(levelname)s %(message)s",
                        )

    app = init_app()
    web.run_app(app)


if __name__ == "__main__":
    main()
  2. Once 10 or more jobs have started, hit Ctrl-C
  3. Wait until the app terminates

Expected result

No dangling tasks.

Actual result

One task is dangling.
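A toy asyncio model of the pending queue (not aiojobs itself; `check_dangling` is a hypothetical helper) shows where the extra job can get stuck: once all `pending_limit` slots are full, a further `spawn()` blocks indefinitely, so its job is neither active nor pending when shutdown begins:

```python
import asyncio


async def check_dangling(pending_limit: int = 5) -> bool:
    # Model only the pending queue: spawn() blocks once it is full.
    queue: asyncio.Queue = asyncio.Queue(maxsize=pending_limit)

    for i in range(pending_limit):  # fill every pending slot
        await queue.put(i)

    # The next spawn blocks on the full queue; its job is tracked
    # neither as active nor as pending.
    extra = asyncio.ensure_future(queue.put(99))
    await asyncio.sleep(0)          # let the extra spawn run
    dangling = not extra.done()     # still blocked: the dangling job

    extra.cancel()
    await asyncio.gather(extra, return_exceptions=True)
    return dangling


print(asyncio.run(check_dangling()))  # True
```

This mirrors the repro: with `limit=5` and `pending_limit=5`, the eleventh spawn has nowhere to go, and a plain `close()` never drains it.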

Dreamsorcerer (Member) commented Oct 15, 2022

It's unclear to me what output you're expecting (in my tests the application is killed before it finishes the shutdown), but I suspect the solution is the same as the one proposed in another issue: introduce a wait_and_close() method to handle this cleanup situation.
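The wait-then-close behaviour suggested here can be sketched in plain asyncio (a hypothetical `wait_and_close` helper, not the actual aiojobs API): give in-flight jobs a grace period, then cancel and collect whatever remains so nothing is left dangling:

```python
import asyncio


async def wait_and_close(tasks, timeout: float = 10.0):
    # Give running jobs up to `timeout` seconds to finish...
    done, pending = (await asyncio.wait(tasks, timeout=timeout)
                     if tasks else (set(), set()))
    # ...then cancel the stragglers and await their cancellation,
    # so no task outlives the shutdown.
    for t in pending:
        t.cancel()
    await asyncio.gather(*pending, return_exceptions=True)
    return len(done), len(pending)


async def demo():
    fast = asyncio.ensure_future(asyncio.sleep(0.01))
    slow = asyncio.ensure_future(asyncio.sleep(60))
    finished, cancelled = await wait_and_close([fast, slow], timeout=0.5)
    print(finished, cancelled)  # 1 1


asyncio.run(demo())
```

In the repro above, calling such a helper from `stop_scheduler` would replace the polling loop: drain for a bounded time, then close.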
