
Background Tasks block other requests when I use an HTTP middleware #4616

Closed
9 tasks done
w-Bro opened this issue Feb 24, 2022 · 5 comments
Labels
question Question or problem question-migrate

Comments

@w-Bro

w-Bro commented Feb 24, 2022

First Check

  • I added a very descriptive title to this issue.
  • I used the GitHub search to find a similar issue and didn't find it.
  • I searched the FastAPI documentation, with the integrated search.
  • I already searched in Google "How to X in FastAPI" and didn't find any information.
  • I already read and followed all the tutorials in the docs and didn't find an answer.
  • I already checked if it is not related to FastAPI but to Pydantic.
  • I already checked if it is not related to FastAPI but to Swagger UI.
  • I already checked if it is not related to FastAPI but to ReDoc.

Commit to Help

  • I commit to help with one of those options 👆

Example Code

import time

from fastapi import FastAPI, BackgroundTasks, Request, Response
from loguru import logger

app = FastAPI()


@app.middleware("http")
async def logger_request(request: Request, call_next) -> Response:
    logger.debug(f"http middleware [{request.url}]")
    response = await call_next(request)
    return response


class Worker:
    def __init__(self, arg):
        self.arg = arg
        logger.debug(f"worker[{arg}] init")

    def work(self):
        for i in range(10):
            logger.debug(f"worker[{self.arg}] work[{i}]")
            time.sleep(1)


@app.get("/")
async def root():
    return {"message": "Hello World"}


@app.get("/work/{worker_id}")
async def work(back_task: BackgroundTasks, worker_id: int):
    worker = Worker(worker_id)
    back_task.add_task(worker.work)

    return {"message": f"work[{worker_id}] created"}


@app.get("/hello/{name}")
async def say_hello(name: str):
    return {"message": f"Hello {name}"}

Description

  • the route "/work/{worker_id}" creates a task (a sync task, actually) in the background; after calling it, other new requests are stuck until the task finishes
  • it only happens when I use an HTTP middleware
  • I know this is a Starlette issue, but I could not find a solution yet; can anybody tell me how to solve this?

Operating System

Windows

Operating System Details

No response

FastAPI Version

0.74.1

Python Version

Python 3.7.10

Additional Context

No response

@w-Bro w-Bro added the question Question or problem label Feb 24, 2022
@rafsaf

rafsaf commented Feb 24, 2022

You are blocking the event loop by calling sleep, which is synchronous. The same is true for any sync code. Your code runs in a single thread in a single process. Async works well when there are a lot of IO operations (DB calls, server-to-server requests), because during those waits the event loop can do other work; but remember that at the end of the day it is still one process and one thread, and a CPU-bound task will block it.
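A minimal, self-contained sketch of this (hypothetical helper names, not FastAPI code): a side coroutine only gets scheduled while the main task is awaiting, not while it sits inside a synchronous time.sleep():

```python
import asyncio
import time


async def blocking_work():
    time.sleep(0.2)  # synchronous: the whole event loop is stuck here


async def cooperative_work():
    await asyncio.sleep(0.2)  # asynchronous: the loop is free to run other tasks


async def ticks_while_running(work):
    # Count how often this side coroutine gets scheduled while `work` runs.
    task = asyncio.ensure_future(work)
    ticks = 0
    while not task.done():
        ticks += 1
        await asyncio.sleep(0.02)
    return ticks


async def main():
    blocked = await ticks_while_running(blocking_work())
    free = await ticks_while_running(cooperative_work())
    return blocked, free


blocked, free = asyncio.run(main())
print(blocked, free)  # the blocking version starves the side coroutine
```

With the blocking version the side coroutine runs only once or twice; with the cooperative one it ticks roughly ten times during the same 0.2 s wait.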

Of course there are options to scale this; for example, see this recent reddit discussion: https://www.reddit.com/r/Python/comments/sxovwp/async_io_tasks_vs_threads/

If you are using uvicorn, run uvicorn --help and read about workers:

--workers INTEGER               Number of worker processes. Defaults to the
                                  $WEB_CONCURRENCY environment variable if
                                  available, or 1. Not valid with --reload.

Having more workers (processes) gives you the ability to run multiple pieces of CPU-bound code in parallel (at most one per process).

Other, maybe better, setups put workers on entirely different hosts/processes/containers behind a queue: you add a job to be done to Redis, for example, and a 'worker' instance collects jobs and executes them one after another. That requires a more complex system, so if your requirement is to have at most a few background tasks at once, more processes should be enough for you.
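The queue idea can be sketched in-process with asyncio.Queue standing in for Redis (all names here are hypothetical, not part of FastAPI): request handlers only enqueue a job and return immediately, while a single consumer executes jobs one after another:

```python
import asyncio


async def consumer(queue: asyncio.Queue, results: list):
    # The 'worker' instance: pulls jobs and runs them one after another.
    while True:
        job = await queue.get()
        if job is None:  # sentinel value used to shut the consumer down
            break
        results.append(f"done-{job}")  # stand-in for the real blocking work


async def main():
    queue = asyncio.Queue()
    results = []
    worker = asyncio.create_task(consumer(queue, results))
    # The 'request handlers': enqueue a job and return at once.
    for job_id in range(3):
        await queue.put(job_id)
    await queue.put(None)  # tell the consumer to stop
    await worker
    return results


results = asyncio.run(main())
print(results)  # ['done-0', 'done-1', 'done-2']
```

In a real deployment the queue would live in Redis (or similar) and the consumer would be a separate process, but the shape is the same.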

@w-Bro w-Bro closed this as completed Feb 25, 2022
@w-Bro w-Bro reopened this Feb 25, 2022
@w-Bro
Author

w-Bro commented Feb 25, 2022


It's on me that I forgot to say I had already used multiple workers, even 16 (one per core of my AMD CPU); it still gets stuck.
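For reference, independent of the middleware question: one general way to keep the event loop free even in a single worker process is to hand the blocking call to an executor. A minimal standard-library sketch, where cpu_bound is a hypothetical stand-in for Worker.work:

```python
import asyncio
import time


def cpu_bound(n: int) -> int:
    # Hypothetical stand-in for Worker.work(): blocking, synchronous code.
    time.sleep(0.1)
    return n * n


async def main():
    loop = asyncio.get_running_loop()
    # None selects the default ThreadPoolExecutor; the event loop keeps
    # serving other tasks while the blocking calls run in pool threads.
    futures = [loop.run_in_executor(None, cpu_bound, i) for i in range(3)]
    return await asyncio.gather(*futures)


results = asyncio.run(main())
print(results)  # [0, 1, 4]
```

For truly CPU-heavy work a concurrent.futures.ProcessPoolExecutor can be passed instead of None, since threads share the GIL.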

@ryuujo1573

Hi, I might be stuck in a similar situation. What's wrong with my code?

import hmac

from fastapi import Request, Response, status

@app.middleware("http")
async def signature(request: Request, call_next):
    # Note: headers.get() never raises AttributeError; it returns None
    # when the header is missing, so check for that instead.
    signature = request.headers.get('X-Coding-Signature')
    if signature is None:
        return Response(status_code=status.HTTP_403_FORBIDDEN, content='Authentication failed.')
    content = await request.body()
    # SECRET_TOKEN is defined elsewhere in the original code
    sha1 = hmac.new(bytes(SECRET_TOKEN, encoding="utf8"), content, 'sha1')
    calculated_signature = 'sha1=' + sha1.hexdigest()
    if calculated_signature != signature:
        return Response(status_code=status.HTTP_403_FORBIDDEN, content='Authentication failed.')
    return await call_next(request)

and the route hook

from typing import Any, Dict

@app.post('/hook')
def simple_hook(json: Dict[str, Any]):  # dict[str, Any] needs Python >= 3.9; Dict works on 3.7
    try:
        id = json['sender']['id']
        name = json['sender']['name']
        # eventName = json['eventName']

        print("[coding.net] %s(%d): %s. " % (name, id, 'eventName'))

    except KeyError as e:  # `Error` is not defined; missing keys raise KeyError
        print("ERROR: %s" % e)
        return {'code': -1}

    return {'code': 0, 'message': 'done!'}

Everything was fine without the signature middleware above. Any hints?
Thank you for your work and for any reply.

@Kludex
Sponsor Collaborator

Kludex commented Mar 1, 2022

More about your issue: encode/starlette#1441

@ryuujo1573

Subscribed, thank you.

Repository owner locked and limited conversation to collaborators Feb 28, 2023
@tiangolo tiangolo converted this issue into discussion #8603 Feb 28, 2023

This issue was moved to a discussion.

You can continue the conversation there. Go to discussion →
