Calling get() on a synchronous chain result triggers E_WOULDBLOCK #6072

Open
10 of 18 tasks
neuroid opened this issue May 4, 2020 · 5 comments · May be fixed by #8070

neuroid commented May 4, 2020

Checklist

  • I have verified that the issue exists against the master branch of Celery.
  • This has already been asked to the discussion group first.
  • I have read the relevant section in the contribution guide on reporting bugs.
  • I have checked the issues list for similar or identical bug reports.
  • I have checked the pull requests list for existing proposed fixes.
  • I have checked the commit log to find out if the bug was already fixed in the master branch.
  • I have included all related issues and possible duplicate issues in this issue (If there are none, check this box anyway).

Mandatory Debugging Information

  • I have included the output of celery -A proj report in the issue (if you are not able to do this, then at least specify the Celery version affected).
  • I have verified that the issue exists against the master branch of Celery.
  • I have included the contents of pip freeze in the issue.
  • I have included all the versions of all the external dependencies required to reproduce this bug.

Optional Debugging Information

  • I have tried reproducing the issue on more than one Python version and/or implementation.
  • I have tried reproducing the issue on more than one message broker and/or result backend.
  • I have tried reproducing the issue on more than one version of the message broker and/or result backend.
  • I have tried reproducing the issue on more than one operating system.
  • I have tried reproducing the issue on more than one workers pool.
  • I have tried reproducing the issue with autoscaling, retries, ETA/Countdown & rate limits disabled.
  • I have tried reproducing the issue after downgrading and/or upgrading Celery and its dependencies.

Related Issues and Possible Duplicates

Related Issues

Possible Duplicates

Environment & Settings

Celery version:

celery report Output:

software -> celery:4.4.2 (cliffs) kombu:4.6.8 py:2.7.17
            billiard:3.6.3.0 py-amqp:2.5.2
platform -> system:Darwin arch:64bit
            kernel version:19.4.0 imp:CPython
loader   -> celery.loaders.default.Loader
settings -> transport:amqp results:disabled

Steps to Reproduce

Required Dependencies

  • Minimal Python Version: N/A or Unknown
  • Minimal Celery Version: N/A or Unknown
  • Minimal Kombu Version: N/A or Unknown
  • Minimal Broker Version: N/A or Unknown
  • Minimal Result Backend Version: N/A or Unknown
  • Minimal OS and/or Kernel Version: N/A or Unknown
  • Minimal Broker Client Version: N/A or Unknown
  • Minimal Result Backend Client Version: N/A or Unknown

Python Packages

pip freeze Output:

amqp==2.5.2
billiard==3.6.3.0
celery==4.4.2
configparser==4.0.2
contextlib2==0.6.0.post1
importlib-metadata==1.6.0
kombu==4.6.8
pathlib2==2.3.5
pytz==2020.1
scandir==1.10.0
six==1.14.0
vine==1.3.0
zipp==1.2.0

Other Dependencies

N/A

Minimally Reproducible Test Case

from celery import signature

# `app` is assumed to be the project's Celery application instance.

@app.task
def task(s):
    # Run the given signature eagerly in this process and fetch its result.
    signature(s).apply().get(disable_sync_subtasks=False)


@app.task
def task2():
    pass


@app.task
def task3():
    pass


# works
task.delay(task2.s())

# results in RuntimeError(E_WOULDBLOCK)
task.delay(task2.s() | task3.s())

Expected Behavior

Both scheduled tasks should succeed.

Actual Behavior

The second invocation fails with the following error:

[2020-05-04 16:42:29,038: ERROR/ForkPoolWorker-7] [-] Task foo.task[44640f0b-c9cc-45ba-b082-f100eec77ee0] raised unexpected: RuntimeError(u'Never call result.get() within a task!\nSee http://docs.celeryq.org/en/latest/userguide/tasks.html#task-synchronous-subtasks\n',)
Traceback (most recent call last):
  File "/env/lib/python2.7/site-packages/celery/app/trace.py", line 411, in trace_task
    R = retval = fun(*args, **kwargs)
  File "/env/lib/python2.7/site-packages/celery/app/trace.py", line 680, in __protected_call__
    return self.run(*args, **kwargs)
  File "/foo/tasks/__init__.py", line 18, in task
    signature(s).apply().get(disable_sync_subtasks=False)
  File "/env/lib/python2.7/site-packages/celery/canvas.py", line 808, in apply
    last and (last.get(),), **dict(self.options, **options))
  File "/env/lib/python2.7/site-packages/celery/result.py", line 1027, in get
    assert_will_not_block()
  File "/env/lib/python2.7/site-packages/celery/result.py", line 43, in assert_will_not_block
    raise RuntimeError(E_WOULDBLOCK)
RuntimeError: Never call result.get() within a task!
See http://docs.celeryq.org/en/latest/userguide/tasks.html#task-synchronous-subtasks

As a side note, I'm not entirely sure why it is necessary to pass disable_sync_subtasks to EagerResult.get() at all. Since the code is executed in the same process, there doesn't seem to be much potential for anything to block.
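For reference, the guard only consults a process-wide "currently executing a task" flag that the worker sets around each task invocation; it never looks at whether the result being joined is eager. A minimal sketch of that logic, using assumed names rather than the verbatim Celery internals:

# Minimal sketch with assumed names; not the verbatim Celery source.
_inside_task = True  # in Celery the real flag lives in celery._state and is
                     # set by the worker for the duration of a task invocation

def assert_will_not_block():
    # The check only asks "are we running inside a task right now?";
    # it does not know (or care) whether the result being joined is eager.
    if _inside_task:
        raise RuntimeError("Never call result.get() within a task!")

class EagerResultSketch:
    """Stand-in for EagerResult: the value is already computed."""
    def __init__(self, value):
        self._value = value

    def get(self, disable_sync_subtasks=True):
        if disable_sync_subtasks:   # despite the name, True means "do the check"
            assert_will_not_block()
        return self._value          # returning it can never block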

neuroid (Author) commented May 4, 2020

As a workaround I'm using with allow_join_result(): instead of disable_sync_subtasks.
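A minimal sketch of that workaround applied to the task from the reproduction case above (allow_join_result comes from celery.result; app is assumed to be the same Celery application as before). The context manager also covers the get() that chain.apply() performs internally on the intermediate result (visible in the traceback at canvas.py), which is why passing disable_sync_subtasks=False to the outer get() alone does not help for the chain case:

from celery import signature
from celery.result import allow_join_result

@app.task
def task(s):
    with allow_join_result():
        # Every get() inside this block is allowed, including the one that
        # chain.apply() calls on the intermediate result.
        signature(s).apply().get()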

auvipy added this to the 4.4.x milestone May 5, 2020
auvipy (Member) commented Jan 27, 2021

As a workaround I'm using with allow_join_result(): instead of disable_sync_subtasks.

did you check https://stackoverflow.com/questions/33280456/calling-async-result-get-from-within-a-celery-task/39975099 ?

auvipy (Member) commented Jan 27, 2021

As a workaround I'm using with allow_join_result(): instead of disable_sync_subtasks.

did you check https://stackoverflow.com/questions/33280456/calling-async-result-get-from-within-a-celery-task/39975099 ?

#3498 (comment)

YPCrumble commented

@auvipy I'm coming up against this when calling get() on an EagerResult.

In particular, I'm doing something like this, which throws the error:

eager_result = some_task.apply()
result = eager_result.get()

It seems odd to get the error RuntimeError: Never call result.get() within a task!, because here get() is being called on a task that was invoked with the synchronous apply() syntax. In particular, the docs describe an EagerResult as a "Result that we know has already been executed."
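For what it's worth, both opt-outs also work for a plain EagerResult; a minimal sketch, assuming some_task is the task from the snippet above:

from celery.result import allow_join_result

# Option 1: skip the check for this one call.
result = some_task.apply().get(disable_sync_subtasks=False)

# Option 2: allow joining results anywhere inside the block.
with allow_join_result():
    result = some_task.apply().get()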

I'm also wondering: wouldn't it make sense to remove the two WARNING boxes about waiting for tasks within a task, which seem to apply only to AsyncResult, given that an EagerResult is executed synchronously?

I'm adding this comment because it's possible I'm completely misunderstanding how Celery works, especially with respect to EagerResult. If that's the case, I would love to help update the docs to clarify this.

Thank you for maintaining Celery!

auvipy (Member) commented Feb 16, 2023

If you can come up with a PR, it would be easier for me to check and verify. Can you please do that?
