Release Python GIL during WEBP encode #4433
Conversation
Is the codecov/project failure real? Can someone explain how to fix it? Is there anything I can do to improve the chances of this getting merged?
Thanks for the PR! Codecov has been a bit flaky of late; you could merge/rebase from master to include #4444, but it looks like the macOS coverage was uploaded fine from this PR. The Codecov links in the checks lead to HTTP 500 error pages, so here's a working link: https://codecov.io/gh/python-pillow/Pillow/commit/0d5e800d7d4a01076371fca3dbff2e4851cc848b It still says "Processing...", but I can see that your changes are covered. And hopefully someone will review this before the next release in April.
0d5e800 to 7df7cb2
A small suggestion to match the existing style.
Or how about using these macros?
Py_BEGIN_ALLOW_THREADS
... Do some blocking I/O operation ...
Py_END_ALLOW_THREADS
Which expands to:
PyThreadState *_save;
_save = PyEval_SaveThread();
... Do some blocking I/O operation ...
PyEval_RestoreThread(_save);
https://docs.python.org/3/c-api/init.html#releasing-the-gil-from-extension-code
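For illustration only, a wrapper using these macros might look like the sketch below. The function name do_webp_encode is a placeholder for whatever pure-C routine performs the compression; this is not the actual code in this PR. The key constraint is that no Python C API calls and no Python object accesses may happen between the two macros, since the GIL is not held there.

/* Hypothetical sketch: releasing the GIL around a blocking encode call. */
static PyObject *
encode_example(PyObject *self, PyObject *args)
{
    int ok;

    Py_BEGIN_ALLOW_THREADS
    ok = do_webp_encode();          /* blocking, pure-C work only */
    Py_END_ALLOW_THREADS

    if (!ok) {
        PyErr_SetString(PyExc_RuntimeError, "encode failed");
        return NULL;
    }
    Py_RETURN_NONE;
}

Any input buffers the C code reads while the GIL is released must be kept alive and unmodified by the caller for the duration of the call.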
Please could you share a small code snippet to demonstrate the performance difference?
Or actually,
Yes, I copied the exact use of the helpers (and the helpers themselves) from other code. Unfortunately, without copying those two small functions, the resulting

The tester:

import io
import time
from threading import Thread

from PIL import Image

full_path = 'Tests/images/fujifilm.mpo'
im = Image.open(full_path)
im.load()

def task(im, count, times):
    t0 = time.time()
    for i in range(count):
        bio_output = io.BytesIO()
        im.save(bio_output, format='WEBP')
    t1 = time.time()
    times.append(t1 - t0)

for thread_count in (1, 2, 4, 8):
    times = []
    threads = [Thread(target=task, args=(im, 1, times)) for i in range(thread_count)]
    for th in threads:
        th.start()
    for th in threads:
        th.join()
    mean = sum(times) / len(times)
    print("thread count: %d, avg time: %.2f, times: %s" % (thread_count, mean, times))
"""
output before fix:
thread count: 1, avg time: 1.39, times: [1.3878045082092285]
thread count: 2, avg time: 2.63, times: [2.6363396644592285, 2.6253771781921387]
thread count: 4, avg time: 4.42, times: [2.733854293823242, 4.041334390640259, 5.4517176151275635, 5.4615771770477295]
thread count: 8, avg time: 7.48, times: [1.430527925491333, 5.566206932067871, 5.586191415786743, 5.631235361099243, 9.46205759048462, 10.74857234954834, 10.731115579605103, 10.674172639846802]
output after fix:
thread count: 1, avg time: 1.50, times: [1.499457597732544]
thread count: 2, avg time: 1.38, times: [1.3706238269805908, 1.3869695663452148]
thread count: 4, avg time: 1.50, times: [1.4900705814361572, 1.5012726783752441, 1.4493658542633057, 1.5475354194641113]
thread count: 8, avg time: 2.30, times: [2.192249059677124, 2.197439670562744, 2.2884106636047363, 2.3436989784240723, 2.3442444801330566, 2.3294525146484375, 2.2780611515045166, 2.416266441345215]
""" |
Ah yes, the others are all inside the
And I get somewhat similar results on a dual-core Mac:
Changes proposed in this pull request: release the GIL during WebP encoding.
I don't know whether this patch is the right way to do it, but it works for me and it makes WebP encoding scale across threads, so even if the patch itself is bad, please consider the feature.