position argument implementation #1000
I think the same or a similar problem has existed for years already: #285
Indeed, that seems to be the same issue. However, I think it was misdiagnosed in #285: the reason is not parallelism (as my example, which is completely sequential, shows) but the implementation with the above-mentioned newline.
Hi! It would be nice if you could confirm (or deny) that #1054 fixes the issue.
I still have the same issue in tqdm 4.58.0. Currently, tqdm works incorrectly with concurrent multiple bars. My code example:

```python
from concurrent.futures import ThreadPoolExecutor, wait
from threading import Semaphore
import time
import random

from tqdm import tqdm


def worker(pos, sem):
    t = random.random() * 0.05
    with sem:
        # for _ in tqdm(range(100), desc=f'pos {pos}', position=pos):
        for _ in tqdm(range(100), desc=f'pos {pos}'):
            time.sleep(t)


def main():
    with ThreadPoolExecutor() as executor:
        sem = Semaphore(3)
        # sem = Semaphore(10)
        futures = []
        for pos in range(10):
            future = executor.submit(worker, pos, sem)
            futures.append(future)
        wait(futures)


if __name__ == '__main__':
    main()
```

I have checked with semaphore and without, also with …
I have the same issue as @espdev.
I tried `tq = tqdm(file_list)`, `tq = tqdm(file_list, position=tqdm_position)`, and `tq = tqdm(file_list, leave=False, position=tqdm_position)`. Is there really no perfect solution?
@wkingnet, I'm not 100% sure, because the code snippet you shared doesn't contain imports, so this is only a guess. The problem is that Python (at least, by default) does not share memory between processes. Apparently (according to @espdev's report), the PR doesn't solve the problem completely, but, unfortunately, I don't have time to investigate it now.
Sorry, I forgot the imports. Here they are:
I'm trying to use … And my program must use processes, not threads. Multiprocessing can speed up the program's processing capacity and shorten the time used; although multithreading seems to be effective, the total processing time is not actually shortened. But this is not a big problem, I can live with it. Thank you very much.
I think the bars jumping around is caused by the logic of … Example:

Then suppose …

But the real case is not like that. Real case:

Code to verify the above assumption:

A possible solution is to change the …
Another, faster workaround is to close the bars in reverse order, so that closing one bar does not change the position of the bars above it.
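The original workaround code did not survive extraction; the sketch below, under the assumption that the bars are created up front with fixed positions, illustrates the reversed-order close idea: closing the bottom-most bar first means the newline printed on close never shifts bars that are still open above it.

```python
from tqdm import tqdm

# Three bars pinned to fixed positions (illustrative example, not the
# original snippet from this thread)
bars = [tqdm(total=100, desc=f'pos {i}', position=i) for i in range(3)]

for _ in range(100):
    for bar in bars:
        bar.update(1)

# Close in reverse order (highest position first), so the newline each
# close prints cannot move a bar that is still open above it.
for bar in reversed(bars):
    bar.close()
```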
@kolayne Hi, please check whether my investigation and solutions above are correct.
Thank you very much for your help and research. Because my programming level is only entry level, I need some more time to test your code, and unfortunately I don't have much time to finish this work now. I think the cause of the error you pointed out is correct, so the fix should work correctly.
@BenjaminChoou Thanks for the `RLock()` solution, it works perfectly with my multiprocessing code!
@BenjaminChoou, I'm sincerely sorry for such a long delay.
This makes sense, and it is likely to be the case. Although this is not explicitly coded, I think the problem is in the following line: Line 1307 in 140c948
When a bar that shouldn't be cleared gets closed, tqdm immediately prints a newline, so all the remaining bars move below.
Yes, it is. But the easier way, as far as I can see, is to slightly alter the way bars get closed (that's basically what is done in #1054). You can verify the problem is fixed by running the snippet you've provided with the version of …
Looks like you're right. The only snippet I have that doesn't currently work as I expect has a bug when selecting positions for new bars, but it works fine with old ones (unless multiprocessing is used, which I'm going to work on later). If you have any other scripts which don't work with it, please post them in #1054!
If your use-case is about the same as mine, this workaround might help you in the meantime: https://gist.github.com/NiklasBeierl/13096bfdd8b2084da8c1163dd06f91d3
@NiklasBeierl, your workaround runs nicely on Linux. Thanks!
I've found another potential solution. What I did was simply wait for all threads/processes to finish before closing the progress bars. This seems to have fixed the issue: each progress bar reaches 100% and stays wherever it was, instead of jumping to the top.
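A sketch of that approach, under the assumption that worker threads update bars created up front (the names `bars` and `work` are illustrative, not from the original post): every thread finishes before any bar is closed.

```python
import time
from concurrent.futures import ThreadPoolExecutor, wait

from tqdm import tqdm

# Create all bars up front, one fixed position per worker
bars = [tqdm(total=50, desc=f'worker {i}', position=i) for i in range(4)]

def work(bar):
    for _ in range(50):
        time.sleep(0.002)
        bar.update(1)

with ThreadPoolExecutor() as ex:
    futures = [ex.submit(work, b) for b in bars]
    wait(futures)  # every thread is done before any bar is closed

# Only now close the bars, so no bar jumps while others are still running
for bar in bars:
    bar.close()
```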
The initial reproducer creates consecutive progress bars, where the next starts only after the preceding bar has been closed and cleaned up. This makes the position argument meaningless.

The position argument only works if there are multiple progress bars active concurrently; position n is relative to active progress bar 0, and any closed progress bars sit above progress bar 0. So if all you have is closed progress bars and you position something at, say, pos=2, then there will be two empty lines between the new progress bar and the ones already closed, until the new progress bar is itself exhausted and closed, and the display is rearranged: the closed bar moves to what was position 0, and the remaining active bars move one line down as needed (so position 0 is now on the line below the most recently closed bar).

You can't avoid closing when using …
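A minimal illustration of the point above (a sketch, not code from the thread): positions are counted relative to the currently active bars, not to bars that have already been closed, so a lone bar at `position=2` leaves blank lines above itself.

```python
import time

from tqdm import tqdm

# First bar runs to completion and is closed before the next one starts
for _ in tqdm(range(50), desc='first'):
    time.sleep(0.002)

# By now there are no active bars left, so position is counted from the
# current cursor line: position=2 leaves two blank lines above the new bar
count = 0
for _ in tqdm(range(50), desc='second', position=2):
    time.sleep(0.002)
    count += 1
```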
@haitao-git I have an updated demo. You can test it on Windows and Linux:

```python
from multiprocessing.pool import Pool
from multiprocessing import Manager
from time import sleep
from random import randrange

from tqdm import tqdm

# You could replace that with multiprocessing.cpu_count()
PROCESSES = 20


def get_pgb_pos(shared_list):
    # Acquire the shared tqdm lock and grab a free progress bar slot;
    # without the lock, two processes could claim the same slot
    with tqdm.get_lock():
        for i in range(PROCESSES):
            if shared_list[i] == 0:
                shared_list[i] = 1
                return i


def release_pgb_pos(shared_list, slot):
    shared_list[slot] = 0


def do_work(args):
    package_number, shared_list = args
    pgb_pos = get_pgb_pos(shared_list)
    try:
        for _ in tqdm(
            range(10),
            total=10,
            desc=f"Work package: {package_number}",
            # +1 so we do not overwrite the overall progress bar
            position=pgb_pos + 1,
            leave=False,
        ):
            sleep(randrange(1, 3) / 10)
    finally:
        release_pgb_pos(shared_list, pgb_pos)
    result = package_number
    return package_number, result


if __name__ == "__main__":
    # This list is shared among all processes and lets them keep track of
    # which tqdm "positions" are occupied / free.
    manager = Manager()
    shared_list = manager.list([0] * PROCESSES)
    lock = manager.Lock()
    work_packages = [(i, shared_list) for i in range(100)]
    results = [None] * len(work_packages)
    with Pool(PROCESSES, initializer=tqdm.set_lock, initargs=(lock,)) as p:
        for package_number, result in tqdm(
            # imap_unordered yields each result as soon as it is computed,
            # instead of blocking until results can be yielded in order,
            # giving a more accurate progress display
            p.imap_unordered(do_work, work_packages),
            total=len(work_packages),
            position=0,
            desc="Work packages completed",
            leave=True,
        ):
            results[package_number] = result
```

Also, you can try a repo using this method.
I have been having trouble with specifying the position argument for tqdm for a block of 8 bars. Consider the following code:
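The original snippet did not survive extraction; a minimal sequential reproduction consistent with the description (eight bars, each given an explicit position) might look like the following sketch:

```python
import time

from tqdm import tqdm

# Eight consecutive bars, each pinned to its own position
# (illustrative reconstruction, not the original code)
total_iterations = 0
for pos in range(8):
    for _ in tqdm(range(20), desc=f'bar {pos}', position=pos):
        time.sleep(0.001)
        total_iterations += 1
```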
This is the output:

As you can see, the progress bars are not output correctly. I believe the issue is here:

tqdm/tqdm/std.py, lines 1293 to 1294 in 15c5c51

The problem is that when tqdm closes, the output is always at position 0 regardless of the value of `pos`. Another issue is that a newline `\n` is output. This is problematic because it means that the position for the next `tqdm` is no longer correct, since the cursor is no longer at the beginning of the block.

Clearly this example is simplistic, since the position argument is unnecessary. However, it illustrates an unavoidable problem when using threading or multiprocessing.
One way to fix this would be to change `std.py:1293` to `self.display(pos=pos)`, and to have a tqdm command that updates the output at the end of a tqdm block so that an appropriate number of newlines is output.