To make the best use of the GPU, my inputs should be processed in batches (a speedup from ~20 hours to ~2 hours). So I want to run the function on the inputs in batches and manually update the cache with the returned values, so that later processes that take them one by one would pull from the cache. I am requesting a public API for this.
```python
from joblib import Memory

mem = Memory('.cache')
f = mem.cache(expensive_operation)

ids = f._get_output_identifiers(*args, **kwargs)

# Check if in cache
f._is_in_cache_and_valid(ids)

# Store in cache
f.store_backend.dump_item(ids, f(*args, **kwargs), verbose=f._verbose)
```