
Optimization: resizable workers array #3137

Merged

merged 6 commits into develop from resizable-workers-array on Jan 17, 2022
Conversation

elizarov
Contributor

Instead of allocating an array of maxPoolSize (~2M) elements up front for the worst-case supported scenario, which may never be reached in practice and takes considerable memory, allocate an array of only corePoolSize elements and grow it dynamically as more workers are needed.

The data structure that makes this possible must support lock-free reads for performance reasons, but it is simple, since the workers array is modified exclusively under synchronization.
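For illustration, here is a minimal sketch of such a data structure, assuming a design along these lines rather than quoting the actual kotlinx.coroutines source: reads are a single volatile load of the array reference, while all writes (including growth) happen only under the scheduler's lock. The class and member names below (ResizableWorkerArray, setSynchronized) are illustrative.

```kotlin
import java.util.concurrent.atomic.AtomicReferenceArray

/**
 * Sketch of a growable array with lock-free reads.
 * Invariant: all writes (including growth) happen while the caller holds
 * the scheduler's lock, so there is at most one writer at any time.
 * A reader racing with growth may still observe the old array and miss a
 * freshly added element; callers must tolerate null results.
 */
internal class ResizableWorkerArray<T>(initialCapacity: Int) {
    @Volatile
    private var array = AtomicReferenceArray<T?>(initialCapacity)

    fun currentLength(): Int = array.length()

    // Lock-free read: one volatile load of the array reference plus an indexed get.
    fun get(index: Int): T? {
        val current = array
        return if (index < current.length()) current.get(index) else null
    }

    // Must be called with the scheduler's lock held.
    fun setSynchronized(index: Int, value: T?) {
        val current = array
        val length = current.length()
        if (index < length) {
            current.set(index, value) // visible to lock-free readers immediately
            return
        }
        // Grow: copy into a larger array, write the new element,
        // then publish everything with a single volatile write of the reference.
        val larger = AtomicReferenceArray<T?>(maxOf(index + 1, length * 2))
        for (i in 0 until length) larger.set(i, current.get(i))
        larger.set(index, value)
        array = larger
    }
}
```

Growing by at least a factor of two keeps the amortized cost of resizing low, and because workers are only ever added under the lock, no CAS loop is needed on the write path.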

elizarov and others added 2 commits January 14, 2022 12:04
Co-authored-by: dkhalanskyjb <52952525+dkhalanskyjb@users.noreply.github.com>
qwwdfsad merged commit a1f5ab8 into develop Jan 17, 2022
qwwdfsad deleted the resizable-workers-array branch January 17, 2022 19:03
yorickhenning pushed a commit to yorickhenning/kotlinx.coroutines that referenced this pull request Jan 28, 2022
elizarov referenced this pull request Feb 4, 2022
….IO unbounded for limited parallelism (#2918)

* Introduce CoroutineDispatcher.limitedParallelism for granular concurrency control

* Elastic Dispatchers.IO:

    * Extract Ktor-obsolete API to a separate file for backwards compatibility
    * Make Dispatchers.IO a slice of the unlimited blocking scheduler
    * Make Dispatchers.IO.limitedParallelism take slices from the same internal scheduler

Fixes #2943
Fixes #2919
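As a usage illustration of the limitedParallelism API described in the quoted commit message (the dispatcher names and parallelism values below are hypothetical), several limited views can be carved out of Dispatchers.IO without creating new threads, because they all dispatch onto the same underlying blocking scheduler:

```kotlin
import kotlinx.coroutines.*

// Hypothetical views of Dispatchers.IO: each view caps its own parallelism,
// but both dispatch onto threads of the same underlying blocking scheduler,
// so no dedicated thread pools are created.
// On coroutines versions where the API is still experimental, this may
// require @OptIn(ExperimentalCoroutinesApi::class).
val databaseDispatcher = Dispatchers.IO.limitedParallelism(4)
val networkDispatcher = Dispatchers.IO.limitedParallelism(64)

fun main() = runBlocking {
    repeat(8) { i ->
        launch(databaseDispatcher) {
            // At most 4 of these blocking tasks run at the same time.
            println("db task $i on ${Thread.currentThread().name}")
        }
    }
}
```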
slavonnet mentioned this pull request Feb 27, 2022
dee-tree pushed a commit to dee-tree/kotlinx.coroutines that referenced this pull request Jul 21, 2022
pablobaxter pushed a commit to pablobaxter/kotlinx.coroutines that referenced this pull request Sep 14, 2022