
Fix the usage of CacheIteratorHelper for service account #75510

Merged
4 commits merged into elastic:master on Jul 28, 2021

Conversation

@ywangd (Member) commented Jul 20, 2021

CacheIteratorHelper requires lock acquisition for any mutation to the
underlying cache, which means it is incorrect to manipulate the cache
without first invoking CacheIteratorHelper#acquireUpdateLock. This is
fine for caches of simple values, but feels excessive for caches of
ListenableFuture.
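
For context, here is a minimal sketch of the locking discipline described above, written against plain JDK types rather than the actual Elasticsearch CacheIteratorHelper; the class name, lock split, and method signatures below are illustrative assumptions, not the real API:

```java
// Illustrative sketch only, not the real CacheIteratorHelper. The invariant
// mirrored here is the one from the paragraph above: every cache mutation
// must be wrapped in acquireUpdateLock(), otherwise it can race with
// removal-while-iterating.
import java.util.Iterator;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.locks.ReentrantReadWriteLock;
import java.util.function.Predicate;

final class LockedCacheSketch<K, V> {
    private final ConcurrentHashMap<K, V> cache = new ConcurrentHashMap<>();
    private final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();

    /** Every mutation must go through this lock; skipping it is the misuse the PR removes. */
    public AutoCloseable acquireUpdateLock() {
        lock.readLock().lock();                  // many updaters may proceed concurrently
        return () -> lock.readLock().unlock();
    }

    public void put(K key, V value) {
        try (AutoCloseable ignored = acquireUpdateLock()) {
            cache.put(key, value);
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    /** Removal-while-iterating takes the exclusive side so no update can interleave. */
    public void removeKeysIf(Predicate<K> predicate) {
        lock.writeLock().lock();
        try {
            Iterator<K> it = cache.keySet().iterator();
            while (it.hasNext()) {
                if (predicate.test(it.next())) {
                    it.remove();
                }
            }
        } finally {
            lock.writeLock().unlock();
        }
    }
}
```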

This PR updates the cache invalidation code to use Cache#forEach
instead of CacheIteratorHelper. It simplifies the code by removing all
explicit locking. The trade-off is that it must build an in-memory list
of the keys to delete. Overall this is a better trade-off: no explicit
locking is required, and it makes better use of Cache's own methods.
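
A minimal sketch of the new invalidation flow, assuming only the two cache operations the description relies on (a forEach walk and per-key invalidation); the SimpleCache interface and helper below are illustrative, not the actual service account token cache code:

```java
// Illustrative sketch of the forEach-then-invalidate pattern described above.
// SimpleCache stands in for the cache operations mentioned in the text and is
// not the actual Elasticsearch Cache class.
import java.util.ArrayList;
import java.util.Collection;
import java.util.List;
import java.util.function.BiConsumer;

final class ForEachInvalidationSketch {

    interface SimpleCache<K, V> {
        void forEach(BiConsumer<K, V> consumer);   // visit current entries
        void invalidate(K key);                    // remove a single entry
    }

    /**
     * Collect matching keys into an in-memory list first (the trade-off named
     * above), then invalidate them one by one. No explicit lock is taken;
     * thread safety is left entirely to the cache's own methods.
     */
    static <K, V> void invalidate(SimpleCache<K, V> cache, Collection<K> keysToDelete) {
        final List<K> matched = new ArrayList<>();
        cache.forEach((key, value) -> {
            if (keysToDelete.contains(key)) {
                matched.add(key);
            }
        });
        matched.forEach(cache::invalidate);
    }
}
```

Buffering the keys avoids mutating the cache while iterating over it, which is exactly what made the explicit update lock necessary in the old approach.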

@ywangd requested a review from @tvernum on Jul 20, 2021 05:50
@elasticmachine added the Team:Security (Meta label for security team) label on Jul 20, 2021
@elasticmachine (Collaborator): Pinging @elastic/es-security (Team:Security)

@ywangd (Member, Author) commented Jul 20, 2021

I marked this PR as >non-issue because I'd like to get it into 7.14; a fix for a beta feature is a non-issue. But if it has to be 7.14.1, maybe it should be labelled as (a small) >bug.

@ywangd added the auto-backport (Automatically create backport pull requests when merged) label on Jul 28, 2021
@ywangd merged commit e2d98dc into elastic:master on Jul 28, 2021
elasticsearchmachine pushed a commit to elasticsearchmachine/elasticsearch that referenced this pull request Jul 28, 2021
elasticsearchmachine pushed a commit to elasticsearchmachine/elasticsearch that referenced this pull request Jul 28, 2021
@elasticsearchmachine (Collaborator): 💚 Backport successful
Branches: 7.14, 7.x

elasticsearchmachine added a commit that referenced this pull request Jul 28, 2021
…5765)

Co-authored-by: Yang Wang <yang.wang@elastic.co>
elasticsearchmachine added a commit that referenced this pull request Jul 28, 2021
…5766)

Co-authored-by: Yang Wang <yang.wang@elastic.co>
ywangd added a commit to ywangd/elasticsearch that referenced this pull request Jul 30, 2021
Labels: auto-backport (Automatically create backport pull requests when merged), >non-issue, :Security/Security (Security issues without another label), Team:Security (Meta label for security team), v7.14.0, v7.15.0, v8.0.0-alpha1
5 participants