Fixed TransactionalMapProxy with Near Cache #13385

Closed
wants to merge 2 commits

Conversation

Contributor

@Donnerbart Donnerbart commented Jun 29, 2018

The Near Cache key for a TransactionalMapProxy was not deserialized when it was in binary format, which happens when a Hazelcast client is used.

Another problem was that the Near Cache key was created up front, even if it was never used. Especially with this fix, that could cause a needless deserialization when no Near Cache is configured.

The order of read-only calls is:

  • read value from internal TX map (always binary key)
  • read value from Near Cache (binary/object key)
  • read value from remote (always binary key)

The Near Cache key creation has been moved behind the internal TX map lookup, so the key is not created if it is not used at all. It reuses the original and the binary key, to prevent a duplicated (de)serialization.
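A minimal, self-contained sketch of that read-path ordering, using simplified stand-in names and types (ReadPathSketch, String "binary" keys, nearCacheEnabled, serializeKeys, remoteGet) rather than the actual proxy code:

    import java.util.HashMap;
    import java.util.Map;

    // Simplified stand-in for the read path; not the actual Hazelcast proxy code.
    class ReadPathSketch {
        final Map<String, Object> internalTxMap = new HashMap<>(); // always keyed by the binary key
        final Map<Object, Object> nearCache = new HashMap<>();     // keyed by binary or object key
        final boolean nearCacheEnabled = true;
        final boolean serializeKeys = false;

        Object get(Object key, String binaryKey) {
            // 1. internal TX map lookup always uses the binary key
            Object txValue = internalTxMap.get(binaryKey);
            if (txValue != null) {
                return txValue;
            }
            // 2. the Near Cache key is created only after the TX map misses,
            //    reusing the original and binary key (no duplicated (de)serialization)
            if (nearCacheEnabled) {
                Object nearCacheKey = serializeKeys ? binaryKey : key;
                Object cached = nearCache.get(nearCacheKey);
                if (cached != null) {
                    return cached;
                }
            }
            // 3. the remote read always uses the binary key
            return remoteGet(binaryKey);
        }

        Object remoteGet(String binaryKey) {
            return null; // stand-in for the remote member call
        }
    }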

The order of write/remove calls is:

  • remove the value (always binary key)
  • invalidate the Near Cache (binary/object key)

The Near Cache key creation has been moved into the invalidation method, so it happens behind all checks whether the Near Cache is enabled. This prevents the key creation if it's not used at all.
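A matching sketch of the write/remove path, building on the stand-in class above (again illustrative, not the actual proxy code):

    // Simplified stand-in for the remove path; extends the read-path sketch above.
    class RemovePathSketch extends ReadPathSketch {

        Object remove(Object key, String binaryKey) {
            // 1. remove the value, always with the binary key
            Object removed = internalTxMap.remove(binaryKey);
            // 2. invalidate the Near Cache; the Near Cache key is only created inside
            //    this method, behind the "is a Near Cache configured?" check
            invalidateNearCache(key, binaryKey);
            return removed;
        }

        void invalidateNearCache(Object key, String binaryKey) {
            if (!nearCacheEnabled) {
                return;
            }
            Object nearCacheKey = serializeKeys ? binaryKey : key;
            nearCache.remove(nearCacheKey);
        }
    }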

Fixes #13371

One of the two commits additionally removes an indirection, which makes inlining easier for the JVM and shortens the code a bit.
@Donnerbart Donnerbart requested a review from pveentjer July 3, 2018 10:00
    context.beginTransaction();
    try {
        TransactionalMap<Object, Object> txnMap = context.getMap(mapName);
        assertNull("Expected null for a non-existent key", txnMap.get("key"));
Contributor Author


The keys of these calls are passed as Data to the TransactionalMapProxy, which broke the Near Cache lookup before this fix (the key was not deserialized).
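For context, a hedged sketch of how such a transactional get typically originates from a client (Hazelcast 3.x package layout assumed; map name and key are placeholders): the key is serialized on the client side, so the member-side proxy only ever sees it in binary (Data) format.

    import com.hazelcast.client.HazelcastClient;
    import com.hazelcast.core.HazelcastInstance;
    import com.hazelcast.core.TransactionalMap;
    import com.hazelcast.transaction.TransactionContext;

    public class ClientTxnGetExample {
        public static void main(String[] args) {
            HazelcastInstance client = HazelcastClient.newHazelcastClient();
            TransactionContext context = client.newTransactionContext();
            context.beginTransaction();
            try {
                TransactionalMap<Object, Object> txnMap = context.getMap("myMap");
                // the key is serialized by the client, so it reaches the member as Data
                Object value = txnMap.get("key");
                System.out.println(value);
                context.commitTransaction();
            } catch (Throwable t) {
                context.rollbackTransaction();
            } finally {
                client.shutdown();
            }
        }
    }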

    if (!nearCacheEnabled) {
        return null;
    }
    return serializeKeys ? dataKey : ss.toObject(key);
Contributor Author

@Donnerbart Donnerbart Jul 6, 2018


ss.toObject(key) is the fix for the Data keys coming from Hazelcast clients. The deserialization is only executed when a Near Cache is configured (since the other txn lookups need the Data key).
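For reference, the serializeKeys flag checked here comes from the Near Cache configuration; a minimal member-side config sketch (map name is a placeholder):

    import com.hazelcast.config.Config;
    import com.hazelcast.config.MapConfig;
    import com.hazelcast.config.NearCacheConfig;
    import com.hazelcast.core.Hazelcast;

    public class NearCacheSerializeKeysExample {
        public static void main(String[] args) {
            NearCacheConfig nearCacheConfig = new NearCacheConfig()
                    // keep Near Cache keys in binary (Data) format, so no toObject() call is needed
                    .setSerializeKeys(true);
            Config config = new Config()
                    .addMapConfig(new MapConfig("myMap").setNearCacheConfig(nearCacheConfig));
            Hazelcast.newHazelcastInstance(config);
        }
    }

With serialize-keys left disabled, the Near Cache stores object-format keys, which is why ss.toObject(key) is needed for binary keys.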

Contributor Author

Donnerbart commented Jul 12, 2018

I ran some benchmarks to validate the fix. I also created a "bad fix", which simply adds toObject() to the toNearCacheKeyWithStrategy() method.

[benchmark chart: txnmap-object-nonearcache]
As expected, there is almost no difference in this scenario. The key is already in OBJECT format, so we don't pay much for creating an OBJECT Near Cache key in the master branch, even though it's not used. We clearly see the impact of the expensive Portable serialization, since we have to create a BINARY key for the txMap lookup.

[benchmark chart: txnmap-object-withnearcache]
Again, there is not much difference with the Near Cache enabled when the key is already in OBJECT format. The master branch was quite fast with the PORTABLE_LONG_STRING key, which is a bit suspicious though.

[benchmark chart: txnmap-binary-nonearcache]
Here we see the impact of the bad fix, which creates an OBJECT key that is not used. The proper fix is very comparable to the master branch, because we don't create the OBJECT key for the Near Cache in this scenario.

[benchmark chart: txnmap-binary-withnearcache]
This is the scenario that is broken in master and fixed by this PR, so the numbers for the master branch are not 100% reliable, since exceptions are being thrown. And of course the fix is not for free, since we have to deserialize the key for the Near Cache lookup. The good performance of the "bad fix" is again a bit suspicious, but this may also be a result of my benchmark.

The benchmark uses a txMap with and without a Near Cache. I had to run with a single thread, since you cannot create multiple TransactionContexts on the same map at the same time, and I didn't want to measure too much overhead from creating unique maps during the test. Also, the test performs 4 operations on the TransactionContext, which might explain the outliers we have seen in the results:

    @Benchmark
    @Threads(THREAD_COUNT)
    @Warmup(iterations = WARMUP_ITERATIONS)
    public void txnMapWithObjectKey(GlobalHZState state, ThreadState threadState, Blackhole blackhole) {
        int i = threadState.getRandomIndex(state.randomIndex);
        Object key = state.keys[i];
        int value = threadState.getNewValue();
        blackhole.consume(state.txnMap.get(key));
        blackhole.consume(state.txnMap.put(key, value));
        blackhole.consume(state.txnMap.get(key));
        blackhole.consume(state.txnMap.remove(key));
    }

The other test variations use state.dataKeys[i] and state.txnNearCacheMap to create the shown scenarios.
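A hypothetical sketch of one of those variations (binary keys against the Near Cache-backed transactional map); the field and method names simply mirror the snippet above and are assumptions, not the actual benchmark code:

    @Benchmark
    @Threads(THREAD_COUNT)
    @Warmup(iterations = WARMUP_ITERATIONS)
    public void txnNearCacheMapWithDataKey(GlobalHZState state, ThreadState threadState, Blackhole blackhole) {
        int i = threadState.getRandomIndex(state.randomIndex);
        Object key = state.dataKeys[i];      // key already serialized to Data (binary format)
        int value = threadState.getNewValue();
        blackhole.consume(state.txnNearCacheMap.get(key));
        blackhole.consume(state.txnNearCacheMap.put(key, value));
        blackhole.consume(state.txnNearCacheMap.get(key));
        blackhole.consume(state.txnNearCacheMap.remove(key));
    }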

So the benchmark could definitely be improved, but I think it's good enough to prove that this fix is doing the right thing and doesn't add any unexpected latency.

@ahmetmircik
Member

@Donnerbart as far as I see, there is a fundamental problem here: a server-side near-cached proxy object is being called during a client request. This does not seem expected, because a Near Cache should be application specific, but in this case a remote client is using a server-side Near Cache (this is not the case for a non-transactional IMap, for example). Maybe a proper fix should eliminate Near Cache usage entirely when the proxy is used by a client request? WDYT?

@Donnerbart
Contributor Author

Hmm, good point. I can close this PR in favor of a proper fix. Feel free to pick up any code change from this PR, if it's helpful.

@Donnerbart Donnerbart closed this Jul 31, 2018
@mmedenjak mmedenjak added the "Source: Internal" (PR or issue was opened by an employee) label on Apr 13, 2020
Labels
Source: Internal (PR or issue was opened by an employee), Type: Defect, Type: Perf. Defect
Projects
None yet
Development

Successfully merging this pull request may close these issues.

TransactionalMapProxySupport fails for enabled NearCache
4 participants