{"payload":{"feedbackUrl":"https://github.com/orgs/community/discussions/53140","repo":{"id":97581720,"defaultBranch":"imr-hackaton","name":"scylla","ownerLogin":"denesb","currentUserCanPush":false,"isFork":true,"isEmpty":false,"createdAt":"2017-07-18T09:46:31.000Z","ownerAvatar":"https://avatars.githubusercontent.com/u/1389273?v=4","public":true,"private":false,"isOrgOwned":false},"refInfo":{"name":"","listCacheKey":"v0:1717167834.0","currentOid":""},"activityList":{"items":[{"before":null,"after":"c8bce22b87c65d8980a1e5fb9e8430dcf32614c6","ref":"refs/heads/repair-compaction-tombstone-gc-conf","pushedAt":"2024-05-31T15:03:54.000Z","pushType":"branch_creation","commitsCount":0,"pusher":{"login":"denesb","name":"Botond Dénes","path":"/denesb","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/1389273?s=80&v=4"},"commit":{"message":"replica/table: maybe_compact_for_streaming(): toggle tombstone GC based on the control flag\n\nNow enable_compacting_data_for_streaming_and_repair is wired in all the\nway to maybe_compact_for_streaming(), so we can implement the toggling\nof tombstone GC based on it.","shortMessageHtmlLink":"replica/table: maybe_compact_for_streaming(): toggle tombstone GC bas…"}},{"before":"532c50624dbef2f4dacd40503ca584c1c007e473","after":"eeda65502b9126eb96ba5accc2f0f29b16d333a3","ref":"refs/heads/rcs-n-concurrency","pushedAt":"2024-05-31T13:24:33.000Z","pushType":"force_push","commitsCount":0,"pusher":{"login":"denesb","name":"Botond Dénes","path":"/denesb","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/1389273?s=80&v=4"},"commit":{"message":"reader_concurrency_semaphore: wire in the configurable cpu concurrency\n\nBefore this patch, the semaphore was hard-wired to stop admission, if\nthere is even a single permit, which is in the need_cpu state.\nTherefore, keeping the CPU concurrency at 1.\nThis patch makes use of the new cpu_concurrency parameter, which was\nwired in in the last patches, allowing for a configurable amount 
of\nconcurrent need_cpu permits. This is to address workloads where some\nsmall subset of reads are expected to be slow, and can hold up faster\nreads behind them in the semaphore queue.","shortMessageHtmlLink":"reader_concurrency_semaphore: wire in the configurable cpu concurrency"}},{"before":null,"after":"936e96af4cd97f32061c8b1db4a6623db219cc88","ref":"refs/heads/tools-java-upd-6.0","pushedAt":"2024-05-30T12:35:54.000Z","pushType":"branch_creation","commitsCount":0,"pusher":{"login":"denesb","name":"Botond Dénes","path":"/denesb","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/1389273?s=80&v=4"},"commit":{"message":"Update tools/java submodule\n\n* tools/java 4ee15fd9...6dfc187a (1):\n > Update Scylla Java driver to 3.11.5.3.\n\n[botond: regenerate frozen toolchain]","shortMessageHtmlLink":"Update tools/java submodule"}},{"before":"85f333c99a69cd5eb767b3073506852b68878de6","after":"d71da944fadda84cdf7610ee65dae021f176038e","ref":"refs/heads/compaction-cell-stats","pushedAt":"2024-05-30T11:56:57.000Z","pushType":"force_push","commitsCount":0,"pusher":{"login":"denesb","name":"Botond Dénes","path":"/denesb","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/1389273?s=80&v=4"},"commit":{"message":"querier: consume_page(): add rate-limiting to tombstone warnings\n\nThese warnings can be logged once per query, which could result in\nfilling the logs with thousands of log lines.\nRate-limit to once per 10sec.","shortMessageHtmlLink":"querier: consume_page(): add rate-limiting to tombstone warnings"}},{"before":"b15c0a32834b4559fa1103b3f54305d012421156","after":"aee6a9e24a25a8e70109ae92a9942018f195b635","ref":"refs/heads/update-reclaim-threshold-default","pushedAt":"2024-05-30T07:34:51.000Z","pushType":"force_push","commitsCount":0,"pusher":{"login":"denesb","name":"Botond Dénes","path":"/denesb","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/1389273?s=80&v=4"},"commit":{"message":"db/config.cc: increment 
components_memory_reclaim_threshold config default\n\nIncremented the components_memory_reclaim_threshold config's default\nvalue to 0.2 as the previous value was too strict and caused unnecessary\neviction in otherwise healthy clusters.\n\nFixes #18607\n\nSigned-off-by: Lakshmi Narayanan Sreethar ","shortMessageHtmlLink":"db/config.cc: increment components_memory_reclaim_threshold config de…"}},{"before":"ecfdfd628b326b1d5616ea8c71b98cc3e014daa2","after":"b15c0a32834b4559fa1103b3f54305d012421156","ref":"refs/heads/update-reclaim-threshold-default","pushedAt":"2024-05-29T14:25:36.000Z","pushType":"force_push","commitsCount":0,"pusher":{"login":"denesb","name":"Botond Dénes","path":"/denesb","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/1389273?s=80&v=4"},"commit":{"message":"db/config.cc: increment components_memory_reclaim_threshold config default\n\nIncremented the components_memory_reclaim_threshold config's default\nvalue to 0.2 as the previous value was too strict and caused unnecessary\neviction in otherwise healthy clusters.\n\nFixes #18607\n\nSigned-off-by: Lakshmi Narayanan Sreethar ","shortMessageHtmlLink":"db/config.cc: increment components_memory_reclaim_threshold config de…"}},{"before":"8a77a74d0ec6b7e988c7a073e82226889c0bc8ba","after":"532c50624dbef2f4dacd40503ca584c1c007e473","ref":"refs/heads/rcs-n-concurrency","pushedAt":"2024-05-29T12:23:51.000Z","pushType":"push","commitsCount":3,"pusher":{"login":"denesb","name":"Botond Dénes","path":"/denesb","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/1389273?s=80&v=4"},"commit":{"message":"reader_concurrency_semaphore: wire in the configurable cpu concurrency\n\nBefore this patch, the semaphore was hard-wired to stop admission, if\nthere is even a single permit, which is in the need_cpu state.\nTherefore, keeping the CPU concurrency at 1.\nThis patch makes use of the new cpu_concurrency parameter, which was\nwired in in the last patches, allowing for a configurable amount 
of\nconcurrent need_cpu permits. This is to address workloads where some\nsmall subset of reads are expected to be slow, and can hold up faster\nreads behind them in the semaphore queue.","shortMessageHtmlLink":"reader_concurrency_semaphore: wire in the configurable cpu concurrency"}},{"before":null,"after":"ecfdfd628b326b1d5616ea8c71b98cc3e014daa2","ref":"refs/heads/update-reclaim-threshold-default","pushedAt":"2024-05-29T11:20:27.000Z","pushType":"branch_creation","commitsCount":0,"pusher":{"login":"denesb","name":"Botond Dénes","path":"/denesb","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/1389273?s=80&v=4"},"commit":{"message":"db/config.cc: increment components_memory_reclaim_threshold config default\n\nIncremented the components_memory_reclaim_threshold config's default\nvalue to 0.2 as the previous value was too strict and caused unnecessary\neviction in otherwise healthy clusters.\n\nFixes #18607\n\nSigned-off-by: Lakshmi Narayanan Sreethar ","shortMessageHtmlLink":"db/config.cc: increment components_memory_reclaim_threshold config de…"}},{"before":"067c2c66f12bc780ec3481a6267542e62eeffd4e","after":"85f333c99a69cd5eb767b3073506852b68878de6","ref":"refs/heads/compaction-cell-stats","pushedAt":"2024-05-29T10:37:41.000Z","pushType":"force_push","commitsCount":0,"pusher":{"login":"denesb","name":"Botond Dénes","path":"/denesb","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/1389273?s=80&v=4"},"commit":{"message":"querier: consume_page(): add rate-limiting to tombstone warnings\n\nThese warnings can be logged once per query, which could result in\nfilling the logs with thousands of log lines.\nRate-limit to once per 10sec.","shortMessageHtmlLink":"querier: consume_page(): add rate-limiting to tombstone 
warnings"}},{"before":null,"after":"8a77a74d0ec6b7e988c7a073e82226889c0bc8ba","ref":"refs/heads/rcs-n-concurrency","pushedAt":"2024-05-29T10:07:03.000Z","pushType":"branch_creation","commitsCount":0,"pusher":{"login":"denesb","name":"Botond Dénes","path":"/denesb","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/1389273?s=80&v=4"},"commit":{"message":"cql: fix a crash lurking in `ks_prop_defs::get_initial_tablets`\n\n`tablets_options->erase(it);` invalidates `it`, but it's still referred\nto later in the code in the last `else`, and when that code is invoked,\nwe get a `heap-use-after-free` crash.\n\nFixes: #18926\n\nCloses scylladb/scylladb#18936","shortMessageHtmlLink":"cql: fix a crash lurking in ks_prop_defs::get_initial_tablets"}},{"before":null,"after":"067c2c66f12bc780ec3481a6267542e62eeffd4e","ref":"refs/heads/compaction-cell-stats","pushedAt":"2024-05-29T10:05:22.000Z","pushType":"branch_creation","commitsCount":0,"pusher":{"login":"denesb","name":"Botond Dénes","path":"/denesb","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/1389273?s=80&v=4"},"commit":{"message":"querier: consume_page(): add rate-limiting to tombstone warnings\n\nThese warnings can be logged once per query, which could result in\nfilling the logs with thousands of log lines.\nRate-limit to once per 10sec.","shortMessageHtmlLink":"querier: consume_page(): add rate-limiting to tombstone warnings"}},{"before":"aa075313ecd559badc52c2ada3831d09dc4d1759","after":"1fe8f22d89d71b82691c7ec6ece9cd3af4ee9d72","ref":"refs/heads/massaging-service-streaming-tenant","pushedAt":"2024-05-28T14:58:36.000Z","pushType":"force_push","commitsCount":0,"pusher":{"login":"denesb","name":"Botond Dénes","path":"/denesb","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/1389273?s=80&v=4"},"commit":{"message":"alternator, scheduler: test reproducing RPC scheduling group bug\n\nThis patch adds a test for issue #18719: Although the Alternator TTL\nwork is supposedly done in the 
\"streaming\" scheduling group, it turned\nout we had a bug where work sent on behalf of that code to other nodes\nfailed to inherit the correct scheduling group, and was done in the\nnormal (\"statement\") group.\n\nBecause this problem only happens when more than one node is involved,\nthe test is in the multi-node test framework test/topology_experimental_raft.\n\nThe test uses the Alternator API. We already had in that framework a\ntest using the Alternator API (a test for alternator+tablets), so in\nthis patch we move the common Alternator utility functions to a common\nfile, test_alternator.py, where I also put the new test.\n\nThe test is based on metrics: We write expiring data, wait for it to expire,\nand then check the metrics on how much CPU work was done in the wrong\nscheduling group (\"statement\"). Before #18719 was fixed, a lot of work\nwas done there (more than half of the work done in the right group).\nAfter the issue was fixed in the previous patch, the work on the wrong\nscheduling group went down to zero.\n\nSigned-off-by: Nadav Har'El ","shortMessageHtmlLink":"alternator, scheduler: test reproducing RPC scheduling group bug"}},{"before":"dae5aec3326826626a3b277a9de9309d78f749dd","after":"aa075313ecd559badc52c2ada3831d09dc4d1759","ref":"refs/heads/massaging-service-streaming-tenant","pushedAt":"2024-05-28T14:09:01.000Z","pushType":"force_push","commitsCount":0,"pusher":{"login":"denesb","name":"Botond Dénes","path":"/denesb","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/1389273?s=80&v=4"},"commit":{"message":"alternator, scheduler: test reproducing RPC scheduling group bug\n\nThis patch adds a test for issue #18719: Although the Alternator TTL\nwork is supposedly done in the \"streaming\" scheduling group, it turned\nout we had a bug where work sent on behalf of that code to other nodes\nfailed to inherit the correct scheduling group, and was done in the\nnormal (\"statement\") group.\n\nBecause this problem only happens when 
more than one node is involved,\nthe test is in the multi-node test framework test/topology_experimental_raft.\n\nThe test uses the Alternator API. We already had in that framework a\ntest using the Alternator API (a test for alternator+tablets), so in\nthis patch we move the common Alternator utility functions to a common\nfile, test_alternator.py, where I also put the new test.\n\nThe test is based on metrics: We write expiring data, wait for it to expire,\nand then check the metrics on how much CPU work was done in the wrong\nscheduling group (\"statement\"). Before #18719 was fixed, a lot of work\nwas done there (more than half of the work done in the right group).\nAfter the issue was fixed in the previous patch, the work on the wrong\nscheduling group went down to zero.\n\nSigned-off-by: Nadav Har'El ","shortMessageHtmlLink":"alternator, scheduler: test reproducing RPC scheduling group bug"}},{"before":"cbd1fdbe133778801e83914d362f1c9a3dfa4340","after":"81bf6ae7f94fb47eff7c7fa882208e376ad9f349","ref":"refs/heads/compaction-reader-next-partition-fix-test","pushedAt":"2024-05-23T10:06:09.000Z","pushType":"force_push","commitsCount":0,"pusher":{"login":"denesb","name":"Botond Dénes","path":"/denesb","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/1389273?s=80&v=4"},"commit":{"message":"test/boost/mutation_reader_test: compacting_reader_next_partition: fix partition order\n\nThe test creates two partitions and passes them through the reader, but\nthe partitions are out-of-order. 
This is benign but best to fix it\nanyway.\nFound after bumping validation level inside the compactor.","shortMessageHtmlLink":"test/boost/mutation_reader_test: compacting_reader_next_partition: fi…"}},{"before":null,"after":"cbd1fdbe133778801e83914d362f1c9a3dfa4340","ref":"refs/heads/compaction-reader-next-partition-fix-test","pushedAt":"2024-05-23T10:04:47.000Z","pushType":"branch_creation","commitsCount":0,"pusher":{"login":"denesb","name":"Botond Dénes","path":"/denesb","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/1389273?s=80&v=4"},"commit":{"message":"test/boost/mutation_reader_test: compacting_reader_next_partition: fix partition order\n\nThe test creates two partitions and passes them through the reader, but\nthe partitions are out-of-order. This is benign but best to fix it\nanyway.\nFound after bumping validation level inside the compactor.","shortMessageHtmlLink":"test/boost/mutation_reader_test: compacting_reader_next_partition: fi…"}},{"before":"0cc3b92ea101ded5cd2a3cd93af1a401650febaa","after":"d049f681aee9564f4f640e9888cf18f46778cc5a","ref":"refs/heads/update-tools-java-regen-image","pushedAt":"2024-05-22T04:40:11.000Z","pushType":"force_push","commitsCount":0,"pusher":{"login":"denesb","name":"Botond Dénes","path":"/denesb","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/1389273?s=80&v=4"},"commit":{"message":"Update tools/java submodule\n\n* tools/java 4ee15fd9...88809606 (2):\n > Update Scylla Java driver to 3.11.5.3.\n > install-dependencies.sh: s/python/python3/\n\n[botond: regenerate toolchain image]","shortMessageHtmlLink":"Update tools/java submodule"}},{"before":"70ff0e7a521609fb3b6221db4f0d82a3ecaaa2ba","after":"0cc3b92ea101ded5cd2a3cd93af1a401650febaa","ref":"refs/heads/update-tools-java-regen-image","pushedAt":"2024-05-21T10:03:10.000Z","pushType":"force_push","commitsCount":0,"pusher":{"login":"denesb","name":"Botond 
Dénes","path":"/denesb","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/1389273?s=80&v=4"},"commit":{"message":"Update tools/java submodule\n\n* tools/java 4ee15fd9ea...88809606c8 (11):\n > Update Scylla Java driver to 3.11.5.3.\n > install-dependencies.sh: s/python/python3/\n\n[botond: regenerate toolchain image]","shortMessageHtmlLink":"Update tools/java submodule"}},{"before":null,"after":"70ff0e7a521609fb3b6221db4f0d82a3ecaaa2ba","ref":"refs/heads/update-tools-java-regen-image","pushedAt":"2024-05-21T10:02:12.000Z","pushType":"branch_creation","commitsCount":0,"pusher":{"login":"denesb","name":"Botond Dénes","path":"/denesb","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/1389273?s=80&v=4"},"commit":{"message":"Update tools/java submodule\n\n* tools/java 4ee15fd9ea...88809606c8 (11):\n > Update Scylla Java driver to 3.11.5.3.\n > install-dependencies.sh: s/python/python3/\n\n[botond: regenerate toolchain image]","shortMessageHtmlLink":"Update tools/java submodule"}},{"before":"0297676433670285bd60d89d520b2fdef5b63541","after":"11fa79a53725d153af754c12f2d5d0d3efd64e3a","ref":"refs/heads/isolation.md-update","pushedAt":"2024-05-21T07:12:27.000Z","pushType":"force_push","commitsCount":0,"pusher":{"login":"denesb","name":"Botond Dénes","path":"/denesb","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/1389273?s=80&v=4"},"commit":{"message":"docs: isolation.md: add section on RPC call isolation","shortMessageHtmlLink":"docs: isolation.md: add section on RPC call isolation"}},{"before":"18e15d3ce18791bea53ab74c0b790dc54c705e70","after":"dae5aec3326826626a3b277a9de9309d78f749dd","ref":"refs/heads/massaging-service-streaming-tenant","pushedAt":"2024-05-20T13:55:39.000Z","pushType":"force_push","commitsCount":0,"pusher":{"login":"denesb","name":"Botond Dénes","path":"/denesb","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/1389273?s=80&v=4"},"commit":{"message":"alternator, scheduler: test reproducing RPC scheduling group 
bug\n\nThis patch adds a test for issue #18719: Although the Alternator TTL\nwork is supposedly done in the \"streaming\" scheduling group, it turned\nout we had a bug where work sent on behalf of that code to other nodes\nfailed to inherit the correct scheduling group, and was done in the\nnormal (\"statement\") group.\n\nBecause this problem only happens when more than one node is involved,\nthe test is in the multi-node test framework test/topology_experimental_raft.\n\nThe test uses the Alternator API. We already had in that framework a\ntest using the Alternator API (a test for alternator+tablets), so in\nthis patch we move the common Alternator utility functions to a common\nfile, test_alternator.py, where I also put the new test.\n\nThe test is based on metrics: We write expiring data, wait for it to expire,\nand then check the metrics on how much CPU work was done in the wrong\nscheduling group (\"statement\"). Before #18719 was fixed, a lot of work\nwas done there (more than half of the work in the right group). 
After it\nwas fixed, the work on the wrong scheduling group went down to zero.\n\nThe test relies *slightly* on timing: It needs write of 100 rows to\nfinish in 2 seconds, and their deletion to finish in 2 second.\nI believe that these durations will be enough even in very slow\ndebug runs.\n\nSigned-off-by: Nadav Har'El ","shortMessageHtmlLink":"alternator, scheduler: test reproducing RPC scheduling group bug"}},{"before":"0fa0f77fed61828ef69e84b16cd9a195aa022e32","after":"0297676433670285bd60d89d520b2fdef5b63541","ref":"refs/heads/isolation.md-update","pushedAt":"2024-05-20T13:04:45.000Z","pushType":"force_push","commitsCount":0,"pusher":{"login":"denesb","name":"Botond Dénes","path":"/denesb","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/1389273?s=80&v=4"},"commit":{"message":"docs: isolation.md: add section on RPC call isolation","shortMessageHtmlLink":"docs: isolation.md: add section on RPC call isolation"}},{"before":"18cd5a0c2712249ee533cf27f939e0f678179bbb","after":"0fa0f77fed61828ef69e84b16cd9a195aa022e32","ref":"refs/heads/isolation.md-update","pushedAt":"2024-05-20T12:35:36.000Z","pushType":"force_push","commitsCount":0,"pusher":{"login":"denesb","name":"Botond Dénes","path":"/denesb","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/1389273?s=80&v=4"},"commit":{"message":"docs: isolation.md: add section on RPC call isolation","shortMessageHtmlLink":"docs: isolation.md: add section on RPC call isolation"}},{"before":null,"after":"18cd5a0c2712249ee533cf27f939e0f678179bbb","ref":"refs/heads/isolation.md-update","pushedAt":"2024-05-20T08:20:09.000Z","pushType":"branch_creation","commitsCount":0,"pusher":{"login":"denesb","name":"Botond Dénes","path":"/denesb","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/1389273?s=80&v=4"},"commit":{"message":"docs: isolation.md: add section on RPC call isolation","shortMessageHtmlLink":"docs: isolation.md: add section on RPC call 
isolation"}},{"before":null,"after":"18e15d3ce18791bea53ab74c0b790dc54c705e70","ref":"refs/heads/massaging-service-streaming-tenant","pushedAt":"2024-05-17T12:31:59.000Z","pushType":"branch_creation","commitsCount":0,"pusher":{"login":"denesb","name":"Botond Dénes","path":"/denesb","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/1389273?s=80&v=4"},"commit":{"message":"main: add maintenance tenant to messaging_service's scheduling config\n\nCurrently only the user tenant (statement scheduling group) and system\n(default scheduling group) tenants exist, as we used to have only\nuser-initiated operations and system (internal) ones. Now there is a need\nto distinguish between two kinds of system operation: foreground and\nbackground ones. The former should use the system tenant while the\nlatter will use the new maintenance tenant (streaming scheduling group).\n\nFixes: #18719","shortMessageHtmlLink":"main: add maintenance tenant to messaging_service's scheduling config"}},{"before":"a8dbe997d2662e70126f4ff22585bfea13b6f6fb","after":"8f617b3663cf6ad4ee779e43e92c650be751fcd5","ref":"refs/heads/streaming-compact-live-update","pushedAt":"2024-05-17T12:02:58.000Z","pushType":"force_push","commitsCount":0,"pusher":{"login":"denesb","name":"Botond Dénes","path":"/denesb","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/1389273?s=80&v=4"},"commit":{"message":"test/topology_custom: add test for enable_compacting_data_for_streaming_and_repair live-update\n\nAvoid the live-update feature of this config item breaking\nsilently.","shortMessageHtmlLink":"test/topology_custom: add test for enable_compacting_data_for_streami…"}},{"before":null,"after":"3bf4e4ccdbe5c098d3d1f9a31d19612c3313e986","ref":"refs/heads/compacting-reader-rm-ignore-partition-end","pushedAt":"2024-05-17T10:02:35.000Z","pushType":"branch_creation","commitsCount":0,"pusher":{"login":"denesb","name":"Botond 
Dénes","path":"/denesb","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/1389273?s=80&v=4"},"commit":{"message":"readers: compacting_reader: remove unused _ignore_partition_end\n\nThis member is read-only since ac44efea11f53804eaf021d01445bc9a25c4019d\nso remove it.","shortMessageHtmlLink":"readers: compacting_reader: remove unused _ignore_partition_end"}},{"before":null,"after":"a8dbe997d2662e70126f4ff22585bfea13b6f6fb","ref":"refs/heads/streaming-compact-live-update","pushedAt":"2024-05-16T07:46:53.000Z","pushType":"branch_creation","commitsCount":0,"pusher":{"login":"denesb","name":"Botond Dénes","path":"/denesb","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/1389273?s=80&v=4"},"commit":{"message":"test/topology_custom: add test for enable_compacting_data_for_streaming_and_repair live-update\n\nAvoid the live-update feature of this config item breaking\nsilently.","shortMessageHtmlLink":"test/topology_custom: add test for enable_compacting_data_for_streami…"}},{"before":null,"after":"5fe2d750bab8df7c5b60bb0ad2e6ff2fbfa1c755","ref":"refs/heads/tombstone-limit-tablets","pushedAt":"2024-05-14T12:18:03.000Z","pushType":"branch_creation","commitsCount":0,"pusher":{"login":"denesb","name":"Botond Dénes","path":"/denesb","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/1389273?s=80&v=4"},"commit":{"message":"test/cql-pytest: test_tombstone_limit.py: enable xfailing tests\n\nThese tests were marked as xfail because they used to fail with tablets.\nThey don't anymore, so remove the xfail.\n\nFixes: #16486","shortMessageHtmlLink":"test/cql-pytest: test_tombstone_limit.py: enable xfailing tests"}},{"before":null,"after":"78afb3644cb9f610bde59fd2cbafe71c9e616fbc","ref":"refs/heads/mutation-validator-true-none","pushedAt":"2024-05-14T10:15:58.000Z","pushType":"branch_creation","commitsCount":0,"pusher":{"login":"denesb","name":"Botond 
Dénes","path":"/denesb","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/1389273?s=80&v=4"},"commit":{"message":"test/boost/mutation_fragment_test.cc: add test for validator validation levels\n\nTo make sure that the validator doesn't validate what the validation\nlevel doesn't include.","shortMessageHtmlLink":"test/boost/mutation_fragment_test.cc: add test for validator validati…"}},{"before":"329634ad343c6d2f6c719e975b4b7ca57632d3b3","after":"3392b7a494fee84a0ad6e2bec809f93bd23b3d49","ref":"refs/heads/nodetool-status-doc-ks-tbl","pushedAt":"2024-05-13T12:28:04.000Z","pushType":"force_push","commitsCount":0,"pusher":{"login":"denesb","name":"Botond Dénes","path":"/denesb","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/1389273?s=80&v=4"},"commit":{"message":"docs: nodetool status: document keyspace and table arguments\n\nAlso fix the example nodetool status invocation.\n\nFixes: #17840","shortMessageHtmlLink":"docs: nodetool status: document keyspace and table arguments"}}],"hasNextPage":true,"hasPreviousPage":false,"activityType":"all","actor":null,"timePeriod":"all","sort":"DESC","perPage":30,"cursor":"djE6ks8AAAAEWSVnJQA","startCursor":null,"endCursor":null}},"title":"Activity · denesb/scylla"}