Merge branch 'master' into fix_default_node
barshaul committed Nov 29, 2022
2 parents 2388569 + f492f85 commit de43a3a
Showing 12 changed files with 91 additions and 11 deletions.
3 changes: 3 additions & 0 deletions .github/workflows/integration.yaml
@@ -16,6 +16,9 @@ on:
schedule:
- cron: '0 1 * * *' # nightly build

permissions:
contents: read # to fetch code (actions/checkout)

jobs:

dependency-audit:
3 changes: 3 additions & 0 deletions .github/workflows/pypi-publish.yaml
@@ -4,6 +4,9 @@ on:
release:
types: [published]

permissions:
contents: read # to fetch code (actions/checkout)

jobs:

build_and_package:
5 changes: 5 additions & 0 deletions .github/workflows/release-drafter.yml
@@ -6,8 +6,13 @@ on:
branches:
- master

permissions: {}
jobs:
update_release_draft:
permissions:
pull-requests: write # to add label to PR (release-drafter/release-drafter)
contents: write # to create a github release (release-drafter/release-drafter)

runs-on: ubuntu-latest
steps:
# Drafts your next Release notes as Pull Requests are merged into "master"
5 changes: 5 additions & 0 deletions .github/workflows/stale-issues.yml
@@ -3,8 +3,13 @@ on:
schedule:
- cron: "0 0 * * *"

permissions: {}
jobs:
stale:
permissions:
issues: write # to close stale issues (actions/stale)
pull-requests: write # to close stale PRs (actions/stale)

runs-on: ubuntu-latest
steps:
- uses: actions/stale@v3
1 change: 1 addition & 0 deletions .readthedocs.yml
@@ -3,6 +3,7 @@ version: 2
python:
install:
- requirements: ./docs/requirements.txt
- requirements: requirements.txt

build:
os: ubuntu-20.04
1 change: 1 addition & 0 deletions CHANGES
@@ -28,6 +28,7 @@
* Fixed "cannot pickle '_thread.lock' object" bug (#2354, #2297)
* Added CredentialsProvider class to support password rotation
* Enable Lock for asyncio cluster mode
* Fixed a bug where Sentinel.execute_command did not execute across the entire Sentinel cluster (#2458)
* Added a replacement for the default cluster node in the event of failure (#2463)

* 4.1.3 (Feb 8, 2022)
2 changes: 2 additions & 0 deletions docs/backoff.rst
@@ -1,3 +1,5 @@
.. _backoff-label:

Backoff
#############

5 changes: 0 additions & 5 deletions docs/examples/connection_examples.ipynb
@@ -116,15 +116,13 @@
"user_connection.ping()"
],
"metadata": {}
}
},
{
"cell_type": "markdown",
"source": [
"## Connecting to a redis instance with standard credential provider"
],
"metadata": {}
}
},
{
"cell_type": "code",
@@ -162,15 +160,13 @@
"user_connection.ping()"
],
"metadata": {}
}
},
{
"cell_type": "markdown",
"source": [
"## Connecting to a redis instance first with an initial credential set and then calling the credential provider"
],
"metadata": {}
}
},
{
"cell_type": "code",
@@ -200,7 +196,6 @@
"cred_provider = InitCredsSetCredentialProvider(username=\"init_user\", password=\"init_pass\")"
],
"metadata": {}
}
},
{
"cell_type": "markdown",
2 changes: 1 addition & 1 deletion docs/exceptions.rst
@@ -1,4 +1,4 @@

.. _exceptions-label:

Exceptions
##########
67 changes: 66 additions & 1 deletion docs/retry.rst
@@ -2,4 +2,69 @@ Retry Helpers
#############

.. automodule:: redis.retry
:members:
:members:


Retry in Redis Standalone
**************************

>>> from redis.backoff import ExponentialBackoff
>>> from redis.retry import Retry
>>> from redis.client import Redis
>>> from redis.exceptions import (
...     BusyLoadingError,
...     ConnectionError,
...     TimeoutError,
... )
>>>
>>> # Run 3 retries with exponential backoff strategy
>>> retry = Retry(ExponentialBackoff(), 3)
>>> # Redis client with retries on custom errors
>>> r = Redis(host='localhost', port=6379, retry=retry, retry_on_error=[BusyLoadingError, ConnectionError, TimeoutError])
>>> # Redis client with retries on TimeoutError only
>>> r_only_timeout = Redis(host='localhost', port=6379, retry=retry, retry_on_timeout=True)

As the example above shows, the Redis client supports three parameters for configuring retry behaviour:

* ``retry``: :class:`~.Retry` instance with a :ref:`backoff-label` strategy and the max number of retries
* ``retry_on_error``: list of :ref:`exceptions-label` to retry on
* ``retry_on_timeout``: if ``True``, retry on :class:`~.TimeoutError` only

If ``retry_on_error`` or ``retry_on_timeout`` is passed without a ``retry`` instance,
the client defaults to ``Retry(NoBackoff(), 1)`` (a single retry immediately after the first failure).
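The shape of this behaviour can be illustrated with a minimal, self-contained sketch. The ``NoBackoff`` and ``Retry`` classes below are simplified stand-ins, not redis-py's actual implementation, though the real ``Retry.call_with_retry`` helper follows the same pattern:

```python
import time


class NoBackoff:
    """Zero wait between attempts (mirrors the idea of redis.backoff.NoBackoff)."""

    def compute(self, failures: int) -> float:
        return 0.0


class Retry:
    """Minimal retry helper sketch: run `do`; on a supported error,
    call `fail`, back off, and try again up to `retries` extra times."""

    def __init__(self, backoff, retries,
                 supported_errors=(ConnectionError, TimeoutError)):
        self._backoff = backoff
        self._retries = retries
        self._supported_errors = supported_errors

    def call_with_retry(self, do, fail):
        failures = 0
        while True:
            try:
                return do()
            except self._supported_errors as error:
                failures += 1
                fail(error)
                if failures > self._retries:
                    raise
                time.sleep(self._backoff.compute(failures))


# Retry(NoBackoff(), 1): one immediate retry after the first failure.
attempts = []

def flaky():
    attempts.append(1)
    if len(attempts) < 2:
        raise ConnectionError("transient")
    return "pong"

result = Retry(NoBackoff(), 1).call_with_retry(flaky, lambda err: None)
# the first call fails, the single retry succeeds
```

With ``Retry(NoBackoff(), 0)`` instead, the first ``ConnectionError`` would propagate immediately, since no retry budget remains.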


Retry in Redis Cluster
**************************

>>> from redis.backoff import ExponentialBackoff
>>> from redis.retry import Retry
>>> from redis.cluster import RedisCluster
>>>
>>> # Run 3 retries with exponential backoff strategy
>>> retry = Retry(ExponentialBackoff(), 3)
>>> # Redis Cluster client with retries
>>> rc = RedisCluster(host='localhost', port=6379, retry=retry, cluster_error_retry_attempts=2)

Retry behaviour in Redis Cluster differs slightly from Standalone:

* ``retry``: :class:`~.Retry` instance with a :ref:`backoff-label` strategy and the max number of retries; the default is ``Retry(NoBackoff(), 0)``
* ``cluster_error_retry_attempts``: number of times to retry before raising an error when a :class:`~.TimeoutError`, :class:`~.ConnectionError`, or :class:`~.ClusterDownError` is encountered; the default is ``3``

Let's consider the following example:

>>> from redis.backoff import ExponentialBackoff
>>> from redis.retry import Retry
>>> from redis.cluster import RedisCluster
>>>
>>> rc = RedisCluster(host='localhost', port=6379, retry=Retry(ExponentialBackoff(), 6), cluster_error_retry_attempts=1)
>>> rc.set('foo', 'bar')

#. the client library calculates the hash slot for key 'foo'.
#. given the hash slot, it then determines which node to connect to, in order to execute the command.
#. during the connection, a :class:`~.ConnectionError` is raised.
#. because we set ``retry=Retry(ExponentialBackoff(), 6)``, the client tries to reconnect to the node up to 6 times, with an exponential backoff between each attempt.
#. even after 6 retries, the client is still unable to connect.
#. because we set ``cluster_error_retry_attempts=1``, before giving up, the client starts a cluster update, removes the failed node from the startup nodes, and re-initializes the cluster.
#. after the cluster has been re-initialized, it starts a new cycle of retries, up to 6 retries, with an exponential backoff.
#. if the client connects successfully, the command is executed; otherwise, all attempts have been exhausted and the exception is finally raised to the caller.
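The nested retry cycles described above can be simulated with a small sketch: an inner per-node retry loop wrapped in an outer loop driven by ``cluster_error_retry_attempts``. The function and its parameters are illustrative, not redis-py internals:

```python
def execute_with_cluster_retries(connect, node_retries=6,
                                 cluster_error_retry_attempts=1):
    """Try `connect` up to `node_retries` extra times; when the inner
    cycle is exhausted, 'update the cluster topology' and start a fresh
    cycle, up to `cluster_error_retry_attempts` times, before giving up."""
    for cluster_attempt in range(cluster_error_retry_attempts + 1):
        for attempt in range(node_retries + 1):
            try:
                return connect()
            except ConnectionError:
                pass  # back off (omitted here) and retry the same node
        # inner retries exhausted: in redis-py this is where the failed
        # node would be dropped and the cluster re-initialized
    raise ConnectionError("all retries exhausted")


calls = []

def connect():
    calls.append(1)
    if len(calls) < 9:           # first full inner cycle (7 tries) fails,
        raise ConnectionError()  # plus one more attempt in the new cycle
    return "OK"

result = execute_with_cluster_retries(connect)
```

With ``node_retries=6`` and ``cluster_error_retry_attempts=1``, the client makes at most 14 connection attempts (two cycles of 7) before the error reaches the caller.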
4 changes: 2 additions & 2 deletions redis/sentinel.py
@@ -200,10 +200,10 @@ def execute_command(self, *args, **kwargs):
kwargs.pop("once")

if once:
random.choice(self.sentinels).execute_command(*args, **kwargs)
else:
for sentinel in self.sentinels:
sentinel.execute_command(*args, **kwargs)
else:
random.choice(self.sentinels).execute_command(*args, **kwargs)
return True

def __repr__(self):
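The effect of swapping these two branches can be sketched with stand-in objects; ``FakeSentinel`` and the free ``execute_command`` function below are illustrative, not redis-py's actual classes:

```python
import random


class FakeSentinel:
    """Stand-in for a sentinel connection; records the commands it runs."""

    def __init__(self):
        self.commands = []

    def execute_command(self, *args, **kwargs):
        self.commands.append(args)
        return True


def execute_command(sentinels, *args, once=False):
    """Corrected dispatch: with once=True the command goes to a single
    random sentinel; otherwise it is broadcast to every sentinel in the
    cluster. Before this fix the two branches were inverted."""
    if once:
        random.choice(sentinels).execute_command(*args)
    else:
        for sentinel in sentinels:
            sentinel.execute_command(*args)
    return True


sentinels = [FakeSentinel() for _ in range(3)]
execute_command(sentinels, "SENTINEL", "FLUSHCONFIG")             # broadcast: 3 calls
execute_command(sentinels, "SENTINEL", "FLUSHCONFIG", once=True)  # one more call
total = sum(len(s.commands) for s in sentinels)
```

Before the fix, the default (broadcast) path would have hit only one random sentinel, which is the bug tracked in #2458.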
4 changes: 2 additions & 2 deletions tests/test_asyncio/test_cluster.py
@@ -2613,18 +2613,18 @@ async def test_can_run_concurrent_pipelines(self, r: RedisCluster) -> None:
)

@pytest.mark.onlycluster
async def test_cluster_pipeline_with_default_node_error_command(self, r):
async def test_pipeline_with_default_node_error_command(self, create_redis):
"""
Test that the default node is being replaced when it raises a relevant exception
"""
r = await create_redis(cls=RedisCluster, flushdb=False)
curr_default_node = r.get_default_node()
err = ConnectionError("error")
cmd_count = await r.command_count()
mock_node_resp_exc(curr_default_node, err)
async with r.pipeline(transaction=False) as pipe:
pipe.command_count()
result = await pipe.execute(raise_on_error=False)

assert result[0] == err
assert r.get_default_node() != curr_default_node
pipe.command_count()
