[Enhancement] Retry backoff reset #1978

Closed
swimmesberger opened this issue Nov 28, 2019 · 2 comments · Fixed by #1979
Labels
type/enhancement A general enhancement
Milestone

Comments


swimmesberger commented Nov 28, 2019

Currently, using retryBackoff on a hot source is awkward because the internal backoff counter is never reset. I would like the ability to say: "if an error happens, retry with a backoff until an onNext item is emitted; the next time an error happens, the backoff should start from the first backoff value again".

Motivation

For hot sources it makes sense to connect to something, receive some values, and, when an error happens, completely resubscribe with a backoff. E.g. a connection to a service that goes down: we retry with backoff and jitter, and when the connection succeeds again and the service later goes down once more, I want an immediate retry, not the delay the backoff had previously reached.

Desired solution

Somehow change the retryBackoff operator so that a reset of the internal throwable publisher can be signaled.

Considered alternatives

I haven't found a suitable alternative to retryBackoff that works for this use case. Maybe I have misunderstood something and it is possible with some combination of existing operators.
I know it would be possible if I separated the Flux that provides the data from the Mono that establishes the connection, but it feels strange to do that when retryBackoff works fine for this use case and simply does not support resetting the backoff at some point.

Example:

  import java.time.Duration;
  import java.util.concurrent.atomic.AtomicLong;
  import reactor.core.publisher.Flux;

  public static void main(String[] args) throws InterruptedException {
    // Emit an increasing counter every 100ms, failing on every 10th value.
    Flux.<Long, AtomicLong>generate(() -> new AtomicLong(0), (counter, sink) -> {
      try {
        long currentCount = counter.incrementAndGet();
        if (currentCount % 10 == 0) {
          System.out.println("Error!");
          sink.error(new RuntimeException("Error!"));
        } else {
          sink.next(currentCount);
        }
        Thread.sleep(100);
      } catch (InterruptedException e) {
        sink.complete();
      }
      return counter;
    })
    // Exponential backoff from 1 second up to 1 minute, retrying forever.
    .retryBackoff(Long.MAX_VALUE, Duration.ofSeconds(1), Duration.ofMinutes(1))
    .subscribe(System.out::println);

    // Keep the main thread alive.
    synchronized (Thread.currentThread()) {
      Thread.currentThread().wait();
    }
  }

When you run this example you can see that it takes longer and longer before values show up again. In my use case I want the wait time between resubscriptions to be reset: as long as errors keep happening the delay should grow each time, but after a success it should start over at the first value.

@simonbasle
Member

Something puzzles me though: a hot source is still terminated by an onError signal, so I don't understand how that aspect comes into play. A pure hot source couldn't be retried at all, by definition. Something like a ConnectableFlux (or the result of applying the share() operator) would attempt to reconnect upstream if that is possible.

@swimmesberger
Author

swimmesberger commented Nov 28, 2019

You can run the sample I provided at the end to get a better understanding of what I mean. The example prints:

  • 1-9, then ERROR, then it waits 1 second
  • 1-9, then ERROR, then it waits more than 1 second
    ...
  • after a couple of errors it ALWAYS waits 1 minute

What I want is for it to always wait 1 second, and only when multiple errors occur in a row should it wait longer.

It is true that the hot source is terminated, but after the retry a new connection is made to the source, and the data from the period when the connection was lost is gone too - we still want to retry as fast as possible so we don't miss even more data.
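For reference, the fix that eventually landed (see the commits referenced below) addresses exactly this: the `transientErrors(true)` option of the new `Retry` spec resets the backoff after each onNext. A minimal sketch, assuming the released `reactor.util.retry.Retry` API (reactor-core 3.3.4+):

  import java.time.Duration;
  import reactor.core.publisher.Flux;
  import reactor.util.retry.Retry;

  static Flux<Long> resilient(Flux<Long> hotSource) {
    return hotSource.retryWhen(
        Retry.backoff(Long.MAX_VALUE, Duration.ofSeconds(1))
             .maxBackoff(Duration.ofMinutes(1))
             // reset the delay back to 1 second as soon as an onNext gets through
             .transientErrors(true));
  }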

@simonbasle simonbasle added the type/enhancement A general enhancement label Jan 28, 2020
simonbasle added a commit that referenced this issue Feb 7, 2020
This introduces a retry variant based on a `Function`, a bit like
retryWhen, except the input is not merely a `Throwable` but a
`RetrySignal`. This allows the retry function to check whether there was
some success (onNext) since the last retry attempt, in which case the
current attempt can be interpreted as if it were the first ever error.

This is especially useful for cases where exponential backoff delays
should be reset, for long-lived sequences that only see intermittent
bursts of errors.

The Function is actually provided through a `Supplier`, and one such
supplier is the newly introduced `Retry.Builder`.

The builder is simpler than the one in addons, but covers some good
ground. It allows a predicate on either the exponential backoff strategy
or the simple retry strategy. In both cases one can also choose to
consider `transientError(boolean)` (reset on onNext). For the simple
case, this means that the remaining number of retries is reset in case
of an onNext. For the exponential case, this means the retry delay is
reset to the minimum after an onNext.

The old `retryWhen` decorates the user-provided function to only look at
the exception.

We have only one builder, which switches from simple to backoff as soon
as one of the backoff configuration methods is invoked. One cannot
easily switch back. Factory methods help select the backoff strategy
right away.

The API is based on a `Supplier<Function>` so that it is not constrained
to the provided `Retry.Builder`: anybody can easily write their own
builder of advanced retry functions.
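As an illustration of that extensibility point, a sketch of a hand-rolled strategy; it uses the `Retry.from` factory and `RetrySignal#totalRetriesInARow` as they exist in the released API, which differs slightly from the interim `Supplier<Function>` shape described above:

  import java.time.Duration;
  import reactor.core.publisher.Mono;
  import reactor.util.retry.Retry;

  // Retry immediately on the first error after a success, otherwise wait a
  // flat second. totalRetriesInARow() resets whenever an onNext is seen.
  Retry custom = Retry.from(companion -> companion
      .concatMap(signal -> signal.totalRetriesInARow() == 0
          ? Mono.just(0L)                        // first failure in a row: retry now
          : Mono.delay(Duration.ofSeconds(1)))); // otherwise back off 1 second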
@simonbasle simonbasle linked a pull request Feb 7, 2020 that will close this issue
simonbasle added a commit that referenced this issue Feb 7, 2020
simonbasle added a commit that referenced this issue Feb 17, 2020
simonbasle added a commit that referenced this issue Feb 18, 2020
simonbasle added a commit that referenced this issue Feb 19, 2020
simonbasle added a commit that referenced this issue Mar 2, 2020
simonbasle added a commit that referenced this issue Mar 9, 2020
simonbasle added a commit that referenced this issue Mar 10, 2020
simonbasle added a commit that referenced this issue Mar 16, 2020
simonbasle added a commit that referenced this issue Mar 18, 2020
This commit is a large refactor of the `retryWhen` operator in order
to add several features.

Fixes #1978
Fixes #1905
Fixes #2063
Fixes #2052
Fixes #2064

 * Expose more state to `retryWhen` companion (#1978)

This introduces a retryWhen variant based on a `Retry` functional
interface. This "function" deals not with a Flux of `Throwable` but with
one of `RetrySignal`. This allows the retry function to check whether
there was some success (onNext) since the last retry attempt, in which
case the current attempt can be interpreted as if it were the first ever
error.

This is especially useful for cases where exponential backoff delays
should be reset, for long-lived sequences that only see intermittent
bursts of errors (transient errors).

We take that opportunity to offer a builder for such a function that
can take transient errors into account.

 * The `Retry` builders

Inspired by the `Retry` builder in addons, we introduce two classes:
`RetrySpec` and `RetryBackoffSpec`. We name them Spec and not Builder
because they don't require calling a `build()` method. Rather, each
configuration step produces A) a new instance (copy-on-write) that B)
is by itself already a `Retry`.

The `Retry` + `xxxSpec` approach allows us to offer two standard
strategies that both support transient error handling, while letting
users write their own strategy (either as a standalone `Retry` concrete
implementation, or as a builder/spec that builds one).

Both specs allow handling `transientErrors(boolean)`, which when true
relies on the extra state exposed by the `RetrySignal`. For the simple
case, this means that the remaining number of retries is reset in case
of an onNext. For the exponential case, this means the retry delay is
reset to the minimum after an onNext (#1978).
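Both flavors in a sketch, assuming the released factory names (`Retry.max` for the simple spec, `Retry.backoff` for the exponential one):

  import java.time.Duration;
  import reactor.core.publisher.Flux;
  import reactor.util.retry.Retry;

  static <T> Flux<T> withTransientRetries(Flux<T> source) {
    // Simple: at most 5 retries in a row; the budget refills after each onNext.
    // return source.retryWhen(Retry.max(5).transientErrors(true));

    // Backoff: the delay grows from 100ms but snaps back to 100ms after an onNext.
    return source.retryWhen(Retry.backoff(5, Duration.ofMillis(100))
                                 .transientErrors(true));
  }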

Additionally, the introduction of the specs allows us to add more
features and support some features on more combinations, see below.

 * `filter` exceptions (#1905)

Previously we could only filter exceptions to be retried on the simple
long-based `retry` methods. With the specs we can `filter` in both
immediate and exponential backoff retry strategies.
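For instance (a sketch, assuming the released spec API):

  import java.io.IOException;
  import java.time.Duration;
  import reactor.core.publisher.Flux;
  import reactor.util.retry.Retry;

  static <T> Flux<T> retryOnlyIo(Flux<T> source) {
    // Only IOExceptions trigger a retry; any other error propagates immediately.
    return source.retryWhen(Retry.backoff(3, Duration.ofMillis(250))
                                 .filter(t -> t instanceof IOException));
  }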

 * Add pre/post attempt hooks (#2063)

The specs let the user configure two types of pre/post hooks.
Note that if the retry attempt is denied (e.g. we've reached the maximum
number of attempts), these hooks are NOT executed.

Synchronous hooks (`doBeforeRetry` and `doAfterRetry`) are side effects
that should not block for too long and are executed right before and
right after the retry trigger is sent by the companion publisher.

Asynchronous hooks (`doBeforeRetryAsync` and `doAfterRetryAsync`) are
composed into the companion publisher which generates the triggers, and
they both delay the emission of said trigger in a non-blocking and
asynchronous fashion. Having pre and post hooks allows a user to better
manage the order in which these asynchronous side effects should be
performed.
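A sketch of both hook styles, assuming the released spec API; the `releaseConnection` parameter stands in for any `Mono<Void>`-returning cleanup:

  import reactor.core.publisher.Flux;
  import reactor.core.publisher.Mono;
  import reactor.util.retry.Retry;

  static <T> Flux<T> retryWithHooks(Flux<T> source, Mono<Void> releaseConnection) {
    return source.retryWhen(Retry.max(3)
        // synchronous side effects, right before/after the retry trigger is sent
        .doBeforeRetry(rs -> System.err.println("retrying after: " + rs.failure()))
        .doAfterRetry(rs -> System.err.println("retry #" + rs.totalRetries() + " sent"))
        // asynchronous hook: the trigger is delayed until this Mono completes
        .doBeforeRetryAsync(rs -> releaseConnection));
  }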

 * Retry exhausted meaningful exception (#2052)

The `Retry` functions implemented by both specs throw a
`RuntimeException` with a meaningful message when the configured maximum
number of attempts is reached. That exception can be pinpointed by
calling the utility method `Exceptions.isRetryExhausted`.

For further customization, users can replace that default with their
own custom exception via `onRetryExhaustedThrow`. The BiFunction lets
the user access the Spec, which has public final fields that can be
used to produce a meaningful message.
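For example (a sketch, assuming the released spec API):

  import java.time.Duration;
  import reactor.core.publisher.Flux;
  import reactor.util.retry.Retry;

  static <T> Flux<T> retryWithCustomExhaustion(Flux<T> source) {
    return source.retryWhen(Retry.backoff(3, Duration.ofMillis(250))
        .onRetryExhaustedThrow((spec, rs) -> new IllegalStateException(
            "gave up after " + rs.totalRetries() + " retries", rs.failure())));
    // The default exhaustion exception can instead be detected downstream
    // with Exceptions.isRetryExhausted(error).
  }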

 * Ensure retry hooks completion is taken into account (#2064)

The old `retryBackoff` would internally use a `flatMap`, which can
cause issues. The Spec functions use `concatMap`.

 /!\ CAVEAT

This commit deprecates all of the retryBackoff methods as well as the
original `retryWhen` (based on Throwable companion publisher) in order
to introduce the new `RetrySignal` based signature.

Using the explicit `Retry` type lifts any ambiguity when using the Spec,
but using a lambda instead will raise some ambiguity at call sites of
`retryWhen`.

We deem that acceptable given that the migration is quite easy
(turn `e -> whatever(e)` into `(Retry) rs -> whatever(rs.failure())`).
Furthermore, `retryWhen` is an advanced operator, and we expect most
uses to be combined with the retry builder in reactor-extra, which lifts
the ambiguity itself.
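Since `Retry` ended up as an abstract class in the released API, the `Retry.withThrowable` bridge (rather than a lambda cast) is the practical migration path; a sketch, with `source` as a placeholder:

  import java.time.Duration;
  import reactor.core.publisher.Flux;
  import reactor.util.retry.Retry;

  static <T> Flux<T> migrated(Flux<T> source) {
    // Before (deprecated): companion function over Flux<Throwable>
    // return source.retryWhen(errors -> errors.delayElements(Duration.ofSeconds(1)));

    // After: same function, bridged through Retry.withThrowable
    return source.retryWhen(
        Retry.withThrowable(errors -> errors.delayElements(Duration.ofSeconds(1))));
  }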