
Test Max Retries #135

Closed
allenwyma opened this issue Jan 3, 2017 · 26 comments · Fixed by #477 or #536

@allenwyma

I made an HTTP adapter:

session = requests.Session()
session.verify = False

# Retry up to 3 times (4 attempts in total) before erroring out
retry_adapter = requests.adapters.HTTPAdapter(max_retries=3)
session.mount('https://', retry_adapter)

I wanted to make sure it will automatically retry, but it seems it's not getting triggered right:

exception = ConnectionError('Bad connection')
with responses.RequestsMock(assert_all_requests_are_fired=False) as rsp:
    rsp.add(responses.POST, url, body=exception)
    rsp.add(responses.POST, url, body=exception)
    rsp.add(responses.POST, url, body='{}')
    session.post(url=url, json={})

It just keeps raising the exception.

@tuukkamustonen

Looks like responses mocks the adapter.send() call in Session.send() and re-routes it into RequestsMock, which just doesn't support retrying...
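That re-routing can be pictured with a stdlib-only sketch (every name here is a stand-in, not the actual responses internals): the Retry loop lives below adapter.send(), so replacing send() bypasses it entirely.

```python
# Stand-ins: a "urllib3 layer" that would perform retries, and a mocked
# adapter.send() that short-circuits above it, so the retry loop never runs.
retries_performed = []

def urllib3_urlopen(retries=3):
    # This is where urllib3's Retry logic would execute...
    for attempt in range(retries):
        retries_performed.append(attempt)

def mocked_send(request):
    # ...but a RequestsMock-style patch returns a canned response here,
    # without ever reaching the urllib3 layer.
    return "mocked response"

response = mocked_send("POST /")
print(response, retries_performed)  # mocked response []
```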

@lsh-0

lsh-0 commented May 29, 2017

just encountered this myself

@cjw296

cjw296 commented Oct 19, 2017

me too, doesn't look like anyone from Sentry is monitoring this issue tracker though...

@kathawala
Contributor

I have a max-retries branch on my fork (https://github.com/kathawala/responses/tree/max-retries) which allows this to run without raising an exception:

import responses
import requests

exception = ConnectionError('Bad connection')
url = 'https://www.example.com'
with responses.RequestsMock(assert_all_requests_are_fired=False, max_retries=3) as rsp:
    rsp.add(responses.POST, url, body=exception)
    rsp.add(responses.POST, url, body=exception)
    rsp.add(responses.POST, url, body=exception)
    rsp.add(responses.POST, url, body='{}')
    requests.post(url=url, json={})

But what's the use case for this? I just want to know before I submit a PR; it's not clear to me how this could be helpful.

@cjw296

cjw296 commented Oct 27, 2017

@kathawala - I configured a max_retries as Retry() instance and I wanted to check it would work as I expected. Would your branch help with that?

@kathawala
Contributor

@cjw296 Hmm, I'm not exactly sure what you mean. What do you mean you "configured a max_retries as Retry() instance"? Code examples would help me understand :)

@cjw296

cjw296 commented Oct 30, 2017

here's the code I'd like to test:

from requests.adapters import HTTPAdapter
from requests.packages.urllib3 import Retry
...
        retry = Retry(
            total=5,
            status=3,
            status_forcelist=[404],
            backoff_factor=0.5,
        )
        adapter = HTTPAdapter(max_retries=retry)
        self.session.mount(self.base_url + ASYNC_CHECK_BASE_URL, adapter)

@dcramer
Member

dcramer commented Oct 30, 2017

I'm still confused as to what this is testing. To me it seems like this is testing "does requests retry correctly", which is not what this library should care about. If that's the case, and I'm not misunderstanding, I'd prefer we not add the extra functionality. If there is indeed some world where it makes sense to have adapters that fail/retry within a test suite, we can consider landing it, but right now this smells like a bad approach.

@kathawala
Contributor

I have the same gut feeling that this seems unnecessary and likely to cause problems later on. I would also like to know the use case under which this is desired functionality.

Anyways, to answer your question, the properties of retry other than total will be ignored on my branch, at the moment.

@cjw296

cjw296 commented Oct 31, 2017

@kathawala - the use case here is an async api that returns 404 until a result is ready. I don't particularly like the api, but I have no control over it.

@dcramer - max_retries, in particular when using a Retry object, has quite a lot of complicated configuration and so I'd like to have high level tests that verify I have the configuration correct.
In this case, I'd like to put in a set of mock responses for a URL, returning combinations of connection errors, 404s and 200s. I'd also mock out whatever Retry is using as its source of sleep. That way, I can check that the code retries-on-404 a max of 3 times with an incrementing delay between each request.

Responses has exactly the API that I'm looking for; it's just that when using it, all Retry behaviour is currently, understandably, ignored.

Put differently: how would you test that the Retry configuration above is behaving as required and expected?
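The incrementing delay cjw296 wants to verify follows urllib3's exponential backoff formula; here is a stdlib-only sketch of the schedule (the function name is illustrative, and the exact treatment of the first retry differs between urllib3 versions, some of which skip the delay before the first retry):

```python
# Sketch of urllib3-style exponential backoff: before the n-th retry the
# Retry object sleeps roughly backoff_factor * 2 ** (n - 1) seconds.
def backoff_schedule(backoff_factor, retries):
    return [backoff_factor * (2 ** (n - 1)) for n in range(1, retries + 1)]

print(backoff_schedule(0.5, 4))  # [0.5, 1.0, 2.0, 4.0]
```

A test that mocks Retry's sleep source could then assert that the recorded delays match this schedule.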

@dcramer
Member

dcramer commented Oct 31, 2017

@cjw296 you could test that you are calling the Retry API correctly, and lock it in (by proving it actually works against a real HTTP request). That could be as easy as mocking the Retry object that's called and asserting the calling params. I think it's important to understand that the goal of responses isn't to implement a full webserver, but to make simple HTTP tests easy (vs mocking out the responses on your own).

I'm ok if we implement retries, but it needs to be implemented per the full requests spec and pass through correctly. Given that it may end up being far too complex for this one-off request.
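One way to read dcramer's suggestion, as a minimal stdlib-only sketch: `build_retry` and its parameter values are hypothetical application code, and a Mock stands in for urllib3's Retry class.

```python
from unittest import mock

# Hypothetical application code under test: it constructs the retry policy.
def build_retry(retry_cls):
    return retry_cls(total=5, status=3, status_forcelist=[404], backoff_factor=0.5)

def test_retry_params():
    retry_cls = mock.Mock()  # stands in for urllib3's Retry class
    build_retry(retry_cls)
    # Assert the policy was constructed with the intended parameters.
    retry_cls.assert_called_once_with(
        total=5, status=3, status_forcelist=[404], backoff_factor=0.5)

test_retry_params()
```

As cjw296 replies, this is symmetric testing: it proves the parameters were passed, not that the resulting retry behaviour is what you want.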

@cjw296

cjw296 commented Oct 31, 2017

@dcramer - I don't think checking that you call the Retry API correctly is very useful: you end up with symmetric testing that doesn't check that the retrying you've configured behaves as you expect.

I'm glad the goal of responses isn't to implement a full webserver - I looked at pytest-httpbin for that, and the dependencies were pretty terrifying! Not to mention that backoff delays and the like would take their actual time, rather than being mocked out.

I agree that it would ideally be per the full requests spec, and may well end up being too complex for the demand so far. Would be great to keep this issue open for others who also want this to chime in...

@foxx

foxx commented Dec 28, 2017

Just ran into this issue myself; it's somewhat surprising that using a mocked response results in unexpected behaviour (i.e. expected requests functionality is effectively removed when using a mocked response).

@k4nar

k4nar commented Jan 3, 2018

Same here. And I think I have a valid use case:

I want to retry a query in some cases, and behave differently otherwise. For example I want to retry if I have a 503 and abort everything if the max number of retries was exceeded, or avoid the retry and fail gracefully if I have a 404.

In order to detect whether I was failing due to too many retries, I was catching MaxRetryError. But requests actually wraps this error, so that code was never called.
I couldn't see this behaviour in my tests because I wasn't able to simulate a situation where a request was raising a MaxRetryError.
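The wrapping k4nar describes can be sketched with stand-in exception classes (stdlib only; the real classes are urllib3.exceptions.MaxRetryError and requests.exceptions.RetryError):

```python
# Simulated exception hierarchy: requests catches urllib3's MaxRetryError
# internally and re-raises it as its own RetryError, so application code
# catching MaxRetryError directly never runs.
class MaxRetryError(Exception):   # plays urllib3.exceptions.MaxRetryError
    pass

class RetryError(Exception):      # plays requests.exceptions.RetryError
    pass

def send():
    raise MaxRetryError("too many 503s")

def session_get():
    try:
        send()
    except MaxRetryError as e:
        raise RetryError(e) from e  # requests wraps the urllib3 error

try:
    session_get()
except MaxRetryError:
    caught = "MaxRetryError"        # never reached
except RetryError:
    caught = "RetryError"

print(caught)  # RetryError
```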

@belthaZornv

Same here, trying to test a session with Retry.

@ghost

ghost commented Dec 30, 2019

Hi, is there any update on the status of this?

@markstory
Member

@smacpher-myst no.

@torbjorv

torbjorv commented Jun 5, 2020

I ended up with this...
@mock.patch('mypackage.myclient.MyClient._url')
def test_http_retry(self, mock_url):
    # given
    mock_url.return_value = 'https://httpbin.org/status/503'
    my_client = MyClient()

    # when/then
    self.assertRaisesRegex(
        RetryError,
        '.*Max retries exceeded.*',
        my_client.do_something
    )

It's far from perfect, but at least it verifies that retries were actually performed at all. If you need to verify the actual retry-config, that's probably better done directly against the http-adapter instance.
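Checking the retry config "directly against the http-adapter instance" can look like this stdlib-only sketch, where SimpleNamespace stands in for a mounted HTTPAdapter (the attribute names mirror urllib3.util.Retry; with a real session you would fetch the adapter via session.get_adapter(url)):

```python
from types import SimpleNamespace

# Stand-in for an HTTPAdapter whose max_retries attribute holds the
# configured Retry object (as requests' HTTPAdapter does).
adapter = SimpleNamespace(
    max_retries=SimpleNamespace(total=3, backoff_factor=0.5,
                                status_forcelist=[404]))

def assert_retry_config(adapter, total, backoff_factor, status_forcelist):
    retry = adapter.max_retries
    assert retry.total == total
    assert retry.backoff_factor == backoff_factor
    assert list(retry.status_forcelist) == list(status_forcelist)

assert_retry_config(adapter, total=3, backoff_factor=0.5,
                    status_forcelist=[404])
```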

@mazulo

mazulo commented Jun 15, 2020

So... I spent the last few days trying to achieve this with responses, read this and several other issues, and couldn't make it work. In the end httpretty was my solution, with the following code:

import json
import re

import httpretty
import requests

@httpretty.activate
def test_with_callback_response():
    def request_callback(request, uri, response_headers):
        return [500, response_headers, json.dumps({"hello": "world"})]

    # TimeoutHTTPAdapter is a custom adapter (not shown) configured with
    # max_retries=3
    http_adapter = TimeoutHTTPAdapter()
    session = requests.Session()
    session.mount('http://', adapter=http_adapter)
    session.mount('https://', adapter=http_adapter)

    httpretty.register_uri(
        httpretty.GET,
        re.compile(r'http://.*'),
        responses=[
            httpretty.Response(
                body='{"message": "HTTPretty :)"}',
                status=403,
            ),
            httpretty.Response(
                body='{"message": "HTTPretty :)"}',
                status=429,
            ),
            httpretty.Response(
                body='{"message": "HTTPretty :)"}',
                status=500,
            ),
        ]
    )

    try:
        response = session.get("http://example.co/status/")
    except Exception as e:
        print(len(httpretty.latest_requests()))

The line under except will print 4 (the original attempt plus three retries) because max_retries=3 on my TimeoutHTTPAdapter.

@pbelskiy

pbelskiy commented Nov 5, 2020

+1

Retries don't work with the responses lib, so I think it would at least be nice to document this limitation in the readme or docs.

#!/usr/local/bin/python3

import requests
import responses

from requests.packages.urllib3.util.retry import Retry


@responses.activate
def test():
    responses.add(responses.GET, 'https://google.com', body='Error', status=500)
    responses.add(responses.GET, 'https://google.com', body='OK', status=200)

    session = requests.Session()

    adapter = requests.adapters.HTTPAdapter(max_retries=Retry(
        total=2,
        backoff_factor=0.1,
        status_forcelist=[500],
        method_whitelist=['GET', 'POST', 'PATCH']
    ))
    session.mount('http://', adapter)
    session.mount('https://', adapter)

    r = session.get('https://google.com')
    print(r)


test()

ao-picterra added a commit to Picterra/picterra-python that referenced this issue Jan 4, 2021
We cannot use responses package to test timeouts and retries of
requests (getsentry/responses#135), so
we get rid of it and always use httpretty mocking.
@imjuanleonard

imjuanleonard commented Feb 19, 2021

Sadly, I encountered this issue as well.

I need to poll to check whether my cloud resources have been created, and need to test the case where a retry happens successfully.
Do you have any suggestions @markstory?

Edit: Found a workaround on Codementor; I patched it:

import types

import responses

def _find_match(self, request):
    for match in self._urls:
        if request.method == match['method'] and \
                self._has_url_match(match, request.url):
            return match

def _find_match_patched(self, request):
    for index, match in enumerate(self._urls):
        if request.method == match['method'] and \
                self._has_url_match(match, request.url):
            if request.method == "GET" and request.url == "/v1/models/1/versions/1/jobs/1":
                # Pop the matched entry so the next call falls through to the
                # next registered response, simulating a retry sequence.
                return self._urls.pop(index)
            return match

responses._find_match = types.MethodType(_find_match_patched, responses)
...
# For example, for 5 retries, stack five responses.add() calls; they are
# consumed one by one because matched entries are popped.
responses.add()
responses.add()
responses.add()
responses.add()
responses.add()
...
# Remember to unpatch afterwards
responses._find_match = types.MethodType(_find_match, responses)

@Forty-Bot

This is not closed by #477.

@beliaev-maksim
Collaborator

beliaev-maksim commented Apr 7, 2022

@Forty-Bot
please go ahead and test https://github.com/beliaev-maksim/responses/tree/mbeliaev/max_retry

see docs: https://github.com/beliaev-maksim/responses/tree/mbeliaev/max_retry#validate-retry-mechanism

@Forty-Bot

Forty-Bot commented Apr 7, 2022

Does not appear to work. My test is something like

import pytest, requests, requests.adapters, responses, urllib3.util

@responses.activate(registry=responses.registries.OrderedRegistry)
def test_retry():
    responses.add(responses.GET, url="https://foo.com/bar", status=429)
    responses.add(responses.GET, url="https://foo.com/bar", status=429)
    responses.add(responses.GET, url="https://foo.com/bar", status=429)
    responses.add(responses.GET, url="https://foo.com/bar", status=200)
    responses.add(responses.GET, url="https://foo.com/baz", status=429)
    responses.add(responses.GET, url="https://foo.com/baz", status=429)
    responses.add(responses.GET, url="https://foo.com/baz", status=429)
    responses.add(responses.GET, url="https://foo.com/baz", status=429)

    s = requests.Session()
    retries = urllib3.util.Retry(total=4, backoff_factor=0.1,
                                 status_forcelist=(requests.codes.too_many,))
    s.mount("https://", requests.adapters.HTTPAdapter(max_retries=retries))

    s.get("https://foo.com/bar").raise_for_status()

    with pytest.raises(Exception):
        s.get("https://foo.com/baz").raise_for_status()

    assert len(responses.calls) == 8

It seems very similar to your example, so I'm not sure what the issue is. Perhaps you're not completely mocking the raise_for_status() mechanism?

edit: I was on the wrong branch. Everything above passes, except I only see 6 calls instead of 8. I will look into it later.

@beliaev-maksim
Collaborator

@Forty-Bot
I made a modification. Install from the same branch again; make sure to uninstall first.

@Forty-Bot

OK, that fixes it. Thanks!
