
[REFACTOR] Split test classes into two: TestXXXNoRequest & TestXXXRequest #3407

Closed
harshil21 opened this issue Dec 3, 2022 · 8 comments · Fixed by #3426

Comments

@harshil21
Member

Currently each of our test files has about one TestClass, which tests everything about the class together. This is not optimal because:

  1. Some tests in the class make actual requests to Telegram, thus slowing down development if you've made substantial changes to the class, but don't actually want to test the request part yet.
  2. Splitting them up would also make the tests slightly more readable, more bearable, and just a bit more organized.

Implementation

For example, test_sticker.py should now look like:

class StickerSpace:  # namespace class
    emoji = "💪🏽"
    ...

@pytest.mark.no_req  # for faster development cycle
class TestStickerNoRequest:
    def test_de_json(...):
        ...

    def test_a_mock_request(...):
        ...

@pytest.mark.req  # don't think we need this, but let me know if you'd like this
@pytest.mark.flaky(3, 1)
class TestStickerRequest:
    async def test_custom_emoji(...):
        await bot.get_sticker_set(...)

Along with this change, I'll also try to simplify/shorten some tests, if possible.
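
For illustration, here is a minimal sketch of how such markers could be registered so pytest recognizes them. The marker names just follow the example above; nothing in it is settled implementation:

# conftest.py -- sketch only; marker names follow the example above
def pytest_configure(config):
    # Register the custom markers so pytest doesn't emit "unknown mark" warnings.
    config.addinivalue_line("markers", "no_req: tests that never talk to the Bot API")
    config.addinivalue_line("markers", "req: tests that make real requests to the Bot API")

Selecting tests would then rely on pytest's built-in marker expressions, e.g. pytest -m no_req during development or pytest -m "not req" for a quick local run.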

@Bibo-Joshi
Member

Interesting idea. For "substantial changes but not testing the request part" you probably mean something like first testing that the immutability logic in #3249 works on an "internal" level before checking if it introduces any issues with the Bot API?

The downsides I see of splitting everything up into several classes are

  • probably tedious work to split everything up
  • While writing tests one needs to remember what goes where
  • Increasing scrolling time

An additional solution that came to my mind would be to monkeypatch HTTPXRequest.do_post to pytest.xfail("Not running tests that make a request"). I guess it should be possible somehow to make that patch dependent on some flag that can be passed to pytest.
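
Roughly, such a flag-dependent patch could look like the sketch below. The --skip-requests option name is made up, and do_post is simply the method name mentioned above; none of this is an actual implementation.

# conftest.py -- rough sketch of the idea above; the --skip-requests option is hypothetical
import pytest

from telegram.request import HTTPXRequest


def pytest_addoption(parser):
    parser.addoption(
        "--skip-requests",
        action="store_true",
        help="xfail every test that would hit the Bot API",
    )


@pytest.fixture(autouse=True)
def _maybe_block_requests(request, monkeypatch):
    # Only activate the patch when the flag was passed on the command line.
    if not request.config.getoption("--skip-requests"):
        return

    def fail(*args, **kwargs):
        pytest.xfail("Not running tests that make a request")

    # Patch the method named above so any attempted request xfails instead of going out.
    monkeypatch.setattr(HTTPXRequest, "do_post", fail, raising=False)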

@harshil21
Member Author

Interesting idea. For "substantial changes but not testing the request part" you probably mean something like first testing that the immutability logic in #3249 works on an "internal" level before checking if it introduces any issues with the Bot API?

It can also be helpful in that case, yeah. Sometimes you want to test a couple of (non-request) components together, but since they are mixed with longer-running request tests, it slows down development time. The current solution to this, adding a @pytest.mark.dev to the tests you want to run separately, also increases development time (because you have to add it and then remember to remove it) and negates its actual purpose.

  • probably tedious work to split everything up

It's mostly a one-time thing, and since the tests don't depend on each other, it shouldn't cause issues.

  • While writing tests one needs to remember what goes where

The idea is when you open the file, it becomes instantly clear what should go where. I'll try to maximize this as much as possible.

  • Increasing scrolling time

I'd say it would decrease scrolling time. For example, in a new API update you currently have to scroll everywhere to modify existing tests (especially when you don't know exactly what to change). This mostly happens because the tests aren't ordered. E.g. test_eq is usually at the end of the file, but test_de_json is usually at the beginning.

An additional solution that came to my mind would be to monkeypatch HTTPXRequest.do_post to pytest.xfail("Not running tests that make a request"). I guess it should be possible somehow to make that patch dependent on some flag that can be passed to pytest.

Maybe that could work as well, but it violates point 2 in the OP. It's already daunting to figure out how the tests work, and this would introduce some hidden logic.

@Bibo-Joshi
Member

I'd say it would decrease scrolling time. For example, in a new API update you currently have to scroll everywhere to modify existing tests (especially when you don't know exactly what to change). This mostly happens because the tests aren't ordered. E.g. test_eq is usually at the end of the file, but test_de_json is usually at the beginning.

Okay, this probably depends on the specific task. If I'm adding a new class, I imagine that sorting by request/no-request leads to me scrolling up & down quite a bit b/c I'm not necessarily adding the tests in that order. I may be proven wrong :)

Maybe that could work as well, but it violates point 2 in the OP.

Point 2 could be achieved at least partially by sorting the tests within the same class though, right? We could add a notable comment break between the two groups, something like

# <---------------------------------------------------------------------------->
# Tests Below this line make requests to the Bot API
# <---------------------------------------------------------------------------->

Though I agree that skipping all the flaky decorators would indeed be cool :D

It's daunting right now to check how the tests work, and this would introduce some hidden logic.

I'm not sure if I correctly understand what you mean by the first part of the sentence.
For the hidden logic, I must say that for anyone who hasn't spent a lot of time with pytest, @pytest.mark.{no_req, req, dev} are IMO only a little more explicit. Either way you'll have to pass a parameter to pytest, and just by seeing the decorator @pytest.mark.no_req, I don't think it's obvious what that's for and how it can be used.


Another general question/comment: This would only make sense for the classes in the telegram module, right? B/c for tg.ext we mostly don't make any requests at all, and if we do (I think we do at least a few times in testing Application), the main intent is not to test the API functionality, but how it integrates with ext 🤔

@harshil21
Member Author

I'm not sure if I correctly understand what you mean by the first part of the sentence.

I mean that new users who are not too familiar with pytest have a hard time understanding what's going on, so some explicitness would help them.

just by seeing the decorator @pytest.mark.no_req, I don't think it's obvious what that's for and how it can be used.

I'll also add a short comment next to each of these explaining how to use them.

Another general question/comment: This would only make sense for the classes in the telegram module, right?

yep.

@Bibo-Joshi
Member

just by seeing the decorator @pytest.mark.no_req, I don't think it's obvious what that's for and how it can be used.

I'll also add a short comment next to each of these explaining how to use them.

I wouldn't like that tbh. The comments would have to be repeated many times and may easily be forgotten when writing new tests. Plus, it's only really helpful once for each new person looking into the tests (which isn't that often).
What would you think instead about adding a README to the tests directory that gathers some useful insights into how the tests work and what goes where? We could also make it a wiki page (for easier editing) and have the README point there. The contrib guide could also point to that file.


All in all, I'll vote +1 on adding an option to exclude request-based tests (as I can see the benefit of that) and -0 on splitting the tests into different classes for it (as I'm not entirely convinced of the benefit).

I'd like to hear the opinion of the other team members before any PR is opened for this.

@harshil21
Member Author

What would you think instead about adding a README to the tests directory that gathers some useful insights into how the tests work and what goes where? We could also make it a wiki page (for easier editing) and have the README point there. The contrib guide could also point to that file.

Yeah that's a better idea, I like it

@Poolitzer
Member

I really like the approach of having a request and a non-request class, to be fair.

I can see the point about this being a lot of work to do for all tests at once. I don't see the point about more scrolling (I have to scroll a lot now anyway, and I usually do all the non-request parts first before the others for new types in API updates), nor about "needing to remember": right now I need to remember to add flaky, and nothing warns me if I forget it. Having this split into two classes with one decorator will decrease repetition and make it way easier to catch when someone puts a request test where it doesn't belong.

+1 from me for implementing this.

@lemontree210
Member

What would you instead think of adding a README to the tests directory that gathers some useful insights on how the tests work and what goes where? We could also make it a wiki page (for easier editing) and have the readme point there. The contrib guide could also point to that file.

I like this idea. It's funny coming from me, because I have never looked for a README in a tests dir before 😆 But it could work, especially if there is a link from the contrib guide.
