
Add test standard to pull requests #311

Open
jaron-l opened this issue Oct 13, 2021 · 10 comments

@jaron-l
Contributor

jaron-l commented Oct 13, 2021

@mrooney this issue is mostly just a discussion ground for how we might want to proceed with this.

When reviewing some of @burkematthew's recent pull requests, I find that I am frequently requesting that pull requests include some sort of unit test to verify functionality. I know testing can be a hard thing for those who are unfamiliar with it, so I want to discuss what kind of standard we want to use for this repository to make it easier for contributors to know how to write tests and what kind of tests we are looking for.

There are two main reasons that I want unit tests to verify functionality:

  1. Helps maintain stability for the package as tests can catch regressions in the code base.
  2. Makes merging pull requests a lot easier, as I don't necessarily need to pull the new code and test it myself before I feel comfortable approving it. If the test verifies that the new feature works in roughly the same way I would verify it manually, then I feel comfortable trusting the test. This reduces the burden on maintainers and makes merging pull requests faster.

If we want to set a standard that pull requests need to have unit tests, we should probably document our expectations somewhere that contributors will see it. The most obvious place would be to put it in a CONTRIBUTING.md file. Another good place would be to create a pull request template with a checkbox task.
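To make the template idea concrete, a sketch of what a `.github/PULL_REQUEST_TEMPLATE.md` with a checkbox task might look like (the filename follows GitHub's convention; the wording and checklist items are only a starting point):

```markdown
## Description

<!-- What does this change do and why? -->

## Checklist

- [ ] Unit tests added or updated for the new/changed functionality
- [ ] All existing tests pass locally
- [ ] CONTRIBUTING.md guidelines followed
```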

It might be good to document some general guidelines for these tests. One such guideline that immediately comes to mind is that unit tests should test the data manipulation or API layer of mintapi and not the sign-in workflow. Adding tests to the sign-in workflow would make those tests non-deterministic: we don't actually control the sign-in workflow, and the code can be taken down different paths in a non-repeatable manner. That would cause tests to fail on pull requests for reasons that have nothing to do with the pull request itself.
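As a rough example of the kind of test I have in mind (a sketch only; `filter_active_accounts` is a made-up helper standing in for whatever data-manipulation code a PR actually touches):

```python
import unittest


def filter_active_accounts(accounts):
    """Hypothetical helper: keep only accounts Mint reports as active."""
    return [account for account in accounts if account.get("isActive")]


class FilterActiveAccountsTest(unittest.TestCase):
    def test_inactive_accounts_are_dropped(self):
        # Canned data only; no network access and no sign-in workflow involved.
        raw = [
            {"name": "Checking", "isActive": True},
            {"name": "Old Savings", "isActive": False},
        ]
        result = filter_active_accounts(raw)
        self.assertEqual([account["name"] for account in result], ["Checking"])


if __name__ == "__main__":
    unittest.main()
```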

If you have any thoughts, comment them on this issue and I can work on drafting up a PR if needed.

@mrooney
Collaborator

mrooney commented Oct 13, 2021

I agree! I think it will be a pain at first because it isn't written in the most testable way, but that can be rapidly improved. We should probably tackle #312 first so there's less to migrate. I'm happy to work on that as well once the configargparser is merged.

@ScottG489

Do you have any thoughts on testing methods to prevent regressions due to changes by mint that are outside of our control?

@jaron-l
Contributor Author

jaron-l commented Oct 14, 2021

@ScottG489, that's what I was referencing in my idea about general guidelines. I don't think the sign-in workflow, which is where almost all of the changes by Mint have affected us, should be tested in CI. It would probably be good to create a set of manual tests, run while developing the sign-in workflow, that could help with debugging, but I think automatic testing there would be more of a hindrance and would provide little value due to being non-deterministic.

You can actually see that I started on this a little while ago. I provided a way for some tests to be executed manually when test_args.json is provided, while being ignored by CI. That might be a good place to extend tests when working on the sign-in workflow, though what we would ask of contributors by default is the automatic tests described above.
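For reference, the pattern is roughly like the sketch below (not the exact code in the repo): the test class skips itself when test_args.json is absent, so CI ignores it while a developer with real credentials can still run it locally.

```python
import json
import os
import unittest

ARGS_FILE = "test_args.json"


@unittest.skipUnless(os.path.exists(ARGS_FILE), "test_args.json not provided")
class ManualSignInTest(unittest.TestCase):
    def setUp(self):
        # Only runs when a developer supplies credentials locally.
        with open(ARGS_FILE) as f:
            self.args = json.load(f)

    def test_sign_in(self):
        # Exercise the sign-in workflow manually; intentionally not run in CI
        # because the workflow is non-deterministic.
        ...
```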

@mrooney
Collaborator

mrooney commented Oct 14, 2021

I agree that running on every build is probably not the right approach, but it does raise a good point that integration tests for the most common issues (login) would be nice, especially since they don't require any banking data. We could certainly create an empty testing account on Mint with no accounts hooked up, put the credentials in GitHub Secrets on the repo, and have some kind of nightly action use them to at least verify that login works.
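Roughly something like this sketch of a scheduled workflow; the secret names and the scripts/login_check.py entry point are placeholders for whatever we actually add:

```yaml
# .github/workflows/nightly-login-check.yml (sketch)
name: Nightly login check
on:
  schedule:
    - cron: "0 6 * * *"  # once a day
jobs:
  login:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - uses: actions/setup-python@v2
        with:
          python-version: "3.9"
      - run: pip install -e .
      - name: Verify login works
        env:
          MINT_EMAIL: ${{ secrets.MINT_EMAIL }}
          MINT_PASSWORD: ${{ secrets.MINT_PASSWORD }}
        # Hypothetical script that attempts a login and exits non-zero on failure.
        run: python scripts/login_check.py
```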

@ScottG489

ScottG489 commented Oct 14, 2021

One big hurdle that comes to mind is the unpredictability of the login sequence. It seems like it would be hard or impossible to test every sequence, much less every screen. Even if we did write something to automatically update the test user's login settings, that doesn't seem to reliably reflect what login prompts users actually get.

I know I've personally experienced, and seen others mention, opting for SMS MFA and still receiving email login codes. I've also experienced a number of other inconsistencies that I mention in this comment:

I've seen all kinds of combinations of login screens. I've been prompted for a token before email and after email, passwords before and after email prompts, and sometimes account selection prompts; all without me having touched my settings at all.

More on topic for this issue: do you think laying some groundwork for testing would be a prerequisite? For instance, making the code more unit-testable and having at least a few basic unit tests for existing functionality that PR submitters could use as a frame of reference (perhaps the existing tests already cover this, though).

@burkematthew
Collaborator

I think building out a CONTRIBUTING.md would be a great idea and something that would've saved y'all some time as you were/are reviewing my first contributions (and those of any new contributors).

One thing that could help as well is a review of all the current tests to see what, if anything, is missing. That way new contributors have more examples to go off of, and it helps isolate problems when they arise. I'll admit I thought the tests weren't necessary to include with the PRs because there aren't that many tests at the moment. But I certainly know now.

(Also, I'll be getting to the tests on the outstanding PRs soon. Started a new job this week, so it's been a little hectic.)

@jaron-l
Contributor Author

jaron-l commented Oct 15, 2021

@ScottG489 I think you are highlighting exactly why the login workflow should not be our primary focus when asking for or building more tests. The login workflow space (all the possible ways a user could log in) is neither predictable nor repeatable. If we cannot repeat workflows, then we cannot use tests to show that a PR or the current code doesn't break the login workflow. So how would our tests differ from users doing the same thing and reporting errors when they see them? Would we create a massive network of dummy users with different settings constantly testing logins to simulate a real set of users? And require that they pass for a period of time? That seems to just simulate what we already have naturally in our users.

I do agree that we should create a few issues/PRs to address the lack of unit testing; however, this is sort of a chicken-and-egg thing. Do we write standards first so we can write tests against them, or do we write tests first and derive standards from them for others to follow? I think we should just go ahead and start a minimal CONTRIBUTING.md and follow up with test PRs that try to follow the standards.

@burkematthew, I think what you reference will naturally come about as we write tests. A common way of analyzing what tests are missing is calculating code coverage and publishing it as a badge. We could also set a standard for the coverage required of PRs. However, I don't think this work should hold up writing test standards; it simply helps us identify what tests to write.
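If we do adopt a coverage standard, pytest-cov (assuming that's the tooling we end up with) makes a minimum threshold easy to enforce locally and in CI, for example:

```sh
pytest --cov=mintapi --cov-report=term-missing --cov-fail-under=70
```

The 70 here is just a placeholder number for discussion.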

@burkematthew
Collaborator

Part of this discussion/CONTRIBUTING.md should include explicit instructions about integration tests using test_args.json. I can certainly work through those details, but I want us to discuss the proper procedure for using the integration tests and how we can better incorporate them into the CI tests.

@burkematthew
Collaborator

@jaron-l how do we want to move forward here? I want to set up integration with Codecov and implement it as part of our GitHub Actions flow, which I think will play in nicely here since we can require certain benchmarks in terms of coverage deltas.

I can get Codecov implemented first and then we can build out these standards?
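For the coverage-delta benchmarks, the usual mechanism would be a codecov.yml along these lines (a sketch; the numbers are placeholders we'd need to agree on):

```yaml
# codecov.yml (sketch)
coverage:
  status:
    project:
      default:
        target: auto     # compare against the base branch
        threshold: 1%    # allow at most a 1% drop in overall coverage
    patch:
      default:
        target: 80%      # new or changed lines should be mostly covered
```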

@mrooney
Collaborator

mrooney commented Dec 13, 2021

@burkematthew it seems great to add the features in GitHub, and then we can certainly add some check that doesn't let coverage get lower on new PRs?
