
Generate mock data for tests from specific repositories #11

Open · wants to merge 1 commit into base: main

Conversation

sudiptog81 (Contributor)

Closes #7.

Fetches the last PR status for the repos specified in test/repos.json, determines the expected test results, and saves the responses to test/mocks.json. Added a fetch-mocks script to fetch those responses the first time; on subsequent runs, the evaluation subroutine can be tested against the saved responses.

package.json (outdated comment, resolved)

Comment on lines +4 to +6:

```js
if (process.env.NODE_ENV === 'TEST') {
  return;
}
```

ljharb (Owner):

why do we want to skip logging in tests, as opposed to passing in the logger (and mocking that in tests)?

sudiptog81 (Contributor, Author):

the log messages are polluting the test results, with messages like "Some checks are pending" popping up in between. Or we could redirect the stream too?

ljharb (Owner):

I'm thinking that the logger function can be passed down from the top level and defaulted to console.log, but that tests can pass something else
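The suggestion above might be sketched like this; `evaluateChecks` and its check shape are hypothetical, and only the injection pattern (a logger parameter defaulting to console.log) is the point:

```javascript
// Hypothetical sketch of logger injection: production callers get
// console.log by default; tests pass a no-op or capturing stub, so
// log messages never pollute test output.
function evaluateChecks(checks, logger = console.log) {
  const pending = checks.filter((check) => check.state === 'PENDING');
  if (pending.length > 0) {
    logger('Some checks are pending'); // silenced in tests via a stub logger
  }
  return pending.length === 0;
}
```

A test can then pass `(msg) => logs.push(msg)` and assert on both the return value and the captured messages.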

sudiptog81 (Contributor, Author):

yes! that sounds good and makes sense! will do the changes.

@Green-Ranger11 (Collaborator)

I will be jumping into this as well, but there is one mock that's going to be hard to generate: the watch mocks. Should I do everything else and skip that one?

@ljharb (Owner)

ljharb commented Apr 22, 2022

Sure, let’s start with that :-)

@Green-Ranger11 (Collaborator)

So currently, we fetch from the GitHub API and run the relevant tests on those responses.

  1. Get the latest pull request from 5 separate repositories.
  2. Compare evaluatePullRequest to statusCheckRollUp.state on those responses; if they're the same, then evaluatePullRequest passes.
  3. Run the other tests such as filterPullRequest, parsePullRequest, evaluateCommitStatus on those responses.
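Step 2 could be sketched roughly as follows; the body of `evaluatePullRequest` and the PR shape here are assumptions for illustration, not the repo's actual code:

```javascript
// Hypothetical sketch of step 2: our evaluation "passes" when it
// agrees with the state GitHub itself reported in statusCheckRollUp.
// This evaluatePullRequest body is a stand-in, not the real one.
function evaluatePullRequest(pr) {
  const states = pr.checks.map((check) => check.state);
  if (states.some((s) => s === 'FAILURE')) return 'FAILURE';
  if (states.some((s) => s === 'PENDING')) return 'PENDING';
  return 'SUCCESS';
}

function agreesWithRollup(pr) {
  // The test asserts agreement with GitHub's own rolled-up state.
  return evaluatePullRequest(pr) === pr.statusCheckRollUp.state;
}
```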

Some issues I have with this implementation:

  1. We don't have full control over the statuses. For example, we won't be able to test all possible pull request statuses, since only the latest PR is being returned from the 5 repositories we have chosen.
  2. We are getting the response and then checking it with our own implementation, so essentially we don't know the actual status and only check it when it's returned. It might be better to know the actual status (as you see on github.com) beforehand and then run our tests against that, so we will know whether our implementation is actually valid.

This is why I would like to propose that we set up a dummy repo with known statuses (covering as many as possible) which we can test against. This will also help validate our implementation, because we already know beforehand what the status is, instead of checking it only when the response is returned.

@ljharb (Owner)

ljharb commented Apr 26, 2022

I think a dummy repo makes perfect sense. Want to make it and set it up, and then later you can transfer it to me so it can live alongside this one?

@Green-Ranger11 (Collaborator)

Awesome that sounds like a plan!

@Green-Ranger11 (Collaborator)

One thing I came across, which may be a non-issue, is that if the user doesn't have an internet connection, they may not be able to fetch the mocks and run the tests.

Maybe we should leave the current mocks in as defaults, and then users who want the latest can fetch them?
We could also have a script or GitHub Action that periodically updates the mocks.

@Green-Ranger11 (Collaborator)

Or maybe it can just be left as is, because the tool relies on having an internet connection anyway?

@ljharb (Owner)

ljharb commented May 13, 2022

My expectation is that the mocks should be committed to git, and updateable only manually by an explicit command; that way, no internet connection is required to run tests.

Development

Successfully merging this pull request may close these issues.

Add script to generate mock responses for tests
3 participants