
Use an anonymized two-year snapshot of an HGR dataset for testing #7155

Open
jniles opened this issue Jul 20, 2023 · 0 comments

Currently, our tests have three big limitations:

  1. They must be executed serially (they cannot run in parallel). This makes the test suite slow and brittle: if a programmer changes one module, they may spend a long time fixing tests to get them all passing again, since a failure can take 20+ minutes to appear, and the creation tests alter the data that the read tests depend on.
  2. They only exercise a minimum of the functionality that BHIMA offers. We do not really test balances, either monetary or stock, because if a developer changes or adds a test upstream, they end up fixing tests downstream forever.
  3. They do not represent a live system because the test database contains so few records. In a real hospital, there would be thousands of transactions. The tests therefore do not reflect the reality they are supposed to exercise, and we still have to do manual follow-ups to find performance regressions.

In light of all this, I propose that we rewrite our tests to be based on a data snapshot pulled from a hospital that has been using BHIMA long term. We would anonymize personal information (patient names, personalized account names, employees, etc.), and take a two-year snapshot that we can manually massage to ensure it doesn't contain aberrant data (like negative stock balances). Then we could transition all our tests to perform the read tests first, and the create tests on a subsequent fiscal year.
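
As a rough sketch of what the anonymization pass might look like, assuming a MySQL snapshot and BHIMA-like table and column names (the `patient`/`employee` tables and their columns below are assumptions, not a statement of the real schema), something along these lines could be run over the dump before committing it:

```sql
-- Illustrative anonymization pass over the snapshot database.
-- Table and column names are assumptions and would need to be adapted
-- to the actual BHIMA schema before use.

-- Replace patient names with a stable placeholder derived from each row's
-- primary key, so references elsewhere in the snapshot stay consistent.
UPDATE patient
  SET display_name = CONCAT('Patient ', HEX(uuid));

-- Anonymize employee-facing labels the same way.
UPDATE employee
  SET reference = CONCAT('EMP-', HEX(uuid));

-- Drop contact details that are not needed by the tests at all.
UPDATE patient
  SET phone = NULL,
      email = NULL;
```

The useful property here is that the replacement is deterministic, so the anonymized snapshot stays internally consistent across tables and across re-generations of the fixture.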

I think this would be enough to give us more confidence in our tests and ensure that we catch when our data structures change.

Note that this would need to happen after #7024 is merged for best results.
