
Implemented test for BZ1390833 #4986

Merged · 2 commits into SatelliteQE:master · Jul 27, 2017
Conversation

@abalakh (Contributor) commented Jul 18, 2017

https://bugzilla.redhat.com/show_bug.cgi?id=1390833

py.test -v tests/foreman/ui/test_usergroup.py -k test_positive_update_org_with_admin_perms
============================= test session starts ==============================
platform darwin -- Python 2.7.13, pytest-3.0.7, py-1.4.34, pluggy-0.4.0 -- /Users/andrii/workspace/env/bin/python
cachedir: .cache
rootdir: /Users/andrii/workspace/robottelo, inifile:
plugins: xdist-1.18.1, services-1.2.1, mock-1.6.0, cov-2.5.1
collected 8 items
2017-07-18 18:07:22 - conftest - DEBUG - Deselect of WONTFIX BZs is disabled in settings


tests/foreman/ui/test_usergroup.py::UserGroupTestCase::test_positive_update_org_with_admin_perms PASSED

============================== 7 tests deselected ==============================
=================== 1 passed, 7 deselected in 53.89 seconds ====================

@abalakh added the 6.3, QETestCoverage, and UI labels on Jul 18, 2017
@abalakh abalakh self-assigned this Jul 18, 2017
@@ -1047,7 +1047,8 @@
"users.current_password": (By.ID, "user_current_password"),
"users.password": (By.ID, "user_password"),
"users.password_confirmation": (By.ID, "user_password_confirmation"),
"users.user": (By.XPATH, "//a[contains(., '%s')]"),
@abalakh (Contributor, Author) commented on the diff:

If the user tries to search for himself on the Users page, this locator will return 2 elements; the first one is the username at the top right corner (the user's menu).
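The ambiguity can be illustrated with a minimal stdlib sketch (the markup and the `users_table` id below are hypothetical, not the actual Satellite DOM; the real fix may use a different scope):

```python
import xml.etree.ElementTree as ET

# Hypothetical page: both the account menu and the users table
# contain a link whose text is the username.
page = ET.fromstring("""
<html><body>
  <div id="account_menu"><a>admin</a></div>
  <table id="users_table"><tr><td><a>admin</a></td></tr></table>
</body></html>""")

# The broad search (analogous to //a[contains(., '%s')]) matches both links:
all_links = [a for a in page.iter("a") if "admin" in "".join(a.itertext())]
assert len(all_links) == 2

# Scoping the search to the users table leaves only the intended element:
table = page.find(".//table[@id='users_table']")
scoped = [a for a in table.iter("a") if "admin" in "".join(a.itertext())]
assert len(scoped) == 1
```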

@codecov bot commented Jul 18, 2017

Codecov Report

Merging #4986 into master will not change coverage.
The diff coverage is n/a.


@@           Coverage Diff           @@
##           master    #4986   +/-   ##
=======================================
  Coverage   40.05%   40.05%           
=======================================
  Files          88       88           
  Lines        7136     7136           
=======================================
  Hits         2858     2858           
  Misses       4278     4278
Impacted Files Coverage Δ
robottelo/ui/locators/base.py 100% <ø> (ø) ⬆️

Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Last update acb712b...5c78c0a.

@oshtaier (Contributor) commented:

Add a similar test case for locations to have more or less good coverage of the described scenario.

@pondrejk (Contributor) left a comment:

ack

@abalakh (Contributor, Author) commented Jul 19, 2017

@oshtaier addressed.

py.test -v tests/foreman/ui/test_usergroup.py -k test_positive_update_loc_with_admin_perms
============================= test session starts ==============================
platform darwin -- Python 2.7.13, pytest-3.0.7, py-1.4.34, pluggy-0.4.0 -- /Users/andrii/workspace/env/bin/python
cachedir: .cache
rootdir: /Users/andrii/workspace/robottelo, inifile:
plugins: xdist-1.18.1, services-1.2.1, mock-1.6.0, cov-2.5.1
collected 9 items
2017-07-19 15:53:45 - conftest - DEBUG - Deselect of WONTFIX BZs is disabled in settings


tests/foreman/ui/test_usergroup.py::UserGroupTestCase::test_positive_update_loc_with_admin_perms PASSED

============================== 8 tests deselected ==============================
=================== 1 passed, 8 deselected in 51.47 seconds ====================

@svtkachenko (Contributor) left a comment:

ACK

@svtkachenko (Contributor) commented:

Waiting for @oshtaier to comment and merge :)

@rplevka (Member) commented Jul 19, 2017

You'll hate me for this, but I'll try (treat it as a non-blocker, just a suggestion ;)):
could we possibly merge these into a single scenario called something like test_positive_update_taxonomy_with_admin_perms and have the org and loc variants covered in the form of subtests?
We are starting to accumulate quite a large volume of similar tests which perform almost identical steps with small differences.
This way, we spend 2 minutes of compute time. If we merged them, we might be able to get that down quite a bit (with a smaller footprint on the Satellite side (orgs, users)).

We could also bring the test duration down to much more reasonable values if we used the CLI for this scenario, with a much greater impact (API+CLI tested at the same time). I know, in an ideal world it would be great to test everything with everything everywhere, but that's simply not possible here. I believe that with the automation times approaching 20 hours we should start to think about how to optimize stuff (make smart compromises).

For scenarios without such a high impact, we might be fine with only a single endpoint (the most effective one, unless it's really something UI-related).

I'm talking about all the test cases, not just this particular one.
Thanks!

..discuss ;P

@renzon (Contributor) commented Jul 19, 2017

@rplevka:

TL;DR: I agree with you, but I see some problems with subtest + pytest the way we use them today.

I believe the test is formed by three parts:

  1. Setup
  2. Processing
  3. Assertion

For the first, I believe we should use the CLI or API for speed, since it is not directly related to the test itself. So I agree with you. And this should be in the setUp and tearDown methods.

I also agree with you about using subtest for merely different combinations. But the issue here is the framework. @jyejare found a bug that would have a negative impact on creating subtests: pytest-dev/pytest#1367; i.e., they don't work with pytest.

On the other hand, pytest-style code has a lot of tools that would fit better in this case. Instead of factories, we could have fixtures. Instead of those test cases with several combinations, we can use parametrize.
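A parametrized version of the two taxonomy tests could look roughly like this (a sketch only; the `update_taxonomy` helper and its return values are hypothetical stand-ins, not robottelo code):

```python
import pytest

def update_taxonomy(taxonomy):
    """Hypothetical stand-in for the real UI steps that update an
    org or loc through a user group with admin permissions."""
    return {"org": "new-org", "loc": "new-loc"}[taxonomy]

# One test function covers both variants; pytest reports each
# parameter as a separate test case.
@pytest.mark.parametrize("taxonomy", ["org", "loc"])
def test_positive_update_taxonomy_with_admin_perms(taxonomy):
    assert update_taxonomy(taxonomy).startswith("new-")
```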

In fact I am using the pytest "style" on personal projects to get used to it, checking pros and cons. I have put "do some refactoring on some tests using pytest" on my list, to show it in a Friday meeting discussion. It is also a compass goal for me.

My 2 cents ;)

@rplevka (Member) commented Jul 19, 2017

Thanks, @renzon!
Yes, okay, the subtest might not be the best example; however, my point was that we should aggregate and leave as small a footprint as possible. I personally would be fine with having both variants (referring to this PR) tested in a single test case, one after another. I wouldn't really care. I know we would need to investigate which one really failed, but such risk/effort is so small that it's really not worth spending an additional org, 2 locs, and 1 user just to have them covered separately. Too much overhead for a little gain.

@elyezer (Contributor) commented Jul 19, 2017

Tools like subTest (or pytest parametrize) are meant to be used in the scenario brought up by @rplevka, and yes, we should (if not must) take advantage of such tools.

We try to keep test cases as isolated as possible, but there are times when we can inherit the same setUp and test many things, so I agree that in cases like this we should reuse the setUp.
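The setUp-reuse idea can be sketched with unittest's subTest (the entities here are hypothetical; as noted above, the stock pytest runner does not fully support subTest, so this is the unittest-runner shape of the idea):

```python
import unittest

class TaxonomyUpdateTestCase(unittest.TestCase):
    """Sketch: one shared setUp, both taxonomy variants as subtests."""

    def setUp(self):
        # Expensive shared fixture, created once per test method
        # instead of once per variant.
        self.user_group = {"name": "qa-group", "admin": True}

    def test_update_taxonomies(self):
        for taxonomy in ("organization", "location"):
            # Each subTest is reported separately on failure, but
            # shares the fixture created in setUp.
            with self.subTest(taxonomy=taxonomy):
                self.user_group[taxonomy] = "updated"
                self.assertEqual(self.user_group[taxonomy], "updated")
```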

@abalakh (Contributor, Author) commented Jul 21, 2017

@rplevka

org, loc variants covered in a form of the subtests
it's really not worth spending additional org and 2 locs

The first test creates 1 org and re-uses an existing one; no locs were used. The second test re-uses an existing org and creates 2 locs. So there's no duplication in orgs/locs already, and using subtest won't decrease the number of existing orgs/locs. Also, as the scenarios are a bit different (different sets of parameters and attributes, and different amounts of them), an attempt to combine them in a single loop for subtest will most likely look a bit ugly, and I'm not sure what the benefit is, as subtest is not supported and does not work with py.test (the corresponding issue is 1.5 years old and no one is working on it).

For the scenarios with not such a high impact, we might be fine with only a single endpoint (the most effective one, unless it's really something ui-related)

I disagree with this proposal. We have new issues with the CLI in almost every single build (--organization/--content-view/etc. parameters are not accepted, it's not possible to pass some entity by its name because too many values are found, and so on). The API has its specific issues too: most often some parameters are missing in GET/POST/PUT output. And the UI isn't any different: lots of 'sorry, something went wrong', 500 errors, buttons which do nothing, missing items for a user with enough permissions.
While testing the same scenario through all the interfaces is theoretically overkill, in practice there are lots of differences between them causing interface-specific bugs (it's worth mentioning that the interfaces aren't even in full sync; there are a lot of actions which are possible via the API but not the CLI, or via the UI only). I agree that 3x tests don't mean 3x more bugs found, but as we can see it still means more bugs found.

@rplevka (Member) commented Jul 21, 2017

@abalakh
There's no doubt in what you said. But notice the "for scenarios without such a high impact" bit.
I'm proposing a compromise here, and that always brings some regressions with it.
The inconvenient truth is, we are simply starting to be unable to afford more tests like these. And that is the main point I'm trying to raise.
Let's just admit we can't cover the whole universe and start covering the critical bits first.

Again, please try not to take this personally and try not to map my words on this specific commit.

How about if we started to consider the real impact of the tested feature, and tuned the coverage appropriately instead of blindly covering all the possible parameters?

@oshtaier (Contributor) commented:

@rplevka Isolation is probably the most important thing in test automation, so the problem should be resolved not at the test level, but on the hardware end. Let's have a better approach for parallel runs and more CPUs/RAM. What do you think?

@rplevka (Member) commented Jul 21, 2017

@oshtaier
...yes, if you can provide a spare environment with a spare Satellite box for each test, in a spherical CI environment in a vacuum ;)
Otherwise (in real life) we must start to make compromises and optimizations on both sides. Yes, we are looking for ways of distributing the tests to more Satellites, but that is not sufficient.

@abalakh (Contributor, Author) commented Jul 27, 2017

Bump. Can we have it merged? :) AFAIK we agreed to move this discussion to a separate issue and not block this specific PR.

@jyejare (Member) commented Jul 27, 2017

@abalakh, you have two ACKs, so you can merge.

@abalakh (Contributor, Author) commented Jul 27, 2017

@jyejare merging one's own PRs is prohibited :)

@renzon renzon merged commit 953a212 into SatelliteQE:master Jul 27, 2017
@renzon renzon removed the review label Jul 27, 2017
@abalakh abalakh deleted the ui_ugr_admin branch July 28, 2017 19:26
Labels: QETestCoverage, UI
8 participants