Running tests offline #145

Open
jayvdb opened this issue Mar 9, 2019 · 6 comments

Comments

@jayvdb
Contributor

jayvdb commented Mar 9, 2019

I can't find any support for deselecting the tests which require internet access, such as when running inside an rpmbuild.

tests/unit/test_dependencies.py seems to consist entirely of online tests (5):

  • FAIL tests/unit/test_dependencies.py::test_find_all_matches
  • FAIL tests/unit/test_dependencies.py::test_get_dependencies
  • FAIL tests/unit/test_dependencies.py::test_get_deps_from_json
  • FAIL tests/unit/test_dependencies.py::test_get_deps_from_index
  • FAIL tests/unit/test_dependencies.py::test_get_editable_from_index

Others

  • FAIL tests/unit/test_requirements.py::test_get_requirements
  • FAIL tests/unit/test_requirements.py::test_get_ref
  • FAIL tests/unit/test_requirements.py::test_get_local_ref
  • FAIL tests/unit/test_requirements.py::test_stdout_is_suppressed
  • FAIL tests/unit/test_requirements.py::test_pep_508

I was a bit surprised that test_stdout_is_suppressed and test_get_local_ref are online tests; with minor tweaks, could their assertions be made in an offline environment?

Then

  • FAIL tests/unit/test_setup_info.py::test_remote_req[https://github.com/requests/requests/archive/v2.20.1.zip-requests-requires0]
  • FAIL tests/unit/test_setup_info.py::test_remote_req[https://github.com/dropbox/pyannotate/archive/v1.0.4.zip-pyannotate-requires1]

And finally

  • ERROR tests/unit/test_setup_info.py::test_local_req[test_artifact0]
  • ERROR tests/unit/test_setup_info.py::test_local_req[test_artifact1]

I suspect those two test_local_req errors occur because environ-config depends on attrs, and the fixture setup fails because of network attempts, but pytest is obscuring the error in the fixture setup. This is an example of where it would be great to know that the test is expected to fail when there is no network, so I don't hunt around inside pytest unnecessarily to work out why it is failing ;-)

@jayvdb
Contributor Author

jayvdb commented Mar 9, 2019

As people around here probably know, pipenv has nice automatic deselection in its integration tests, and it would be great to see a solution like that become reusable.

But an env var is a simple stopgap, and is basic enough that I could contribute if it is acceptable.
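For reference, here is a minimal sketch of what an env-var-based skip could look like; the REQUIREMENTSLIB_SKIP_INTERNET_TESTS variable name and the decorated test are only illustrations, not existing code:

import os

import pytest

# Hypothetical opt-out: an offline build environment (e.g. rpmbuild) would
# export REQUIREMENTSLIB_SKIP_INTERNET_TESTS=1 before running pytest.
SKIP_INTERNET = os.environ.get("REQUIREMENTSLIB_SKIP_INTERNET_TESTS") == "1"

needs_internet = pytest.mark.skipif(
    SKIP_INTERNET, reason="requires internet access, skipping"
)


@needs_internet
def test_get_dependencies():
    # Placeholder body; the real test queries a package index.
    assert True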

@techalchemy
Member

To be clear -- you will obviously need the internet, or a fully mocked index, to actually use the resolver backing requirementslib. Since the library does resolution, this also means you will need some kind of index to query while running the tests.

Since the whole point is to resolve VCS and file dependencies, I am primarily testing VCS root dependencies. Again, those will likely demand connectivity.

So if your goal is simply to make the light turn green, I am not sure we are on the same page here, since the tests have a purpose. These tests are essentially the main way pipenv is tested these days, because the pipenv test suite is large and cumbersome and essentially only tests from an interaction standpoint.

For the specifics:

FAIL tests/unit/test_requirements.py::test_get_requirements

This obviously attempts to build requirements of all types. I'm not changing the test because it was copied directly from a pipenv regression test, but possibly an env var could be added to work around the internet issue?

FAIL tests/unit/test_requirements.py::test_get_ref

This and the other git one could be updated, but I'm pretty hesitant to rely exclusively on local paths. Added a skip for this one.

FAIL tests/unit/test_requirements.py::test_get_local_ref

I think I was just too lazy to add a submodule here, so it gets cloned from the internet every time.

FAIL tests/unit/test_requirements.py::test_stdout_is_suppressed

The important thing about this test is that it uses git to clone on the backend, because pip changed an implementation detail and started writing things to stdout.

FAIL tests/unit/test_requirements.py::test_pep_508

This pretty clearly relies on actually getting requirements and package data out of the files it pulls off the internet, so again it will have to be skipped.

I'm considering completely removing the dependencies module, since we reimplemented the entire dependency resolver in various other places and we don't use it for anything currently. That would solve that problem. I'll add skips for those also.

@jayvdb
Contributor Author

jayvdb commented Apr 8, 2019

I just need some way to skip the tests which need the internet, so the suite can be run in a disconnected rpmbuild environment. A simple marker on these tests would be enough to deselect them. The remaining tests do not validate that requirementslib is working 100% correctly, only that it works at all. If the online tests aren't easy to deselect, packagers often disable the entire test suite when it fails during an update, and then they don't notice when something breaks.
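A marker-based approach might look like the sketch below; the needs_internet marker name and its registration are illustrative, not the project's actual configuration:

# conftest.py (sketch)
def pytest_configure(config):
    # Register the marker so pytest does not warn about it being unknown.
    config.addinivalue_line(
        "markers", "needs_internet: test requires network access"
    )

# in a test module (sketch)
import pytest


@pytest.mark.needs_internet
def test_get_requirements():
    # Placeholder body; the real test resolves requirements over the network.
    assert True

Packagers could then deselect everything online with: pytest -m "not needs_internet"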

Longer term, it would be good to pull pytest_pypi out of pipenv into a separate project, so these tests could operate against it. I think I saw that someone on the pipenv team has also started writing a pytest_pypi replacement. IMO it would be useful to have the curated list of 'problematic' packages for pytest_pypi as a separate project (i.e. pipenv's tests/pypi), with explanations of each package and the scenarios which need to be covered when testing package installation: a Python packaging regression test suite. pip and pipenv both have a good list of scenarios which are known to be problematic, and other packages implementing PyPA functionality should lean on that expertise to ensure their components also work correctly.

@techalchemy
Member

@jayvdb that would be lovely; sadly I already work far too much on pipenv-related projects, and my employer definitely doesn't care whether I continue spending my time on open source, so that would have to be handled by someone else...

@jayvdb
Contributor Author

jayvdb commented Nov 13, 2020

Status update.

.gitmodules has three git dependencies, and those are causing some tests to fail offline. But there are far fewer failures now, and .gitmodules documents the potential online dependencies, which is great.

(other openSUSE test errors with no clear relation to this issue: #270 and #280)

@matteius
Collaborator

Current failures without internet are:

SKIPPED [16] tests/conftest.py:86: requires internet access, skipping...
FAILED tests/unit/test_requirements.py::test_convert_from_pipfile[requirement10-https://github.com/Rapptz/discord.py/archive/async.zip#egg=discord.py[voice]] - pip._vendor.requests.exceptions.Connecti...
FAILED tests/unit/test_requirements.py::test_get_requirements - AttributeError: 'NoneType' object has no attribute 'url'
FAILED tests/unit/test_setup_info.py::test_local_req[test_artifact0] - AttributeError: 'NoneType' object has no attribute 'keys'
FAILED tests/unit/test_setup_info.py::test_local_req[test_artifact1] - AttributeError: 'NoneType' object has no attribute 'replace'
FAILED tests/unit/test_setup_info.py::test_no_duplicate_egg_info - AssertionError: assert (None or None or None)
FAILED tests/unit/test_setup_info.py::test_parse_function_call_as_name - AssertionError: assert None == 'package-with-function-call-as-name'
FAILED tests/unit/test_setup_info.py::test_ast_parser_handles_repeated_assignments - KeyError: 'name'
FAILED tests/unit/test_setup_info.py::test_ast_parser_handles_exceptions - KeyError: 'requires'
FAILED tests/unit/test_setup_info.py::test_read_requirements_with_list_comp - KeyError: 'requires'
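The "requires internet access, skipping..." messages above come from a check in tests/conftest.py; the following is a minimal sketch of that kind of guard, assuming a simple TCP probe (the probe host, timeout, and fixture name are assumptions, not the actual implementation):

# conftest.py (sketch)
import socket

import pytest


def has_internet(host="pypi.org", port=443, timeout=3):
    # Return True if a TCP connection to the given host can be opened.
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False


@pytest.fixture
def requires_internet():
    # Tests depending on this fixture are skipped when offline.
    if not has_internet():
        pytest.skip("requires internet access, skipping...")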
