
Use ci build wheel to build more wheels #76

Merged
merged 14 commits into from May 16, 2021

Conversation

@s-weigand (Contributor) commented May 2, 2021

This PR implements the GitHub Action from joerick/cibuildwheel to build and test more wheels (the additional wheels are for Windows, macOS, and aarch64).
It also unifies the workflows python-publish.yml, python-sdist-test.yml and python-test.yml into tests.yml.
Having a single workflow to run all tests allows jobs to require other jobs to pass before running, which can save CI time.
For example, in this workflow the tests only run if the linting passes, and deployment only runs if all tests pass.
Another restriction for deployment to run is that the triggering event has to be the push of a tag (see example run on my fork's master branch).
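As a hedged illustration of the gating described above (job names, steps, and commands here are an illustrative sketch, not the PR's actual tests.yml), a workflow can chain jobs with `needs:` and gate deployment on a tag push with an `if:` condition:

```yaml
jobs:
  lint:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      # hypothetical lint step
      - run: python -m pip install flake8 && flake8 .
  test:
    needs: lint                 # runs only if the lint job passed
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - run: python -m pip install . && python run_tests.py
  deploy:
    needs: test                 # runs only if all tests passed
    # additionally require the triggering event to be the push of a tag
    if: startsWith(github.ref, 'refs/tags/')
    runs-on: ubuntu-latest
    steps:
      - run: echo "upload wheels to PyPI here"
```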

Built files
$ ls -la dist
total 2456
drwxr-xr-x  2 runner docker  4096 May  2 17:08 .
drwxr-xr-x 10 runner docker  4096 May  2 17:08 ..
-rw-r--r--  1 runner docker 52085 May  2 17:08 line_profiler-3.2.6-cp35-cp35m-macosx_10_9_x86_64.whl
-rw-r--r--  1 runner docker 64414 May  2 17:08 line_profiler-3.2.6-cp35-cp35m-manylinux1_i686.whl
-rw-r--r--  1 runner docker 62724 May  2 17:08 line_profiler-3.2.6-cp35-cp35m-manylinux1_x86_64.whl
-rw-r--r--  1 runner docker 64417 May  2 17:08 line_profiler-3.2.6-cp35-cp35m-manylinux2010_i686.whl
-rw-r--r--  1 runner docker 62723 May  2 17:08 line_profiler-3.2.6-cp35-cp35m-manylinux2010_x86_64.whl
-rw-r--r--  1 runner docker 64632 May  2 17:08 line_profiler-3.2.6-cp35-cp35m-manylinux2014_aarch64.whl
-rw-r--r--  1 runner docker 46505 May  2 17:08 line_profiler-3.2.6-cp35-cp35m-win32.whl
-rw-r--r--  1 runner docker 50744 May  2 17:08 line_profiler-3.2.6-cp35-cp35m-win_amd64.whl
-rw-r--r--  1 runner docker 53532 May  2 17:08 line_profiler-3.2.6-cp36-cp36m-macosx_10_9_x86_64.whl
-rw-r--r--  1 runner docker 65920 May  2 17:08 line_profiler-3.2.6-cp36-cp36m-manylinux1_i686.whl
-rw-r--r--  1 runner docker 64391 May  2 17:08 line_profiler-3.2.6-cp36-cp36m-manylinux1_x86_64.whl
-rw-r--r--  1 runner docker 65922 May  2 17:08 line_profiler-3.2.6-cp36-cp36m-manylinux2010_i686.whl
-rw-r--r--  1 runner docker 64392 May  2 17:08 line_profiler-3.2.6-cp36-cp36m-manylinux2010_x86_64.whl
-rw-r--r--  1 runner docker 65388 May  2 17:08 line_profiler-3.2.6-cp36-cp36m-manylinux2014_aarch64.whl
-rw-r--r--  1 runner docker 47075 May  2 17:08 line_profiler-3.2.6-cp36-cp36m-win32.whl
-rw-r--r--  1 runner docker 51285 May  2 17:08 line_profiler-3.2.6-cp36-cp36m-win_amd64.whl
-rw-r--r--  1 runner docker 52206 May  2 17:08 line_profiler-3.2.6-cp37-cp37m-macosx_10_9_x86_64.whl
-rw-r--r--  1 runner docker 65042 May  2 17:08 line_profiler-3.2.6-cp37-cp37m-manylinux1_i686.whl
-rw-r--r--  1 runner docker 63822 May  2 17:08 line_profiler-3.2.6-cp37-cp37m-manylinux1_x86_64.whl
-rw-r--r--  1 runner docker 65045 May  2 17:08 line_profiler-3.2.6-cp37-cp37m-manylinux2010_i686.whl
-rw-r--r--  1 runner docker 63824 May  2 17:08 line_profiler-3.2.6-cp37-cp37m-manylinux2010_x86_64.whl
-rw-r--r--  1 runner docker 65082 May  2 17:08 line_profiler-3.2.6-cp37-cp37m-manylinux2014_aarch64.whl
-rw-r--r--  1 runner docker 47096 May  2 17:08 line_profiler-3.2.6-cp37-cp37m-win32.whl
-rw-r--r--  1 runner docker 51091 May  2 17:08 line_profiler-3.2.6-cp37-cp37m-win_amd64.whl
-rw-r--r--  1 runner docker 52964 May  2 17:08 line_profiler-3.2.6-cp38-cp38-macosx_10_9_x86_64.whl
-rw-r--r--  1 runner docker 67184 May  2 17:08 line_profiler-3.2.6-cp38-cp38-manylinux1_i686.whl
-rw-r--r--  1 runner docker 65953 May  2 17:08 line_profiler-3.2.6-cp38-cp38-manylinux1_x86_64.whl
-rw-r--r--  1 runner docker 67188 May  2 17:08 line_profiler-3.2.6-cp38-cp38-manylinux2010_i686.whl
-rw-r--r--  1 runner docker 65953 May  2 17:08 line_profiler-3.2.6-cp38-cp38-manylinux2010_x86_64.whl
-rw-r--r--  1 runner docker 67838 May  2 17:08 line_profiler-3.2.6-cp38-cp38-manylinux2014_aarch64.whl
-rw-r--r--  1 runner docker 47645 May  2 17:08 line_profiler-3.2.6-cp38-cp38-win32.whl
-rw-r--r--  1 runner docker 52063 May  2 17:08 line_profiler-3.2.6-cp38-cp38-win_amd64.whl
-rw-r--r--  1 runner docker 53109 May  2 17:08 line_profiler-3.2.6-cp39-cp39-macosx_10_9_x86_64.whl
-rw-r--r--  1 runner docker 67385 May  2 17:08 line_profiler-3.2.6-cp39-cp39-manylinux1_i686.whl
-rw-r--r--  1 runner docker 66015 May  2 17:08 line_profiler-3.2.6-cp39-cp39-manylinux1_x86_64.whl
-rw-r--r--  1 runner docker 67385 May  2 17:08 line_profiler-3.2.6-cp39-cp39-manylinux2010_i686.whl
-rw-r--r--  1 runner docker 66017 May  2 17:08 line_profiler-3.2.6-cp39-cp39-manylinux2010_x86_64.whl
-rw-r--r--  1 runner docker 67853 May  2 17:08 line_profiler-3.2.6-cp39-cp39-manylinux2014_aarch64.whl
-rw-r--r--  1 runner docker 47837 May  2 17:08 line_profiler-3.2.6-cp39-cp39-win32.whl
-rw-r--r--  1 runner docker 51996 May  2 17:08 line_profiler-3.2.6-cp39-cp39-win_amd64.whl
-rw-r--r--  1 runner docker 35620 May  2 17:08 line_profiler-3.2.6.tar.gz

See the last CI run before reactivating the branch restriction.
Talking about branch restrictions, how about allowing the tests to run on all push events?
As a contributor, I like to know that the CI passes before I bother someone with a PR that might fail.
By restricting the branches the CI runs on, I only have two options:

  1. Comment out the restriction to later drop that commit
  2. Merge the feature branch into the fork's master over and over again

Dropping the restrictions would make it easier for contributors to check that all is fine before making a PR.

Since the linux wheels are built inside of a docker image and paths don't match up with the paths on the host, uploading the coverage is quite hacky.
But it works and maybe someone has a better solution (see)
Building the wheels for Linux could ofc also be done as it was before, without the hacks for the coverage, but IMHO saving the mental capacity of thinking about and keeping up with changing build requirements (leaving that to a widely used and specialized project) is worth it.

Added bonus: since kernprof.py isn't separately tested from source and installation anymore, the coverage went up 8% 😄

No emojis this time, sorry for that again 😞.

closes #63

To get the coverage upload on Linux to work properly, some 'hacks' were needed, since the Linux build and tests run inside Docker.
Those hacks are:
- temporarily renaming kernprof.py so tests only use the installed one
- replacing the file paths inside the .coverage files, so they will be valid in the host environment (needed for coverage combine)
- copying the .coverage files to the wheel output directory, so they are available on the host

Since the sdist test is mainly there to ensure that all needed files are included in the sdist, IMHO it is enough to test it with one Python version, since the compatibility testing is done by the wheel tests.
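For the path-replacement hack, one hedged sketch of how it could be done, assuming coverage.py >= 5 (where a .coverage file is a SQLite database whose measured files live in a `file` table); the function name and prefixes are illustrative, not code from this PR:

```python
import sqlite3


def rewrite_coverage_paths(db_path, container_prefix, host_prefix):
    """Rewrite absolute file paths recorded in a .coverage SQLite file.

    Assumes the coverage.py >= 5 schema, where measured files are stored
    in a ``file`` table with a ``path`` column. Replaces the Docker
    container's path prefix with the host's, so ``coverage combine``
    works outside the container.
    """
    con = sqlite3.connect(db_path)
    with con:  # commit on success
        con.execute(
            "UPDATE file SET path = replace(path, ?, ?)",
            (container_prefix, host_prefix),
        )
    con.close()
```

An alternative worth noting is the `[paths]` section of coverage.py's own config, which lets `coverage combine` map equivalent path prefixes without touching the data files.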
codecov bot commented May 2, 2021

Codecov Report

Merging #76 (1d4a54e) into master (56bd6f9) will increase coverage by 8.36%.
The diff coverage is n/a.


@@            Coverage Diff             @@
##           master      #76      +/-   ##
==========================================
+ Coverage   44.63%   52.99%   +8.36%     
==========================================
  Files           5        5              
  Lines         419      417       -2     
  Branches       59       59              
==========================================
+ Hits          187      221      +34     
+ Misses        218      176      -42     
- Partials       14       20       +6     
Impacted Files                   Coverage Δ
line_profiler/line_profiler.py   42.14% <0.00%> (+0.16%) ⬆️
kernprof.py                      73.75% <0.00%> (+24.46%) ⬆️

Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update 56bd6f9...1d4a54e.

@Erotemic (Member) commented May 3, 2021

This looks great. It's certainly using github actions better than I currently have the ability to. I spent a long time trying to figure out how to get artifacts to work before giving up, but it looks like you've got them working here.

To respond to your questions:

single workflow

I don't have any problem with this.

Talking about branch restrictions, how about allowing the tests to run on all push events?

That sounds fine. I'm fairly sure this is the case, but I want to double check: GitHub secrets will only be exposed in events triggered by pyutils members, right? For instance, in this PR, if you were adversarial and added echo $PYUTILS_TEST_TWINE_PASSWORD as a CI line (adding whatever interlaced chars were necessary to confuse the regex secret scrubber), that would just be an empty value (whereas if I did that, then I'd have to start rotating secrets). In other words, PRs from non-organization members run in their own CI space, so they don't take CI time from pyutils, right?

Since the linux wheels are built inside of a docker image and paths don't match up with the paths on the host, uploading the coverage is quite hacky.

I don't understand exactly why this is the case. Are you currently testing in the same environment that is building the wheels? Wouldn't it be better to (1) build the wheel and upload it as an artifact, then (2) install and test it in a separate environment? Testing in the build environment seems like it might not be the best idea.

I also have some other comments:

  • In general I don't like to depend on a CI service to be able to perform a task. I want to always have the ability to run things locally, so long as I have the appropriate environment available to me (hence why run_manylinux_build.sh and publish.sh are scripted the way they are). Now, the benefit of getting osx and win32 wheels far outweighs my preference on this, but do you know if there is, or ever will be, a way to run "cibuildwheel" locally? I don't always use GitHub (I have several projects on GitLab), and I like to keep the tooling I use as cross-CI compatible as possible. Not sure if there is any way to do that here.

  • I'm not a huge fan of the temp renaming of kernprof in run_tests.py, although I'm also not a huge fan of the initial chdir I had in there. I'm not sure if there is a better way to do it.

  • Is there a reason test_version_agreement was removed? I presume because of issues with python being in the PATH. Could we make that work by just importing line_profiler and kernprof and testing that their __version__ attributes agree?

  • Is there a way to only make the "deploy" script run on a tagged push to the "release" branch? Typically the way I've settled on deploying packages is by merging everything to release into master and then git push origin master:release to trigger the CI-release script.

@s-weigand (Contributor, Author) commented May 3, 2021

As for your questions:

  • Branch restrictions

    • Access to Secrets
      Repository and organization secrets can't be accessed by pull requests from forks (see this explanation)
      For organization secrets, even if the PR is created from a branch in the same repository, the initiator needs to be a member of the organization with the proper rights to read them.
      In one project I work on, we use a workflow that validates that the groups used in the CODEOWNERS file exist in the org, which needs an organization secret to read the existing groups. Each time dependabot creates an update PR, that workflow fails, since dependabot isn't a member of the org.

    • CI time
      CI runs on a fork will use the CI time of the fork's owner. I don't know about the billing model pyutils uses, but public repos have unlimited CI time, with a limit of 20 parallel workers, as far as I know.

  • Building and testing inside docker
    For Linux, cibuildwheel does it like this for each manylinux image:

    1. It builds the wheel
    2. It installs it in a new venv in the /tmp folder
    3. It runs the tests

    Thus each wheel is tested on a machine where it is appropriate to be installed.
    As I understand the current test workflow, only the wheel pip sees fit to install on the host machine gets tested, e.g. the i686 wheel would be ignored on an amd64 host:

        python -m pip install $BDIST_WHEEL_PATH[all]
  • Using cibuildwheel on different CIs
    cibuildwheel supports different CIs, including GitLab. Since it is a Python package and the action is mostly syntax sugar, you can also run it locally, given that docker is installed [1].

    Tip from the cibuildwheel docs: "If you have Docker installed, you can locally debug your cibuildwheel Linux config, instead of pushing to CI to test every change."

  • Temp renaming
    Yeah, I'm also not a fan of this; my guess is that it has something to do with ubelt (which I saw for the first time) picking up the original PYTHONPATH. But simply deleting it, as you did for the sdist tests,

    rm -rf line_profiler
    rm -rf kernprof.py

    isn't an option (I initially tried that), since it would then be missing when building the next wheel for that architecture.
    Refining the tests to not pick up the source files might be a task best left to a different PR 😉

  • test_version_agreement
    Oops, that is an artifact from when I initially tried to get the tests to pass. The problem was that python didn't refer to the Python version the test script was run with, so the tests failed. I am adding it back with sys.executable instead of python (good catch 😄).

  • Only run deploy if the tag is on the release branch
    Sadly, I don't know of a way to check for the tag and the branch name at the same time, since github.ref changes depending on whether the triggering event was a pushed branch or a pushed tag:

    The branch or tag ref that triggered the workflow run. For branches this in the format refs/heads/<branch_name>, and for tags it is refs/tags/<tag_name>.

    But if you prefer, I can make the deploy job trigger on pushes to the release branch rather than on the push of a tag.
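The sys.executable fix for the version-agreement test could be sketched like this (`installed_attr` is a hypothetical helper, not code from this repo; it queries a module attribute through a fresh interpreter, the same way the test would query kernprof's version):

```python
import subprocess
import sys


def installed_attr(module_name, attr="__version__"):
    """Read an attribute from a module as seen by a fresh interpreter.

    Using sys.executable (instead of a bare ``python``) guarantees we
    query the same interpreter the test suite itself runs under, which
    avoids the PATH problem described above.
    """
    code = f"import {module_name}; print(getattr({module_name}, {attr!r}))"
    result = subprocess.run(
        [sys.executable, "-c", code],
        capture_output=True, text=True, check=True,
    )
    return result.stdout.strip()
```

A version-agreement test could then compare `installed_attr("line_profiler")` with `installed_attr("kernprof")`.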
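To illustrate the github.ref behavior quoted above, here is a small hedged sketch of how one could split a ref into its kind and name (the helper name is illustrative):

```python
def parse_github_ref(github_ref):
    """Split a GitHub Actions ``github.ref`` value into (kind, name).

    Branch pushes arrive as ``refs/heads/<branch_name>`` and tag pushes
    as ``refs/tags/<tag_name>``; nothing in the ref itself says which
    branch a pushed tag points at, which is the limitation noted above.
    """
    for prefix, kind in (("refs/heads/", "branch"), ("refs/tags/", "tag")):
        if github_ref.startswith(prefix):
            return kind, github_ref[len(prefix):]
    return "other", github_ref
```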

@Erotemic (Member) commented May 3, 2021

Thanks for answering those questions. That gives me a better understanding of how external users can interact with this repo and how cibuildwheel is working.

I'm the author of the ubelt library. It's one of my PhD babies (and I'm always eager to evangelize it). The function ubelt.cmd is a wrapper around subprocess.Popen that supports teeing the stdout/stderr, so you can capture what's happening while it's also being printed to the screen.

I think I would prefer the trigger to be a push to the release branch. Something that the CI currently does not do, but that I would like it to do, is to automatically create and push the tag when it is deploying. I've done this before, but I think I put in the wrong secret or something. No need to worry about it in this PR.

Overall, this looks like it is complete. Before I merge, I'm going to take some time to really dig in and try to understand everything, test building the wheels locally, and also test pushing to the pypi test server.

Thank you for all your hard work, this has been incredibly helpful! I will merge soon! 🥳

@s-weigand (Contributor, Author) commented:

Ofc, I wasn't saying anything against ubelt, just that one might need to pass env with changed values for PYTHONPATH and/or PATH 😅

If you want to dig into how cibuildwheel works internally, this is the code I looked at when trying to figure out how to get the .coverage files out of the Docker container.

As for releases, I personally like the workflow of just using the GitHub releases web UI, having it create the tag for me, and letting the CI/CD take care of all the rest. I like having all that done in a single interaction (fewer things for me to forget 😅). But ofc this is just personal preference.

@Erotemic (Member) commented:

@s-weigand

The only thing I think I don't understand is what file joerick/cibuildwheel@v1.10.0 does and points to. I imagine it must be something in the contents of .github/ on https://github.com/joerick/cibuildwheel, but does it use both update-dependencies.yml and test.yml? Does it do anything else that I'm not seeing?

Everything else looks really good though, and I'll merge as soon as I figure out the answer to the above question.

@s-weigand (Contributor, Author) commented:

@Erotemic

joerick/cibuildwheel@v1.10.0 means it uses the v1.10.0 tag of the GitHub action from https://github.com/joerick/cibuildwheel.
When a GitHub repo has an action.yml, it can be used as a GitHub Action.
There are 3 kinds of GitHub Actions: Docker (which only runs on Linux), JavaScript, and composite.
In the case of cibuildwheel it is a composite action, which is basically a shorter syntax for running a collection of steps.

In this case, it installs and runs cibuildwheel with the appropriate options and shell (pwsh is the cross-platform version of PowerShell, which ensures that the paths are handled correctly).
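As a hedged illustration, a composite action's action.yml boils down to a named, reusable list of steps; this is a simplified sketch of the general shape (inputs and commands are illustrative, not cibuildwheel's actual file):

```yaml
name: "Run cibuildwheel"
description: "Simplified sketch of a composite action"
inputs:
  output-dir:
    description: "Directory for the built wheels"
    default: "wheelhouse"
runs:
  using: "composite"
  steps:
    # each run step in a composite action must declare its shell explicitly
    - run: python -m pip install cibuildwheel==1.11.0
      shell: pwsh
    - run: python -m cibuildwheel --output-dir ${{ inputs.output-dir }}
      shell: pwsh
```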

@s-weigand (Contributor, Author) commented:

I updated joerick/cibuildwheel@v1.10.0 to joerick/cibuildwheel@v1.11.0 and added a config for dependabot to check for new GitHub Action versions each Friday and open an update PR if there is a new one. If you prefer a different schedule, or want dependabot to also take care of the Python packages, that could also be configured.

@Erotemic (Member) commented:

Ah, I knew there must have been a file it was linking to, I just couldn't figure out what it was. I must have just kept glancing over action.yml when I was looking, but now that makes sense to me.

The dependabot schedule seems reasonable. If you want to add checks for Python packages in another PR, I'll accept it. I think it's a nice-to-have, but maybe not strictly necessary.

Anyway, thank you again for all this hard work. I'm going to merge this, and attempt to publish shiny new wheels.

@Erotemic Erotemic merged commit 06af7ed into pyutils:master May 16, 2021
@s-weigand s-weigand deleted the use-ci-build-wheel branch May 17, 2021 07:51