
Upgrade the packaging setup #642

Open
webknjaz opened this issue Oct 30, 2021 · 219 comments · Fixed by #779
Assignees
Labels
Task Tasks & chores related to proxy.py

Comments

@webknjaz
Contributor

I see that currently setup.py is being used and called directly. This is highly discouraged and is deprecated. See https://blog.ganssle.io/articles/2021/10/setup-py-deprecated.html for more details.

It's quite easy to upgrade it to use PEP 517 with invocations via pypa/pip and pypa/build.

If you're open to improvements, I could try to find some time to contribute some packaging and CI/CD-related PRs.
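For context, the PEP 517 entry point is a minimal pyproject.toml declaring the build backend. A sketch of what that could look like (the exact requires pins for proxy.py would need to be decided, so treat these as placeholders):

```toml
[build-system]
# Build-time dependencies, resolved in an isolated env by pip/build.
requires = [
  "setuptools >= 40.9.0",
  "wheel",
]
# The PEP 517 hooks implementation to invoke.
build-backend = "setuptools.build_meta"
```

With this in place, `python -m build` produces both the sdist and the wheel without ever calling setup.py directly.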

@abhinavsingh
Owner

@webknjaz Will highly appreciate PR for these, I could learn new things :). I am stuck in classic vanilla setup.py workflows and seemingly things have evolved a lot in this aspect. Thank you 👍

@webknjaz
Contributor Author

Woah! Among other problems, no wheels have been uploaded to PyPI for the latest versions (since 2019!)

@webknjaz
Contributor Author

webknjaz commented Nov 1, 2021

First improvement done: #647.

@webknjaz
Contributor Author

webknjaz commented Nov 2, 2021

@abhinavsingh I've submitted a number of maintainability improvements to the linting setup. The last one includes everything, but I thought it would probably be easier/more digestible if the building blocks were their own atomic PRs:

This setup is rather strict but has a lot of excludes in the config, because a lot of linting violations get revealed otherwise. I think it'd make a nice series of "good first issues" for beginner contributors to work through.

@webknjaz
Contributor Author

webknjaz commented Nov 2, 2021

+1 on top #658


@webknjaz
Contributor Author

webknjaz commented Nov 3, 2021

Uncommented another fixer on top: #661

@webknjaz
Contributor Author

webknjaz commented Nov 3, 2021

And here are two more improvements:

I'm starting to hit merge conflicts so I'll pause for some time to give you a chance to get these merged before I add more.

@abhinavsingh
Owner

@webknjaz Yes, naming shouldn't be an issue. My original intention here was to surface enough context via job names. Checking on the PRs.

@webknjaz
Contributor Author

webknjaz commented Nov 4, 2021

Yeah, I see that. But currently you repeat the same context in many places. Job names show up right after the workflow names in all of the UIs. Plus, adding the project name is not particularly useful: when you look at the CI pages, it's already visible which repo they belong to.

@webknjaz
Contributor Author

webknjaz commented Nov 4, 2021

@abhinavsingh this is interesting... are you on macOS? Locally, I run Gentoo Linux, and GHA and pre-commit.ci both use Ubuntu, I think. But yeah, pylint is known for sometimes behaving inconsistently depending on the environment it runs under (including the Python version, because some of its checks tend to be semi-dynamic, as opposed to flake8, which is fully static). It can be a little annoying. I'll see what I can do. Worst case, we can disable the specific rule.

@webknjaz
Contributor Author

webknjaz commented Nov 4, 2021

pylint-dev/pylint#4759 (comment) seems to suggest that the Python shipped by Homebrew tends to be problematic. Usually, I use pyenv (for many reasons): it allows me not to mess with the system installation of CPython, which may be patched or may influence various components of my system. I specifically use the userspace installation, so it all lives in my ~/.pyenv, and even if I destroy it completely, it won't affect important parts of the system. I can delete and recreate it any time, plus it allows me to have as many interpreter versions as I like.

@abhinavsingh
Owner

Yep, I am on macOS. Lemme give it a try locally with pyenv. We might need to highlight this fact somewhere for macOS contributors, or maybe simply relax pylint for the socket classes (which is kind of covered via mypy anyway).

@webknjaz
Contributor Author

webknjaz commented Nov 4, 2021

Well, we could apply inline disables for those cases. Or maybe add a transform plugin: https://pylint.pycqa.org/en/latest/how_tos/transform_plugins.html. But I'd argue that this is a band-aid that would just shadow the real issue of having a bad CPython env; people should be encouraged to fix their dev envs (which is probably what this false positive really reveals).

@webknjaz
Contributor Author

webknjaz commented Nov 4, 2021

So here are some job/workflow name updates, making them mostly fit in the UI:

@abhinavsingh
Owner

abhinavsingh commented Nov 4, 2021

@webknjaz I checked random repositories on GitHub, and it looks like, along with setup.cfg, repos are also adding a minimal setup.py:

from setuptools import setup

setup()

See https://github.com/search?q=filename%3A.pyup.yml+setup.cfg&type=code for reference. I searched this to understand why pyup broke after the setup.py removal. This seems the likely reason. Wdyt?

@webknjaz
Contributor Author

webknjaz commented Nov 4, 2021

setuptools has supported not having a setup.py for years now. I've been deleting it from all my projects since then. The only real problem was that pip install -e didn't work without it, which is why many projects adopted a minimal stub file. But that's been fixed in recent releases, so it's not a concern anymore. The real concern is that keeping a setup.py in the project provides a way to bypass PEP 517.

As for pyup, I don't know. You could probably configure it to exclude setup.py from its checks. It's a bad idea to pin the direct deps in the package metadata anyway.

@abhinavsingh
Owner

Thank you, makes sense. I haven't kept up with changes of late.

I also observed that the codecov integration is broken. Looking at the GHA logs, the upload-coverage step fails with "no coverage file found". Though, I see the coverage files being generated just fine.

Any hints on what might have gone wrong here? The root directory context doesn't change, so .coverage should still be auto-discoverable.

@webknjaz
Contributor Author

webknjaz commented Nov 4, 2021

Here's a doc on the config file: https://pyup.io/docs/bot/config/#specifying. Or you could just migrate to Dependabot like others did.

@webknjaz
Contributor Author

webknjaz commented Nov 4, 2021

I think that the official codecov GHA looks for an XML report that pytest does not currently produce. I was going to fix it by adding --cov-report=xml but haven't got to it yet.
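A sketch of what that change could look like, assuming coverage is wired up through pytest-cov's addopts (the exact section header depends on whether the project keeps pytest config in setup.cfg or pytest.ini):

```ini
[tool:pytest]
; Emit both a terminal summary and the coverage.xml that Codecov's action expects
addopts = --cov --cov-report=term-missing --cov-report=xml
```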

@abhinavsingh
Owner

Here's a doc on the config file: https://pyup.io/docs/bot/config/#specifying. Or you could just migrate to Dependabot like others did.

Will replace pyup with Dependabot. I was just curious about what happened that broke it.

I think that the official codecov GHA looks for an XML report that pytest does not currently produce. I was going to fix it by adding --cov-report=xml but haven't got to it yet.

Yep, this probably should fix it :). Thank you

@abhinavsingh
Owner

@webknjaz I realized that flake8 and pylint now use 100% of all available cores. This typically lasts 5-10 seconds and is noticeable due to the increase in fan speed. I am wondering if there is something we can (must) do about it. Thank you.

@webknjaz
Contributor Author

webknjaz commented Nov 5, 2021

Oh, this reminds me that I forgot to increase the number of processes for pylint. flake8 already has auto set by default, but pylint's config just sets a single job.
It's usually best to use all the available cores for better responsiveness. This will mean faster CI jobs and quicker results on dev laptops.
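In pylint's case that is a one-line config change; a sketch assuming a .pylintrc-style config file:

```ini
[MASTER]
; 0 tells pylint to autodetect the number of available CPU cores
jobs = 0
```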

@webknjaz
Contributor Author

^^^ I removed _compat sometime back, as IS_WINDOWS was moved within constants.py and required elsewhere. But unsure where the heck this value has got cached :)

Ah, so that's a totally different problem. The apidoc extension generates RST files under docs/pkg/; it doesn't know when a corresponding .py file stops existing. To truly clean that up, you could use git clean on that directory, for example (the RST documents are gitignored).

@webknjaz
Contributor Author

This is clearly visible in the output. Both

checking consistency... /Users/abhinavsingh/Dev/proxy.py/docs/pkg/proxy.common._compat.rst: WARNING: document isn't included in any toctree

and

/Users/abhinavsingh/Dev/proxy.py/docs/pkg/proxy.common._compat.rst:2: Spell check: compat: ["compact", "cowpat", "Compaq", "combat", "compete", "comport", "compote", "compute", "comped", "com pat", "com-pat", "comp at", "comp-at", "compost", "company", "compare", "comps", "comp", "compass", "computer", "comet", "compo", "commit", "comp's"]: proxy.common._compat module.

point at /Users/abhinavsingh/Dev/proxy.py/docs/pkg/proxy.common._compat.rst, so you have the exact file to delete right there.

@abhinavsingh
Owner

Thank you, I feel so silly now. Correct, it's under the docs folder :(

@abhinavsingh
Owner

Ended up adding an rm docs/pkg/*.rst command within the make clean workflow, which runs before most of the make targets locally. I guess we need not keep any RST cache; it can only cause issues locally.

@abhinavsingh
Owner

Hi @webknjaz, hope all is well at your end. Yesterday I was trying to bundle the dashboard assets into the whl and tar.gz. While I was able to get it working using setup.py (by customizing the build_py invocation), I am unsure how to go about it with the build_meta-based backend.

I think we need something similar to how setuptools_scm works for its write_to invocation. I guess we want a custom hook to run before the build backend copies the SCM content for packaging.

Wondering if there are any go-to recommended ways to implement this?

TL;DR -- We simply need to run npm install and npm run build before pip packaging.

Thank you

@abhinavsingh
Owner

Never mind, looks like I had to override the sdist phase, which fits well with the build_meta backend too :)

@webknjaz
Contributor Author

webknjaz commented Jan 11, 2022

Alternatively, you could have an in-tree PEP 517 build backend that wraps the setuptools one; this way, you can avoid having setup.py, and the automation would happen outside of setuptools. See https://github.com/ansible/pylibssh/blob/206c222/pyproject.toml#L16-L17 + https://github.com/ansible/pylibssh/blob/devel/bin/pep517_backend/_backend.py#L190-L192; the python -m build invocation is then exactly the same. Just wrap build_sdist, prepending the npm invocations.

@abhinavsingh
Owner

abhinavsingh commented Jan 11, 2022

Bingo :) Couldn't wrap my head around this. The pyproject.toml documentation is also sparse, or at least lacks examples.

@webknjaz
Contributor Author

This is because in-tree backends usually target a narrow use case, and support for them was added to PEP 517 later. Also, using the backend-specific mechanisms is not wrong; it's just my preference to avoid them. Plus, if you ever want to switch to something other than setuptools, you'd need to port the config to a different backend, while with your own in-tree backend, it's more portable.

@abhinavsingh
Owner

Hi @webknjaz, I think we are almost there. Sorry for delaying the formal release. Probably I can release 3.4.0 and continue with the 3.4.x series. But there are a few things I want to wrap up before doing that. Among them are the docs & changelog.

For the docs, I wish I didn't have to write separate Markdown files. Instead, I want our doc parser to extract content from class or module docstrings. For example, consider the following hypothetical file example.py:

$ cat example.py
"""
  Module documentation, which can go in detail explaining why this module exists, how to use it in standalone fashion etc.
"""

class Example:
    """Class level doc."""
    pass

Can we have a process to generate EXAMPLE.md out of the module-level documentation? Then we can link these as top-level user docs, while the class/function docstrings serve as internal API docs.

Main motivation here is to avoid keeping docs outside of the source files.
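As a proof of concept, pulling a module-level docstring out into a Markdown file only takes the stdlib ast module; a sketch, where module_doc_to_md is a hypothetical helper name:

```python
import ast
from pathlib import Path


def module_doc_to_md(py_path: str, md_path: str) -> None:
    """Write a module's top-level docstring out as a standalone Markdown document."""
    tree = ast.parse(Path(py_path).read_text())
    doc = ast.get_docstring(tree) or ""
    # Title the page after the module, then dump the docstring verbatim
    Path(md_path).write_text(f"# {Path(py_path).stem}\n\n{doc}\n")
```

So module_doc_to_md('example.py', 'EXAMPLE.md') would emit the module documentation from the hypothetical file above, leaving class/function docstrings to the API docs.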

@webknjaz
Contributor Author

There are different kinds of docs. The autogenerated docs section already does that: https://proxypy.readthedocs.io/en/latest/pkg/proxy.core.work/#module-proxy.core.work.
But for things you want to explicitly expose as public API, use https://www.sphinx-doc.org/en/master/usage/extensions/autodoc.html#directive-automodule (this example shows RST, but in MD you'd need to use the MyST syntax to wrap it with eval-rst).
Basically, you need proper documents for different audiences as explained in https://diataxis.fr. Create document structure and inject something like https://myst-parser.readthedocs.io/en/latest/sphinx/use.html#use-sphinx-ext-autodoc-in-markdown-files where needed.
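For example, a MyST Markdown page could embed the autodoc output like this (proxy.core.work is used here only because it is mentioned above; any module path works):

````markdown
```{eval-rst}
.. automodule:: proxy.core.work
   :members:
   :undoc-members:
```
````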

@webknjaz
Contributor Author

webknjaz commented Jan 16, 2022

FWIW, I think you should start by (1) splitting the README into separate documents and (2) moving the copyright boilerplate out of the docstrings. Maybe with that, it'd become clearer how to structure the docs. And then, maybe you'd use the .. automodule:: directive in some places.
I must note that all the attempts I've seen to put the docs fully into Python modules have failed. Most projects ditched this idea, at least partially, because of the number of maintenance problems it causes. It is fine for API docs (Reference in the terminology of Diátaxis), but for other types, it just stands in the way.

@abhinavsingh
Owner

I must note that all the attempts to put the docs into Python modules fully I've seen have failed

Can you expand on why they fail? What's unique about keeping docs out of the source code that makes it click?

The way I look at it, keeping docs and source separate is overhead. As a natural habit, I always write docstrings within the source files, including examples/abstracts etc.

@abhinavsingh
Owner

abhinavsingh commented Jan 17, 2022

.. automodule::

Thank you, will give this a try. I want to see how to cut out separate Markdown files based upon the module docstring. Then class/function-level docstrings continue to serve as internal API docs, while the module docstring can act as the end-user doc.


@webknjaz
Contributor Author

it doesn't seem to respect the custom build backend settings

That may happen if pip is quite old IIRC.

@abhinavsingh
Owner

That may happen if pip is quite old IIRC.

In my case, it was just a matter of overriding build_wheel too, along with build_sdist. Due to which:

  1. pip install /local/path/to/repo (worked)
  2. pip install git+http:// (didn't work)

@webknjaz
Contributor Author

Oh, so you needed to solve building wheels from source without an intermediate sdist first. Makes sense. I would probably also take into account pre-built files (for building from an sdist that already contains them), so this step could be skipped if the cache is valid.

@abhinavsingh
Owner

abhinavsingh commented Mar 13, 2022

@webknjaz For some time now this has been failing. Have you experienced the same across other projects?

Screen Shot 2022-03-13 at 10 23 28 PM

Looks like this was probably triggered by the recent myst_parser upgrades. For now I have pinned it to the previous version here: #1104

@abhinavsingh
Owner

Today I am trying to make a release because we had a fix in 2.4.0. For some reason flake8 has started to fail :) due to crazy bandit-related errors. Unfortunately, I was unable to reproduce it locally when I ran tox -e lint, though I was able to find references to similar errors here: tylerwince/flake8-bandit#21. That issue talks about updates to the wemake-python-styleguide lib, but I see we have used the ~= marker, so upstream version updates shouldn't have been an issue.

I am reaching out to see if you have already run into this issue elsewhere. Worse, this is not reproducible locally; I tried rm -rf .tox && tox -e lint and it works wonderfully well locally.

Screen Shot 2022-03-13 at 11 15 06 PM

@abhinavsingh
Owner

Looks like removing ~ did the right thing. Due to the dependency tree around bandit, this was necessary. Updated here: #1107

@webknjaz
Contributor Author

Sorry, I'm a bit preoccupied with the russian fascist terrorists invading my country (see https://github.com/vshymanskyy/StandWithUkraine for more information) and trying to help my mom and one of my brothers get out of harm's way.

But when I have a minute to distract myself from that, I'll try to add comments regarding the problems you've been seeing.

@abhinavsingh
Owner

@webknjaz No worries at all, Sviatoslav. We all stand in support of Ukraine. I'll also add the badge to our repo here.

PS: I was able to resolve the errors (by upgrading build deps) and made a release yesterday.

@abhinavsingh
Owner

@webknjaz Hi Sviatoslav Sydorenko, hope you are doing well. I recently observed that the dist step has started failing. I am curious if this is something you are observing across other repositories too?

Find below a screenshot from one of the recent runs. Somehow the wheel package is missing its version info, resulting in the dist step failure.

Screen Shot 2022-06-27 at 7 27 44 PM

Afaik, nothing significant has been changed in the code. Looking forward to your thoughts.

@webknjaz
Contributor Author

Interesting... Line 960 in your screenshot clearly shows that the generated wheel got version 0.0 and not the desired one. So it's good that it fails and alerts about a problem.
I don't know why this is happening, though. I suppose that maybe something in your environment has changed. Usually, when this happens, it's because the Git checkout doesn't have enough commits to fill in the blanks between the current HEAD and the last tagged commit. So I'd start there. Although it's weird that the tarball did get the right version and only the wheel did not.
I wonder if setuptools-scm got updated and maybe has a bug. Have you checked whether the workflow started using a different version compared to what was there before?
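One quick thing to verify is the checkout step: setuptools-scm derives the version from tags, which a shallow clone lacks. A hedged GHA sketch (the action version shown is an assumption):

```yaml
steps:
  # fetch-depth: 0 pulls the full history and tags so setuptools-scm
  # can compute the version from the nearest tagged commit
  - uses: actions/checkout@v2
    with:
      fetch-depth: 0
```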

@webknjaz
Contributor Author

Looks like it may be this bug: pypa/setuptools_scm#727. It seems like v7.0.3 is supposed to get released with a fix someday. The root cause seems to be pypa/setuptools_scm#580, which got into v7+. So maybe just add ,!=7.0.0,!=7.0.1,!=7.0.2 to the version constraints in pyproject.toml for now?
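In the [build-system] table that would look something like this (the lower bound and neighboring entries shown are assumptions, not proxy.py's actual pins):

```toml
[build-system]
requires = [
  "setuptools",
  # exclude the releases affected by pypa/setuptools_scm#727
  "setuptools-scm >= 6, != 7.0.0, != 7.0.1, != 7.0.2",
  "wheel",
]
build-backend = "setuptools.build_meta"
```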

@abhinavsingh
Owner

Looks like it may be this bug: pypa/setuptools_scm#727. It seems like v7.0.3 is supposed to get released with a fix someday. The root cause seems to be pypa/setuptools_scm#580, which got into v7+. So maybe just add ,!=7.0.0,!=7.0.1,!=7.0.2 to the version constraints in pyproject.toml for now?

Thank you @webknjaz
You are always a life saver 🙏

@abhinavsingh
Owner

abhinavsingh commented Sep 28, 2022

@webknjaz I have been down with a fever this past week :(. Just wanted to leave an update that I'll take a look at the Windows-related issues over the coming weekend (sorry, I lost track of the original issue where you last commented, so posting the update here). Best.
