Status of pip wheels with _GLIBCXX_USE_CXX11_ABI=1 #51039

Open
elias-work opened this issue Jan 25, 2021 · 26 comments
Labels
high priority · module: abi · module: binaries · module: cpp · needs design · triaged

Comments

@elias-work

elias-work commented Jan 25, 2021

#17492 shows the history of this issue, but it has been closed and buried for a long time. The torch pip wheels are compiled with _GLIBCXX_USE_CXX11_ABI=0, resulting in incompatibility with other libraries.

Is there any sort of status on this?

(Personal motivation: our project takes many hours to compile because we need to build torch and its dependencies from source. We can't link against the libraries from the pip wheel because of this issue, and forcing everything else to use _GLIBCXX_USE_CXX11_ABI=0 is a huge headache in itself that will only cause more problems over time. Having _GLIBCXX_USE_CXX11_ABI=1 pip wheels would drastically simplify our builds, and I'm sure many others'.)
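For anyone less familiar with the mechanism, here's a minimal illustrative sketch (file and function names are made up for this example): the same source produces differently mangled symbols depending on the flag, which is exactly why objects built with mismatched settings fail to link or load.

// abi_demo.cpp: a hypothetical example showing why the flag matters.
// Compile the same file twice and inspect the exported symbols:
//   g++ -c -D_GLIBCXX_USE_CXX11_ABI=0 abi_demo.cpp && nm -g abi_demo.o
//     => _Z5greetSs                                           (old std::string)
//   g++ -c -D_GLIBCXX_USE_CXX11_ABI=1 abi_demo.cpp && nm -g abi_demo.o
//     => _Z5greetNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEE
//                                               (std::__cxx11::basic_string)
// A library exporting the first symbol cannot satisfy a caller that expects
// the second, so the mismatch surfaces as undefined references at link time
// or as lookup failures at load time.
#include <string>

std::string greet(std::string name) {  // std::string mangles differently per ABI
  return "hello, " + name;
}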

cc @ezyang @gchanan @zou3519 @bdhirsh @jbschlosser @seemethere @malfet @walterddr @yf225 @glaringlee

@bdhirsh added the high priority, module: abi, module: binaries, and module: cpp labels on Jan 25, 2021
@bdhirsh
Contributor

bdhirsh commented Jan 25, 2021

marking hi-pri due to user activity in the previous issue (which still seems unresolved)

@malfet
Contributor

malfet commented Jan 25, 2021

I think the core problem is that _GLIBCXX_USE_CXX11_ABI=1 is not supported by the compiler recommended by PEP 513; see https://github.com/pypa/manylinux

@seemethere
Member

This is actually a bug that has been reported for RHEL / CentOS 7 (devtoolset-7): https://bugzilla.redhat.com/show_bug.cgi?id=1546704

Unfortunately, if we want to provide manylinux support, which means building binaries on CentOS 7, then we can't support _GLIBCXX_USE_CXX11_ABI=1 in our wheels.

@mruberry added the needs design and triaged labels and removed the triage review label on Jan 25, 2021
@elias-work
Author

elias-work commented Jan 26, 2021

My understanding is that this is mostly an issue when you need to use torch from both Python and C++ simultaneously. Is there a recommended official/standard way to get this setup to work, other than building from source or building our own wheel? It sounds like the only options are (see the guard sketch after this list):

  • Special non-standard _GLIBCXX_USE_CXX11_ABI=1 wheels
  • Conda package? (I heard some people say it has this flag, but I haven't tested yet).
  • Build from source
  • Install and use both libtorch and the Python package (I don't think it will work for us)
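Whichever route we take, a compile-time guard can at least turn a silent ABI mismatch into a readable build error. A minimal sketch, assuming g++/libstdc++ (abi_guard.h is my own naming convention, not a PyTorch header):

// abi_guard.h: our own convention, not a PyTorch API. Include this from every
// translation unit that links against the pip wheel; the build then fails
// early with a clear message instead of at link or load time.
#pragma once
#include <string>  // any libstdc++ header defines _GLIBCXX_USE_CXX11_ABI

// Current pip wheels are built with the old ABI (0); adjust if that changes.
static_assert(_GLIBCXX_USE_CXX11_ABI == 0,
              "ABI mismatch: compile with -D_GLIBCXX_USE_CXX11_ABI=0 "
              "to match the PyTorch pip wheel");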

@mattip
Collaborator

mattip commented Jan 28, 2021

To add another clarification on top of the previous ones: even moving to the manylinux2014 spec would not work around the hard limitation mentioned above. The upcoming PEP 600 perennial manylinux standard might be the first that would support this, since it would move past CentOS 7 (so, obviously, those wheels would not be expected to work on CentOS 7). All the infrastructure is in place for a manylinux_2_24 standard based on glibc 2.24 (Debian Stretch). The work needed to build the docker images is currently in progress; one of the active tracking issues is pypa/manylinux#877

@EricSteinberger

What's the best workaround at the moment? I'm not familiar with building PyTorch from source, but will of course dive into it if needed. I'm exporting a C++ .so library and hoping to import it in Python. It works well on macOS, but I'm running into this issue on Linux machines. Any tips on workarounds?

@mattip
Collaborator

mattip commented Feb 15, 2021

Would using the conda builds solve this problem?

@EricSteinberger

EricSteinberger commented Feb 15, 2021

@mattip Thank you for the tip. Unfortunately, nope: running python -c "import torch; print(torch._C._GLIBCXX_USE_CXX11_ABI)" with the conda version also yields False.

I tried compiling from source but am facing #47993.

Any feedback or tips would be appreciated.

@Algomorph

Sorry if this has already been discussed in previous issues: is there any reason not to build a manylinux-compatible wheel set and another _GLIBCXX_USE_CXX11_ABI=1 wheel set? It seems like this could be another switch/option to choose from on the "Get Started" page.

@mattip
Collaborator

mattip commented Feb 24, 2021

Wheels on PyPI must support one of the known formats, so the _GLIBCXX_USE_CXX11_ABI=1 wheels would have to be hosted elsewhere, and users told specifically what flags to use with pip to force it to use an otherwise-incompatible wheel.

@Algomorph

Algomorph commented Feb 24, 2021

> Wheels on PyPI must support one of the known formats, so the _GLIBCXX_USE_CXX11_ABI=1 wheels would have to be hosted elsewhere, and users told specifically what flags to use with pip to force it to use an otherwise-incompatible wheel.

In this case, it might make sense to serve the wheels as downloadable files from download.pytorch.org directly, similar to how the LibTorch packages are served (e.g. a "C++11 Wheels" category under Package, or something more user-friendly).

I'm not familiar with the current process of setting up wheel distribution through PyPI for the various platforms, but if there are automated ways of building the pip wheels for the three supported OSes, perhaps some of these can be adapted and reused to build the C++11 wheels for each release (is anyone familiar with this process?).

I guess what remains after that is just making the changes to the website (assuming this is the chosen distribution site) and uploading the wheels (plus posting hashes to verify, perhaps?).

@blackliner

Are there wheels available with -D_GLIBCXX_USE_CXX11_ABI=1?

@Algomorph

@blackliner, right now: not unless you build one yourself, from source. It's not a very difficult process; it just requires a long time and CPU resources. Took me about half a day all told. But then you also have to build custom wheels for torchvision and torchaudio, depending on what you're using.

@blackliner

NVIDIA supplies the world of aarch64 with pretty much all you need (and with _GLIBCXX_USE_CXX11_ABI=1): https://forums.developer.nvidia.com/t/pytorch-for-jetson-version-1-7-0-now-available/72048

Very weird that x86_64 is lacking such a variety 😞

@Algomorph

Just noting here that pypa/manylinux#877 is now resolved and closed. I'm not sure what the status is on the newer standards, but manylinux2014 seems to have reached end of life at this point.

@mattip
Collaborator

mattip commented Dec 7, 2021

> manylinux2014 seems to have reached end of life at this point

CentOS 7, which underlies manylinux2014, will reach EOL in June 2024, so it still has a bit of life left.

@rgommers
Collaborator

> CentOS 7, which underlies manylinux2014, will reach EOL in June 2024, so it still has a bit of life left.

There's no reason not to start publishing manylinux_2_28 wheels though. Those will support _GLIBCXX_USE_CXX11_ABI=1. manylinux_2_24 is fine to ignore (it's going EOL now). See pypa/manylinux#1332; that discussion is extremely informative about which manylinux flavors have which GCC, support which Python versions and distros, and it also touches on _GLIBCXX_USE_CXX11_ABI.

PyTorch 1.12.0 provides manylinux1 and manylinux2014 wheels, see https://pypi.org/project/torch/1.12.0/. I believe the next release should:

  • drop manylinux1
  • keep manylinux2014 (unchanged)
  • add manylinux_2_28 wheels built with _GLIBCXX_USE_CXX11_ABI=1

I don't think there's an issue with providing two sets of wheels with different ABIs. On distros that support 2_28, those wheels will be preferred by pip, and old distros will get manylinux2014. Packages that depend on PyTorch can/should build matching sets of wheels.

@adizhol

adizhol commented Nov 17, 2022

This issue should go into the PyTorch C++ Extension documentation as a warning...
https://pytorch.org/tutorials/advanced/cpp_extension.html

@zhuhong61
Contributor

We have added _GLIBCXX_USE_CXX11_ABI=1 support for the Linux CPU wheel, based on the Ubuntu 18.04 docker image (pytorch/builder#990) and CentOS 8 (pytorch/builder#1023).

@CLARKBENHAM

CLARKBENHAM commented Jun 9, 2023

This is true for pytorch=1.12.1, but pytorch=2.1.0.dev20230608 does not seem to be built with the CXX11 ABI.

The package I installed today (Ubuntu 20.04, Python 3.8):

$ conda list -f pytorch
# packages in environment at /root/local/miniconda:
#
# Name                    Version                   Build  Channel
pytorch                   2.1.0.dev20230608     py3.8_cpu_0    pytorch-nightly
$ python -c "import torch; print(torch._C._GLIBCXX_USE_CXX11_ABI)"
False

From a package installed previously (Ubuntu 16.04, Python 3.7):

$ conda list -f pytorch
# packages in environment at /root/local/miniconda:
#
# Name                    Version                   Build  Channel
pytorch                   1.12.1          cpu_py37h9dbd814_1  
$ python -c "import torch; print(torch._C._GLIBCXX_USE_CXX11_ABI)"
True

@nsmithtt

Is there any update on this issue? Is it possible to offer a cxx11 ABI wheel?

@tuero

tuero commented Jan 12, 2024

Running into this issue as well, working on a mixed Python/C++ codebase which uses libtorch. It would be nice to install pytorch through pip/conda and link against that on the C++ end, but this is currently not feasible with the cxx11 ABI off.
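For reference, a minimal sketch of what that linking looks like today if you accept the wheel's pre-C++11 ABI (the paths and g++ invocation are illustrative; $SITE stands for your site-packages directory):

// pip_torch_link.cpp: an illustrative sketch. Because the pip wheel is built
// with _GLIBCXX_USE_CXX11_ABI=0, every translation unit that touches torch
// must be compiled with the same flag, roughly:
//   g++ -std=c++17 -D_GLIBCXX_USE_CXX11_ABI=0 pip_torch_link.cpp \
//       -I$SITE/torch/include -I$SITE/torch/include/torch/csrc/api/include \
//       -L$SITE/torch/lib -ltorch -ltorch_cpu -lc10 \
//       -Wl,-rpath,$SITE/torch/lib
#include <torch/torch.h>
#include <iostream>

int main() {
  torch::Tensor t = torch::rand({2, 3});  // links only if the ABI flags match
  std::cout << t << std::endl;
  return 0;
}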

Lunderberg added a commit to Lunderberg/tvm that referenced this issue Jan 16, 2024
Currently, the pytorch wheels available through `pip install` use the
pre-C++11 ABI by setting `-DUSE_CXX11_ABI=0` [0].  If TVM were to use
the pre-C++11 ABI, this would cause breakages with dynamically-linked
LLVM environments.

This commit adds a lint check to search for use of `#include <regex>`
in any C++ files.  Use of this header should be avoided, as its
implementation is not supported by gcc's dual ABI.  This ABI
incompatibility results in runtime errors either when `std::regex` is
called from TVM, or when `std::regex` is called from pytorch,
depending on which library was loaded first.

This restriction can be removed when a version of pytorch compiled
using `-DUSE_CXX11_ABI=1` is available from PyPI.

[0] pytorch/pytorch#51039
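For background, a sketch of the kind of call site such a lint check would flag (file and function names are invented for illustration):

// regex_abi_example.cpp: invented example of the pattern being linted out.
// std::regex instantiates templates over std::string internally; when two
// shared objects in one process were built with different
// _GLIBCXX_USE_CXX11_ABI settings, those instantiations can collide at
// runtime even though each library links cleanly on its own.
#include <regex>
#include <string>

bool looks_like_version(const std::string& s) {
  static const std::regex pattern(R"(\d+\.\d+\.\d+)");
  return std::regex_match(s, pattern);  // may crash depending on load order
}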
Lunderberg added a commit to Lunderberg/tvm that referenced this issue Jan 24, 2024
This function should be used instead of `std::regex` within C++ call
sites, to avoid ABI incompatibilities with pytorch.

Currently, the pytorch wheels available through pip install use the
pre-C++11 ABI by setting `-DUSE_CXX11_ABI=0` [0]. If TVM were to use
the pre-C++11 ABI, this would cause breakages with dynamically-linked
LLVM environments.

Use of the `<regex>` header in TVM should be avoided, as its
implementation is not supported by gcc's dual ABI. This ABI
incompatibility results in runtime errors either when `std::regex` is
called from TVM, or when `std::regex` is called from pytorch,
depending on which library was loaded first.  This restriction can be
removed when a version of pytorch compiled using `-DUSE_CXX11_ABI=1`
is available from PyPI.

[0] pytorch/pytorch#51039
Lunderberg added a commit to apache/tvm that referenced this issue Feb 6, 2024
* [Support] Add PackedFunc "tvm.support.regex_match"

This function should be used instead of `std::regex` within C++ call
sites, to avoid ABI incompatibilities with pytorch.

Currently, the pytorch wheels available through pip install use the
pre-C++11 ABI by setting `-DUSE_CXX11_ABI=0` [0]. If TVM were to use
the pre-C++11 ABI, this would cause breakages with dynamically-linked
LLVM environments.

Use of the `<regex>` header in TVM should be avoided, as its
implementation is not supported by gcc's dual ABI. This ABI
incompatibility results in runtime errors either when `std::regex` is
called from TVM, or when `std::regex` is called from pytorch,
depending on which library was loaded first.  This restriction can be
removed when a version of pytorch compiled using `-DUSE_CXX11_ABI=1`
is available from PyPI.

[0] pytorch/pytorch#51039

* [Redo][Unity] Split DecomposeOpsForTraining into two steps

This is a reapplication of #15954,
after resolving the breakages that required reverting in
#16442.  The regex matching is now
implemented without the `#include <regex>` from the C++ stdlib, to
avoid ABI incompatibility with pytorch.

Prior to this commit, the `DecomposeOpsForTraining` transform directly
replaced `relax.nn.batch_norm` into more primitive relax operations.
This required the decomposed form of `relax.nn.batch_norm` to be
duplicated with `DecomposeOpsForInference`.  This commit refactors the
pass to occur in two steps, first to apply training-specific
mutations, and then to decompose.

Having a separate `DecomposeOps` pass also provides a single clear location
for operator decomposition, which may be migrated into the operator
definition in the future, similar to `FLegalize`.
Lunderberg added commits with the same lint-check message to Lunderberg/tvm referencing this issue on Feb 8, Feb 23, Feb 29, Mar 8, and Mar 12, 2024; tqchen pushed one to apache/tvm on Mar 10, 2024; and thaisacs pushed one to thaisacs/tvm on Apr 3, 2024.
@CalaveraLoco

Any news? Running into the same issues here. It really makes it difficult to develop hybrid solutions, and unfortunately it's nigh impossible to convince people around me to use non-stock pip packages as a base.

@njzjz
Contributor

njzjz commented May 15, 2024

There seem to be undocumented cxx11 wheels in download.pytorch.org:

pip install "torch==2.3.0+cpu.cxx11.abi" -i https://download.pytorch.org/whl/
python -c "import torch; print(torch._C._GLIBCXX_USE_CXX11_ABI)"
True

However, there are two limitations: (1) there are no GPU wheels; (2) the wheels don't carry correct manylinux tags (they appear to actually be manylinux_2_28).
