Status of pip wheels with _GLIBCXX_USE_CXX11_ABI=1 #51039
Marking hi-pri due to user activity in the previous issue (which still seems unresolved).
I think the core problem is that _GLIBCXX_USE_CXX11_ABI=1 is not supported by the compiler recommended by PEP 513; see https://github.com/pypa/manylinux
This is actually a bug that's been reported for RHEL / CentOS 7 (devtoolset-7): https://bugzilla.redhat.com/show_bug.cgi?id=1546704 Unfortunately, if we want to provide
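For context, you can check which ABI a given g++ defaults to by dumping the preprocessor macros. The sketch below is an assumption-laden aside, not part of the original discussion: it assumes a g++ with libstdc++ is on the PATH (on the devtoolset compilers discussed above, the macro is forced to 0, which is the limitation in question).

```shell
# Sketch: inspect the compiler's default _GLIBCXX_USE_CXX11_ABI setting.
# <string> pulls in bits/c++config.h, which defines the macro; -E -dM then
# prints every macro defined at the end of preprocessing.
if command -v g++ >/dev/null 2>&1; then
  echo '#include <string>' | g++ -x c++ -E -dM - | grep _GLIBCXX_USE_CXX11_ABI
else
  echo "g++ not found"
fi
```

On a stock modern gcc this prints `#define _GLIBCXX_USE_CXX11_ABI 1`; under devtoolset it prints 0.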
My understanding is this is mostly an issue when you need to use torch with both Python and C simultaneously. Is there a recommended "official/standard" way to get this setup working, other than by building from source or wheel? It sounds like the only options are
To add another clarification on top of the previous clarifications: even moving to the manylinux2014 spec would not work around the hard limitation mentioned above. The upcoming PEP 600 perennial manylinux standard might be the first that would support this, since it would move past CentOS 7 (so, obviously, the wheels would not be expected to work on CentOS 7). All the infrastructure is in place for a manylinux_2_24 standard based on glibc 2.24 (Debian stretch). The work needed to build the docker images is currently in progress; here is one of the active tracking issues: pypa/manylinux#877
What's the best workaround at the moment? I'm not familiar with building pytorch from source, but will of course dive into it if needed. I'm exporting a C++ .so library hoping to import it in Python. It works well on macOS, but I'm running into this issue on Linux machines. Any tips on workarounds?
Would using the conda builds solve this problem?
Sorry if this has already been discussed in previous issues: is there any reason not to build a
Wheels on PyPI must support one of the known formats, so the
In this case, it might make sense to serve the wheels as downloadable files from download.pytorch.org directly, similar to how the LibTorch packages are served. I'm not familiar with the current process of setting up wheel distribution through PyPI for the various platforms, but if there are automated ways of building the pip wheels for the three supported OSes, perhaps some of these can be adapted and reused to build the C++11 wheels for each release (is anyone familiar with this process?). I guess what remains after this is just making the changes to the website (assuming this is the chosen distribution site) and uploading the wheels (plus posting hashes to verify, perhaps?).
Are there wheels available with -D_GLIBCXX_USE_CXX11_ABI=1?
@blackliner, right now -- not unless you build one yourself, from source. It's not a very difficult process, just requires a long time and CPU resources. Took me about half a day all told. But then you also have to build custom wheels for torchvision and torchaudio, depending on what you're using.
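For reference, the from-source route mentioned above can be sketched as follows. This is a hedged sketch, not an official recipe: the `_GLIBCXX_USE_CXX11_ABI` environment variable and the `setup.py bdist_wheel` step are assumptions based on this thread and the general PyTorch build flow, so check the build docs for your release. `DRY_RUN=echo` makes the commands print instead of run.

```shell
# Sketch of building a cxx11-ABI wheel from source (assumptions noted above).
DRY_RUN=echo   # clear this variable to actually run the (long) build
$DRY_RUN git clone --recursive https://github.com/pytorch/pytorch
$DRY_RUN cd pytorch
export _GLIBCXX_USE_CXX11_ABI=1   # assumed to be picked up by the build
$DRY_RUN python setup.py bdist_wheel
```

As the comment above notes, torchvision and torchaudio would need matching custom builds as well.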
Just noting here that pypa/manylinux#877 is now resolved and closed. I'm not sure what the status is on the newer standards, but manylinux2014 seems to have reached end of life at this point.
CentOS 7, which underlies manylinux2014, will reach EOL in 2024-06, so it still has a bit of life left.
There's no reason not to start publishing
PyTorch 1.12.0 provides
I don't think there's an issue with providing two sets of wheels with different ABI. On distros that support
This issue should go into the PyTorch C++ Extension documentation as a warning...
We have added _GLIBCXX_USE_CXX11_ABI=1 support for the Linux CPU wheel, based on the Ubuntu 18.04 docker image (pytorch/builder#990) and CentOS 8 (pytorch/builder#1023).
This is true for pytorch==1.12.1, but pytorch==2.1.0.dev20230608 seems not to be built with CXX11_ABI. The package I installed today (Ubuntu 20.04, Python 3.8)
From a package installed previously (Ubuntu 16.04, Python 3.7):
Is there any update on this issue? Is it possible to offer a cxx11 ABI wheel?
Running into this issue as well, working on a mixed Python/C++ codebase that uses libtorch. It would be nice to install pytorch through pip/conda and link against it on the C++ side, but this is currently not feasible with the cxx11 ABI off.
Currently, the pytorch wheels available through `pip install` use the pre-C++11 ABI by setting `-DUSE_CXX11_ABI=0` [0]. If TVM were to use the pre-C++11 ABI, this would cause breakages with dynamically-linked LLVM environments. This commit adds a lint check to search for use of `#include <regex>` in any C++ files. Use of this header should be avoided, as its implementation is not supported by gcc's dual ABI. This ABI incompatibility results in runtime errors either when `std::regex` is called from TVM, or when `std::regex` is called from pytorch, depending on which library was loaded first. This restriction can be removed when a version of pytorch compiled using `-DUSE_CXX11_ABI=1` is available from PyPI. [0] pytorch/pytorch#51039
This function should be used instead of `std::regex` within C++ call sites, to avoid ABI incompatibilities with pytorch. Currently, the pytorch wheels available through pip install use the pre-C++11 ABI by setting `-DUSE_CXX11_ABI=0` [0]. If TVM were to use the pre-C++11 ABI, this would cause breakages with dynamically-linked LLVM environments. Use of the `<regex>` header in TVM should be avoided, as its implementation is not supported by gcc's dual ABI. This ABI incompatibility results in runtime errors either when `std::regex` is called from TVM, or when `std::regex` is called from pytorch, depending on which library was loaded first. This restriction can be removed when a version of pytorch compiled using `-DUSE_CXX11_ABI=1` is available from PyPI. [0] pytorch/pytorch#51039
* [Support] Add PackedFunc "tvm.support.regex_match" -- this function should be used instead of `std::regex` within C++ call sites, to avoid ABI incompatibilities with pytorch, for the reasons described above.
* [Redo][Unity] Split DecomposeOpsForTraining into two steps -- this is a reapplication of #15954, after resolving the breakages that required reverting in #16442. The regex matching is now implemented without `#include <regex>` from the C++ stdlib, to avoid ABI incompatibility with pytorch. Prior to this commit, the `DecomposeOpsForTraining` transform directly replaced `relax.nn.batch_norm` with more primitive relax operations. This required the decomposed form of `relax.nn.batch_norm` to be duplicated in `DecomposeOpsForInference`. This commit refactors the pass into two steps: first apply training-specific mutations, then decompose. Having a distinct `DecomposeOps` pass also gives a single location for operator decomposition, which may be migrated into the operator definition in the future, similar to `FLegalize`.
Any news? Running into the same issues here. It really makes it difficult to develop hybrid solutions. Unfortunately, it's nigh impossible to convince people around me to use non-stock pip packages as a base.
There seem to be undocumented cxx11 wheels on download.pytorch.org:
pip install "torch==2.3.0+cpu.cxx11.abi" -i https://download.pytorch.org/whl/
python -c "import torch; print(torch._C._GLIBCXX_USE_CXX11_ABI)"
True
However, there are two limitations: (1) no GPU wheels; (2) no correct manylinux tags (it seems to be
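Relatedly, when it's unclear which ABI a given shared object was built with, a quick heuristic (not authoritative, but useful in practice) is to look for the `__cxx11` marker that appears in the mangled names of cxx11-ABI `std::string` and `std::list` symbols. The sketch below demonstrates this on a synthetic file; the file name and the sample mangled name are illustrative assumptions.

```shell
# Heuristic ABI check: cxx11-ABI builds carry std::__cxx11:: in mangled names.
# Demonstrated on a synthetic file; point LIB at e.g. libtorch_cpu.so instead.
LIB=./fake_lib.bin
printf '_ZNSt7__cxx1112basic_stringE' > "$LIB"   # sample cxx11-ABI mangled name
if grep -aq __cxx11 "$LIB"; then                 # -a treats binaries as text
  echo "cxx11 ABI symbols present"
else
  echo "no cxx11 ABI symbols found"
fi
```

A pre-cxx11 build can still legitimately contain a few `__cxx11` strings (e.g. from statically linked dependencies), so treat a match as a hint rather than proof.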
#17492 shows the history of this issue, but it has been closed and buried for a long time. Torch pip wheels are compiled with _GLIBCXX_USE_CXX11_ABI=0, resulting in incompatibility with other libraries.
Is there any sort of status on this?
(Personal motivation: our project takes many hours to compile because we need to compile torch and its dependencies from source. We can't link against the libraries from the pip wheel because of this issue, and forcing everything else to use _GLIBCXX_USE_CXX11_ABI=0 is a huge headache in itself that will only cause more problems over time. Having _GLIBCXX_USE_CXX11_ABI=1 pip wheels would drastically simplify our builds, and I'm sure many others' too.)
cc @ezyang @gchanan @zou3519 @bdhirsh @jbschlosser @seemethere @malfet @walterddr @yf225 @glaringlee