Installation of torch and torchvision not happening with Poetry #64520
Comments
Is this because poetry also considers the local version suffix (the part after the `+`)? This seems to go against the standard Python versioning model of just ignoring what comes after the `+`. Edit: yes, according to PEP 440, poetry should be ignoring these local versions: https://www.python.org/dev/peps/pep-0440/#local-version-identifiers
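The PEP 440 rule can be checked directly with the `packaging` library (a quick sketch; `packaging` is pip's reference implementation of the spec, not something used in this thread):

```python
# Quick check of the PEP 440 local-version rule using the `packaging` library.
from packaging.specifiers import SpecifierSet
from packaging.version import Version

v = Version("1.10.2+cu111")

# The part after "+" is the local version label...
assert v.local == "cu111"
# ...and the public version is what specifiers should be matched against.
assert v.public == "1.10.2"

# A specifier without a local label must ignore candidates' local labels:
assert SpecifierSet("==1.10.2").contains(v)
# Ordering: a local version sorts just above its public counterpart.
assert Version("1.10.2+cu111") > Version("1.10.2")
```

So per the spec, `1.10.2+cu111` satisfies a plain `==1.10.2` constraint, which is the behavior poetry was not honoring here.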
Removing the windows label since this isn't necessarily related to Windows.
If it's really a poetry bug we should file an upstream bug with them.
I remember seeing a similar issue for pytorch 1.8 as well. I will link it here if I see it.
A few issues that I came across: python-poetry/poetry#4231 python-poetry/poetry#2613
python-poetry/poetry#4221 seems like it would address the local version…
We have PEP 503 compliant indices now, so this can be closed.
So is this fixed? I tested the pytorch 1.11.0 cuda 11.3 wheels and it's still not working for me.
I agree that this issue is still present with all versions of pytorch. I don't know whether pytorch or poetry needs to fix this, but I have tried pretty much everything; there is no way to install both torch and torchvision with a specific build (cpu, cuXXX).
@TCherici I have spent a full day (in May) trying all possible ideas I could think of to solve this issue, and found no real clean solution. The least shitty solution I chose is this:

```toml
[tool.poetry.dependencies]
torch = { version = "~1.10.2", optional = true }
torchvision = { version = "^0.11.3", optional = true }

[tool.poetry.extras]
torch = ["torch", "torchvision"]

# Relies on https://github.com/nat-n/poethepoet
[tool.poe.tasks]
install-pytorch = "pip install --force-reinstall --no-deps --no-cache-dir torch==1.10.2+cu111 torchvision==0.11.3+cu111 -f https://download.pytorch.org/whl/torch_stable.html"
```

To create a dev venv on mac we do:
And in the dockerfile to use on the GPU server we do:

```shell
poetry install --no-dev
poe install-pytorch
```
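For context, the GPU path above might sit in a Dockerfile roughly like this (a hypothetical sketch, not from the thread; the base image, file layout, and the virtualenvs setting are my assumptions):

```dockerfile
# Hypothetical sketch of the GPU image described above.
FROM python:3.9-slim
WORKDIR /app

# Install poetry plus the poethepoet task runner used for the override task.
RUN pip install poetry poethepoet
COPY pyproject.toml poetry.lock ./

# Install everything except dev deps; torch comes from PyPI at this point.
RUN poetry config virtualenvs.create false \
 && poetry install --no-dev

# Force-reinstall the CUDA wheels on top (the "install-pytorch" poe task).
RUN poe install-pytorch

COPY . .
```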
@ThomasRobertFr Thank you very much for the thorough explanation!
@TCherici As mentioned in #64520 (comment), the bug is clearly on poetry's side. And I agree it's very annoying for people using pytorch: either you don't use poetry, or you have to use hacks to choose the right cuda version... I just created a new issue there: python-poetry/poetry#5863
I've played around with it and managed to get torch and torchvision to run with poetry. Installation of torch==1.11.0+cu113 and torchvision==0.12.0+cu113:
I am not certain, but I think that it is important to install
Indeed the problem is solved in poetry 1.2. I'm waiting for a stable version though...
@ThomasRobertFr from your submitted issue python-poetry/poetry#5863, it seems you only saw the issue resolved with fixed wheel URLs? I tried with an appropriately configured secondary source:

```toml
[[tool.poetry.source]]
name = "torch"
url = "https://download.pytorch.org/whl/cu113"
default = false
secondary = true
```

which still returns an error hinting at the + tags being used improperly:
Is it simply a question of this working on master, but not the prerelease? Or is it still open?
@Bonnevie Hi, the following file resolved and installed properly for me:

```toml
[tool.poetry]
name = "test-pep-404"
version = "0.1.0"
description = ""
authors = [""]

[tool.poetry.dependencies]
python = "~3.7"
torch = { version = "^1.10.2", source = "torch" }
torchvision = { version = "^0.11.3", source = "torch" }
torchaudio = { version = "^0.10.0", source = "torch" }

[[tool.poetry.source]]
name = "torch"
url = "https://download.pytorch.org/whl/cu113"
default = false
secondary = true

[build-system]
requires = ["poetry_core>=1.0.0"]
build-backend = "poetry.core.masonry.api"
```

Are you sure you're using the poetry 1.2 prerelease? However, with your pyproject I see:
Which is a different error compared to yours, and I'm not getting the same versions; my poetry does not add that suffix. This 403 issue is already reported for poetry: python-poetry/poetry#4885
I needed to install a cuda-compatible torch by passing a suffix, which can be done as explained in pytorch/pytorch#64520 (comment). It took me hours, but the GPU is usable.
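After any of these workarounds, it is worth verifying which build actually landed in the environment. A minimal sketch that inspects the PEP 440 local tag of a version string (the helper name is mine; in practice you would pass `torch.__version__`):

```python
# Minimal sketch: detect whether a torch version string is a CUDA build
# by looking at its PEP 440 local tag (e.g. "1.11.0+cu113" -> "cu113").
from packaging.version import Version

def is_cuda_build(version_string: str) -> bool:
    """Return True if the version carries a CUDA local tag like 'cu113'."""
    local = Version(version_string).local
    return local is not None and local.startswith("cu")

# In practice you would pass torch.__version__ here.
assert is_cuda_build("1.11.0+cu113")
assert not is_cuda_build("1.11.0+cpu")   # CPU-only wheel
assert not is_cuda_build("1.11.0")       # plain PyPI wheel
```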
Hello,
I am trying to install torch and torchvision using Poetry. I am getting the following issue:
Poetry was installed using pip. I have added the following details about my Python version, poetry version, and OS:
Python version: 3.9.6 (CPython)
Poetry version: 1.1.8
OS: Windows 10 (Version: 21H1; OS Build: 19043.1165)
In Poetry, creation of virtual environments is disabled.
My pyproject.toml file looks like this:
Versions
Collecting environment information...
PyTorch version: N/A
Is debug build: N/A
CUDA used to build PyTorch: N/A
ROCM used to build PyTorch: N/A
OS: Microsoft Windows 10 Home Single Language
GCC version: Could not collect
Clang version: Could not collect
CMake version: Could not collect
Libc version: N/A
Python version: 3.9.6 (tags/v3.9.6:db3ff76, Jun 28 2021, 15:26:21) [MSC v.1929 64 bit (AMD64)] (64-bit runtime)
Python platform: Windows-10-10.0.19043-SP0
Is CUDA available: N/A
CUDA runtime version: 11.4.48
GPU models and configuration: GPU 0: NVIDIA GeForce GTX 1050 Ti
Nvidia driver version: 471.96
cuDNN version: C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.4\bin\cudnn_ops_train64_8.dll
HIP runtime version: N/A
MIOpen runtime version: N/A
Versions of relevant libraries:
[pip3] mypy==0.910
[pip3] mypy-extensions==0.4.3
[pip3] numpy==1.21.2
[pip3] torchsummary==1.5.1
[conda] Could not collect
cc @ezyang @gchanan @zou3519 @bdhirsh @jbschlosser @anjali411 @seemethere @malfet @peterjc123 @mszhanyi @skyline75489 @nbcsm