Instructions for installing PyTorch #6409
Comments
Solution 1 seems infeasible when working in a team with machines on different operating systems, since it requires providing the complete URL of the wheel, including operating system and exact version number. Solution 2 seems to work, but it results in downloading every single PyTorch version that can be found, independent of the operating system. I'm running Windows, and a single installation takes around 15-20 minutes at ~250 Mbps. |
Poetry will always download wheels for every platform when you install -- this is because there is no other way to get package metadata from a repository using PEP 503's API. |
Can you elaborate a little on what metadata is needed and why downloading every conceivable version of a package yields that metadata? As mentioned, this leads to a ~20 minute install for one package. |
It's not convenient, but it should be feasible with multiple constraints dependencies. |
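For reference, a hedged sketch of what "multiple constraints dependencies" look like in pyproject.toml (the versions and conditions here are illustrative, not from this thread):

```toml
[tool.poetry.dependencies]
# A list of alternative constraints for one package; Poetry applies
# whichever entry matches the installing environment.
torch = [
    { version = "^2.0", python = ">=3.11" },
    { version = "~1.13", python = ">=3.8,<3.11" },
]
```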
Poetry requires the package's core metadata (the METADATA file inside each wheel); the new PEP 658/691 APIs would let repositories serve that metadata separately. However, Poetry is unlikely to grow support for these new APIs until PyPI does, and I think third-party repos are unlikely to implement them before PyPI. Eventually, support for these APIs will allow for feature and performance parity in Poetry between PyPI and third-party repositories. Until then, we are stuck with the legacy HTML API, which requires us to download every package when generating a lock file for the first time. After your cache is warm you will not need to download again, and on other platforms you will only download the necessary files, as the metadata is captured in the lock file. |
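To illustrate the constraint described above (a sketch, not Poetry's actual code): under the legacy PEP 503 API the index only lists file links, while the dependency list (Requires-Dist) lives inside each wheel's own dist-info/METADATA file, so a resolver must fetch the whole wheel just to read a few kilobytes:

```python
import zipfile

def core_metadata(wheel_path: str) -> str:
    """Read the core METADATA file out of an already-downloaded wheel.

    A wheel is a zip archive; its dependency declarations are only
    available inside it, at <name>-<version>.dist-info/METADATA.
    """
    with zipfile.ZipFile(wheel_path) as whl:
        name = next(n for n in whl.namelist()
                    if n.endswith(".dist-info/METADATA"))
        return whl.read(name).decode("utf-8")
```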
What I'm not understanding is that poetry knows I'm on Linux with Python 3.8, but it still downloads wheels for other platforms. Or does the matching wheel not contain the core metadata that is needed? |
Also, there seems to be a second problem going on here, unless I've misunderstood the documentation. I have the custom repository configured in my pyproject.toml with secondary = true, yet poetry is asking that repository for every package I try to install. I thought from the documentation that secondary meant it would go to PyPI for any package unless specifically asked to use that custom repository. |
Poetry constructs a universal lock file -- we write hashes to the lock file for all supported platforms. Thus on the first machine you generate a lock file, you will download a wheel for every supported platform. There is no way to write hashes to the lock file for those foreign/other platform versions without downloading them first. If you want to reduce the scope of this a bit, you can tighten your Python constraint. There is a prototype of a new feature at #4956 (though it needs resurrection, design, and testing work) to add arbitrary markers to let a project reduce its supported platforms as an opt-in.
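For example (a sketch, assuming your project can drop older interpreters), tightening the Python constraint as suggested shrinks the set of wheels Poetry must download and hash when locking:

```toml
[tool.poetry.dependencies]
# "^3.8" forces hashing cp38/cp39/cp310/... wheels for every platform;
# a single minor version leaves only one wheel per platform to fetch.
python = ">=3.10,<3.11"
```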
I think it might be you misreading -- that is the intended and documented behavior. There is a proposal to introduce new repository types at #5984 (comment), as the current secondary semantics are confusing. |
OK this makes sense now, thanks for the explanation, looking forward to that PR hopefully being merged eventually
I was misreading I see that this is intended behavior
I agree with that and I hope that these new repository types can be implemented |
Can't wait for option 2 to have good performance! |
Please 👍 on issues instead of commenting me too -- it keeps the notifications down and still shows interest. Thanks! |
Poetry is not yet ready to handle the different versions of PyTorch and Torchvision. See the related issue: python-poetry/poetry#6409
Probably a follow-up issue on the second option:
I guess it tries to load that from the secondary repo as well, and expects to use the keyring due to the unauthorized thing? |
That's #1917 -- our use of keyring hits surprisingly many system configurations in which hard errors occur, and it needs some work. |
@neersighted at least PyPI seems to have started work on the new JSON API |
Indeed, Poetry 1.2.2 relies on the new PEP 691 support. However, PEP 658 is the real blocker for better performance in third-party repos -- there is a long-running PR blocked on review and a rather combative contributor, but otherwise no major progress on that front. |
could you add a link to the blocked PR? I switched today to method 1 because method 2 took ages. It seems the meta servers are slow today... Dependency resolution, which takes up to 4000 seconds for method 1, is also insane. And then it failed because I accidentally copied 1.12.0 instead of 1.12.1 for the Windows release. I really like the idea of poetry, but this needs huge improvement. |
We use neither of these approaches. As the GPU versions have the same dependencies as the base version, this should be OK, though it has downsides. The big upside is that it is very easy to create make scripts for different machines, and that it's pretty fast (very important for CI/CD):

[tool.poetry.dependencies]
python = "^3.10"
numpy = "^1.23.2"
torch = "1.12.1"
torchvision = "0.13.1"

install_cu116:
	poetry install
	poetry run pip install torch==1.12.1+cu116 torchvision==0.13.1+cu116 -f https://download.pytorch.org/whl/torch_stable.html |
this has essentially been our approach, but in Dockerfiles, gated by a build arg
This also allows installing the CPU version, a smaller package that lacks the CUDA drivers that come in the normal pytorch package from PyPI. That helps to slim the images down. |
Hi! Do you know if it's possible to specify two different optional versions of torch in the pyproject.toml? I would like to use a CPU version locally and a GPU version on a remote server. You can have a look at an example in this Stack Overflow post. |
That is something not unlike #5222; the consensus has been that as Poetry is an interoperable tool, no functionality will be added to the core project to support this until there is a standards-based method. A plugin can certainly support this with some creativity and would be the immediate "I want Poetry to support this" use case solution in my mind. |
Thanks for your answer, do you have any such plugin in mind? |
I don't have any links at hand, but building on top of Light the Torch has been discussed. But if you mean whether I know of anyone working on one, no, not that I am aware of. |
For as long as |
I don't see that as a Poetry problem -- the issue is that there is an unmet packaging need, and no one from the ML world is working with the PyPA to define how to handle this robustly/no one from the PyPA has an interest in solving it. Likewise, there is no interest in Poetry in supporting idiosyncratic workarounds for a non-standard and marginally compatible ecosystem; we'll be happy to implement whatever standards-based process evolves to handle these binary packages, but in the mean time any special-casing and package-specific functionality belong in a plugin and not Poetry itself. |
Do I understand correctly there is no easy way to use poetry for projects using pytorch and expecting to get cross-platform GPU acceleration? Started a new project, thought that something better than pip would be nice to use for once. Ran through the poetry introduction and basic usage docs, installed poetry, created a new project, added click. Very nice so far. Then tried to add pytorch. Looked at the documentation here: https://python-poetry.org/docs/repositories/

Got the following in pyproject.toml:

[tool.poetry.dependencies]
python = "^3.11"
click = ">=8.1.7"

[[tool.poetry.source]]
name = "torch-cu121"
url = "https://download.pytorch.org/whl/cu121"
priority = "supplemental"

[[tool.poetry.source]]
name = "torch-cu118"
url = "https://download.pytorch.org/whl/cu118"
priority = "supplemental"

[[tool.poetry.source]]
name = "torch-rocm56"
url = "https://download.pytorch.org/whl/rocm5.6"
priority = "supplemental"

[[tool.poetry.source]]
name = "torch-cpu"
url = "https://download.pytorch.org/whl/cpu"
priority = "supplemental"

And got the following after > poetry add --source torch-cu121 torch torchvision:
Using version ^2.1.2+cu121 for torch
Using version ^0.16.2+cu121 for torchvision
Updating dependencies
Resolving dependencies... Downloading https://download.pytorch.org/whl/cu121/torchvision-0.16.2%2Bcu121-cp310-cp310-linux_x86_64.whl 11%
[... dozens of progress lines elided: poetry downloads the cp38/cp39/cp310/cp311 torchvision wheels for both linux_x86_64 and win_amd64, then starts on the multi-gigabyte torch wheels ...]
Resolving dependencies... Downloading https://download.pytorch.org/whl/cu121/torch-2.1.2%2Bcu121-cp310-cp310-linux_x86_64.whl 29% (65.3s)
^C^C^C Killed poetry as soon as I saw what it was trying to download, so I'm not sure if it would have tried to install all of the downloads too. Expected something at least not a lot worse than plain pip. |
Using the following workaround for the setup described in #6409 (comment):
and then installing the dependencies:
Seems to work. Hope there will be a better solution in the near future (: |
a couple of relevant fixes recently
the upshot is that the sequence given in #6409 (comment) when run with the latest master poetry results in locking succeeding in roughly 5 seconds (with downloads of roughly no gigabytes). of course there still are a lot of wheels to download during installation, but there really is nothing much to be done about that. |
Thank you, install worked as expected after poetry update to master branch! |
I tried @mathewcohle's approach, but I get the following, and think it might be related. |
What Python version are you using, and on what platform? EDIT: Upon further testing, the error continues even when no operations are being taken with torch. It seems it was caused by TensorFlow, which was also a dependency. In addition to installing Torch with GPU capability, is it possible to install TensorFlow with CUDA (the and-cuda extra)? |
For me @mathewcohle's approach results in neither version being installed when I run poetry install. The only difference I see is I have two groups, one which uses torch CPU and the other torch GPU. Also, I have a private PyPI repo (AWS CodeArtifact) which contains torch, so I wonder if the lock file is being incorrectly generated, because many packages I use have torch as a dependency. I try to explicitly specify torch as a dependency first (before, say, transformers), but perhaps this is failing. I'm trying to create different environments during my build process: I have a dockerfile which I construct using one group with torch CPU, a local experiments group where I want torch on my workstation with CUDA, and a build group which doesn't require either version (it just registers pipeline steps in SageMaker using the built docker image(s)). |
If you're not aware, I want to mention that TensorFlow no longer supports native Windows with CUDA support. |
A follow-up on my comment: the reason my attempt failed was that I wanted to add torch to optional groups, and I wanted to be able to specify a different architecture per group. (I have a notebooks group for local experiments, a docker group for building AWS pipeline steps, and a build group which runs my build scripts. I don't want to install any version of torch on the build server, the docker container must use the specific version of CUDA shipped in the AWS container, and for local experiments I want to select whatever is appropriate for my workstation.) So far I've not been able to do this; the closest I've got is to install
Is there any way of marking both torch versions optional, and then using extras / markers to install either one, the other, or neither? The issue I see here is there are multiple other groups, some of which contain packages that rely on torch, so I can see where the complexity is. In my case my groups are mutually exclusive, but there's no way of expressing that constraint in poetry. |
So the issue I have is each time I run the install command it swaps between cuda and cpu, i.e.
etc |
@david-waterworth This has already been found out above in this comment. To be fair, I'm not quite clear on why. One thing I have yet to try is to have all torch versions marked as optional and filtered on a specific extra. If this doesn't work, then I don't understand what the extra markers are for. |
I think this is not needed anymore. Removing it also solves some headaches with Poetry (see python-poetry/poetry#6409) as well as allow us to relax our Python interpreter version constraints.
@QuentinSoubeyranAqemia I also think so. I am trying something like:

[tool.poetry.group.remote_cpu]
optional = true

[tool.poetry.group.remote_cuda]
optional = true

[tool.poetry.group.remote_mps]
optional = true

[tool.poetry.group.remote_cpu.dependencies]
torch = {version = "^2.2.0", source = "pytorch-cpu", markers = "extra=='cpu' and extra!='mps' and extra!='cuda'"}

[tool.poetry.group.remote_cuda.dependencies]
torch = {version = "^2.2.0", source = "pytorch-cu121", markers = "extra=='cuda' and extra!='mps' and extra!='cpu'"}

[tool.poetry.group.remote_mps.dependencies]
torch = {version = "^2.2.0", markers = "extra=='mps' and extra!='cuda' and extra!='cpu'"}

[tool.poetry.extras]
cpu = ["cpu"]
cuda = ["cuda"]
mps = ["mps"]

However, this seems not to work. It really looks like the markers are ignored. |
The issue with the marker extra is well known, see #7748 |
@DWarez I think the values in extras are supposed to be package names as well, aren't they? I'm not totally sure how it's supposed to work, or if it's working as expected and we're abusing it. Also, so far the only way I've got this close to working is to add torch as a main dependency; adding it to multiple optional groups always seems to fail.
I was thinking it would also be nice if upstream (torch) was refactored so there was a base package (i.e. |
@david-waterworth I tried a lot of different tricks, including the one you just mentioned, but I still cannot make things work when trying to configure for cpu, cuda and mps. The trick described in #6409 (comment) works; however, it seems that when defining multiple conditions in the markers, some of them (if not all) are ignored. |
@DWarez Note that groups and extras are not the same thing, and you are mixing those up.
Markers are a PyPA specification and thus aren't related to the groups you define. @creat89 Thank you for #7748, it escaped my radar and this seems to be the relevant piece. |
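A hedged illustration of the point above: per the PyPA dependency-specifier spec, "extra" is not an ordinary environment field; the installer fills it in per requested extra, so each evaluation sees a single value. The packaging library can evaluate markers directly (this is a sketch of the semantics, not of Poetry's internals):

```python
from packaging.markers import Marker

m = Marker("extra == 'cuda' and extra != 'cpu'")

# When the installer evaluates with extra="cuda", both clauses hold:
print(m.evaluate({"extra": "cuda"}))  # True
# With extra="cpu", the first clause already fails:
print(m.evaluate({"extra": "cpu"}))   # False
```

Note that "extra != 'mps' and extra != 'cpu'" clauses are therefore redundant when "extra == 'cuda'" is present, since only one value is compared per evaluation.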
Would this be the correct usage of extras and groups? Curious as I'm facing the same problem.

[tool.poetry.dependencies]
torch = {version = "2.1.*", source = "pytorch-cpu", markers = "extra!='cuda'" }
tensorflow = {version = "^2.14.0", markers = "extra!='cuda'"}
...

[tool.poetry.group.gpu]
optional = true

[tool.poetry.group.gpu.dependencies]
torch = {version = "2.1.*", source = "pytorch-cu121", markers = "extra=='cuda'"}
tensorflow = {version = "^2.14.0", extras = ["and-cuda"], markers = "extra=='cuda'"}

[tool.poetry.extras]
# Might be better to rename this to nocpu since it's more accurate
cuda = []

[[tool.poetry.source]]
name = "pytorch-cpu"
url = "https://download.pytorch.org/whl/cpu"
priority = "explicit"

[[tool.poetry.source]]
name = "pytorch-cu121"
url = "https://download.pytorch.org/whl/cu121"
priority = "explicit" |
Adding in an explicit source solved my problem of trying to install just CPU torch. |
pytorch say
|
you can use this command and run pip from poetry:
|
My working pyproject, if it can help anybody... (Poetry 1.8.2). I edited my pyproject.toml with a .ps1 script, then: poetry lock --no-update. I am using a local pypi server, but you can use the configured source(s) regarding your needs:

[tool.poetry]

[tool.poetry.dependencies]
Cython = { path = "C:/AI/POETRYGHOST/Cython-3.0.9-cp310-cp310-win_amd64.whl" }
triton = { path = "C:/AI/POETRYGHOST/triton-2.1.0-cp310-cp310-win_amd64.whl" }
nvidia_cudnn_cu12 = { path = "C:/AI/POETRYGHOST/nvidia_cudnn_cu12-8.9.7.29-py3-none-win_amd64.whl", markers = 'platform_system == "Windows" and sys_platform == "win32"'}
cuda_python = { path = "C:/AI/POETRYGHOST/cuda_python-12.1.0-cp310-cp310-win_amd64.whl" }
torch = { path = "C:/AI/POETRYGHOST/torch-2.2.1+cu121-cp310-cp310-win_amd64.whl", markers = 'platform_system == "Windows" and sys_platform == "win32"'}
torchdata = { path = "C:/AI/POETRYGHOST/torchdata-0.7.1-cp310-cp310-win_amd64.whl" }
xformers = { path = "C:/AI/POETRYGHOST/xformers-0.0.26.dev769-cp310-cp310-win_amd64.whl" }
flash_attn = { path = "C:/AI/POETRYGHOST/flash_attn-2.5.2+cu122torch2.2.0cxx11abiFALSE-cp310-cp310-win_amd64.whl" }
tensorboard = { path = "C:/AI/POETRYGHOST/tensorboard-2.16.2-py3-none-any.whl", markers = 'platform_system == "Windows" and sys_platform == "win32"'}
onnx = { path = "C:/AI/POETRYGHOST/onnx-1.17.0-cp310-cp310-win_amd64.whl", markers = 'platform_system == "Windows" and sys_platform == "win32"'}
tensorrt = { path = "C:/AI/POETRYGHOST/tensorrt-8.6.1-cp310-none-win_amd64.whl", markers = 'platform_system == "Windows" and sys_platform == "win32"'}
stable_fast = { path = "C:/AI/POETRYGHOST/stable_fast-1.0.4+torch221cu121-cp310-cp310-win_amd64.whl" }

[[tool.poetry.source]]
[[tool.poetry.source]]
[[tool.poetry.source]]
[[tool.poetry.source]]
[[tool.poetry.source]]
[[tool.poetry.source]]
[[tool.poetry.source]]
[[tool.poetry.source]]
[[tool.poetry.source]]
[[tool.poetry.source]]

[build-system] |
@alihaskar You could install any package that way. @RGX650 You are installing pytorch using a wheel already on your system, which is already mentioned in the issue OP and does not go through dependency resolution, which is the issue many people have here. By the way, this comment is still an issue for pytorch 2.2.2. It resolves properly for pytorch 2.2.1 though, so maybe the pytorch index has regressed again, as @dimbleby mentioned here. EDIT: Confirmed there are no hashes for 2.2.2 by checking it personally. Has anyone tried using lazy wheels? |
lazy wheel cannot help with missing hashes, missing hashes are missing anyway. talking about this here does no good, you should report it to torch |
@PyroGenesis, when removing the cuda libraries from my pyproject.toml and using source "pytorch-cu121":

PackageName = { version = "PackageVersion", source = 'pytorch-cu121', markers = "platform_system == 'Windows' and sys_platform == 'win32'" }

After poetry install:

(envsystem) C:\AI\POETRY\PROJECTS\poetryghost\envsystem\Scripts>poetry show
absl-py

(envsystem) C:\AI\POETRY\PROJECTS\poetryghost\envsystem\Scripts>pipdeptree
black==24.1a1 |
@RGX650 Sorry, I wasn't clear. By "does not go through dependency resolution" I meant resolution of the pytorch package / wheel itself, not of its dependencies. Also, some users are trying to make use of groups and/or extras to have both the CPU and GPU versions of pytorch detailed in their pyproject.toml. But once you have the pytorch package resolved, its dependencies are relatively smaller and faster to resolve, with or without hashes. So by using a pytorch wheel file directly, you aren't really resolving the pytorch package dynamically, and anyone using your setup will need to have the pytorch wheel file present at the exact path you defined in your pyproject.toml. |
@PyroGenesis Thank you for your time and explanations :) I don't do poetry add, actually.
Then
This way, I skipped using poetry add. I also previously tried different solutions with groups and/or extras in different manners, but it didn't work, and when it did, it seemed like a pretty mess. |
This is working for all platforms (I need only the CPU, but you can add other logic):
|
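The configuration itself was stripped in this transcript; a hedged reconstruction of the general pattern being discussed (versions and source names are illustrative, not necessarily the original) selects a source per platform with environment markers:

```toml
[tool.poetry.dependencies]
# CPU wheels on macOS, CUDA wheels on Linux; one constraint per platform.
torch = [
    { version = "^2.1", source = "pytorch-cpu", markers = "sys_platform == 'darwin'" },
    { version = "^2.1", source = "pytorch-cu121", markers = "sys_platform == 'linux'" },
]

[[tool.poetry.source]]
name = "pytorch-cpu"
url = "https://download.pytorch.org/whl/cpu"
priority = "explicit"

[[tool.poetry.source]]
name = "pytorch-cu121"
url = "https://download.pytorch.org/whl/cu121"
priority = "explicit"
```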
@simjak that's great! A bit verbose, but probably the best we can do today. I think that does exactly what we want, as long as your GPU machines and non-GPU machines are different platforms, like a MacBook (no GPU) and a Linux VM (with GPU). But if you develop on, say, two different amd64 Linux machines, one with a GPU and one without, this will still have you pulling in many gigabytes of nvidia stuff on your non-GPU machine. Your code should still run on both machines, just a bit of unfortunate bloat on the non-GPU machine. The big pain point left for me is CI, where I might be using non-GPU runners and want to install all my dependencies, but don't need CUDA to run tests, formatters, linters, etc. Lots of wasted time pulling in the CUDA stuff on every run. |
Issue
As mentioned in issue #4231 there is some confusion around installing PyTorch with CUDA, but it is now somewhat resolved. It still requires a few steps, and all options have pretty serious flaws. Below are two options that 'worked' for me, on Poetry version 1.2.0.

Option 1 - wheel URLs for a specific platform
Browse the wheel listing (e.g. https://download.pytorch.org/whl/torch_stable.html) and search for your platform tag, such as cu116-cp310-cp310-win_amd64.whl, to see the matches for torch, torchaudio and torchvision. In your pyproject.toml file, add the URLs, then run poetry update.
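The URL entries themselves were elided in this transcript; they follow Poetry's URL-dependency form, e.g. this hedged sketch for one exact platform (filenames illustrative; every OS/Python combination needs its own pinned URL):

```toml
[tool.poetry.dependencies]
# One hard-coded wheel URL per package, valid only for cp310 on Windows.
torch = { url = "https://download.pytorch.org/whl/cu116/torch-1.12.1%2Bcu116-cp310-cp310-win_amd64.whl" }
torchvision = { url = "https://download.pytorch.org/whl/cu116/torchvision-0.13.1%2Bcu116-cp310-cp310-win_amd64.whl" }
```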
It will download a lot of data (many GB) and take quite some time. And this doesn't seem to cache reliably (at least, I've waited 30 minutes+ at 56 Mbps three separate times while troubleshooting this, for the exact same wheels). Note that each subsequent poetry update will do another huge download and you'll see this message:

Option 2 - alternate source
This seems to have worked (although I already had the packages installed), but it reports errors like Source (torch): Authorization error accessing https://download.pytorch.org/whl/cu116/pillow/. I think the packages get installed anyway (maybe a better message would be "Can't access pillow at 'https://download.pytorch.org/whl/cu116', falling back to pypi").

Also, if you later go on to do, say, poetry add pandas (a completely unrelated library), you'll get a wall of messages like:

This happens with or without secondary = true in the source config.

Maintainers: please feel free to edit the text of this if I've got something wrong.