Support for uint16, uint32, and uint64 #58734

Comments
Someone just asked about other uint types on Slack. And from pytorch/vision#4326 (comment):
On 16-bit image support: there are no plans to work on this issue currently I think, unless more demand materializes.
I'll add one :) Another tangible need for uint16 support in torchvision is pytorch/vision#4731. We added support for native 16-bit PNG decoding in torchvision, but we can't make this API public for now, because we output int32 tensors and this wouldn't be compatible with the rest of our transforms.
Bumping this. My research collaborators and I are working on some cryptographic applications where we could really use […]
We have a GIS pipeline that uses image transforms from multiple projects, some of which only support uint, but uint8 leads to too much loss of color information. We could really use uint16 support.
I would appreciate uint16 support! I'm trying to do NLP stuff with a large dataset of tokens between 0 and 51000, and it's annoying to consume double the storage to keep them as int32s (I'm currently storing them as uint16 via HuggingFace, but I need to load them as NumPy and manually convert them).
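To illustrate the storage argument in the comment above, here is a minimal NumPy sketch. The vocab bound of 51000 and the manual widening step come from the comment; the token values themselves are made up, and NumPy is used only because torch does not yet offer uint16:

```python
import numpy as np

# Hypothetical token ids in [0, 51000); they fit in uint16 (max 65535).
tokens = np.random.default_rng(0).integers(0, 51000, size=1_000_000, dtype=np.uint16)

assert tokens.nbytes == 2 * tokens.size                    # 2 bytes per token
assert tokens.astype(np.int32).nbytes == 4 * tokens.size   # int32 doubles storage

# The manual conversion step: widen to a signed dtype that frameworks
# without uint16 support can consume. The widening is lossless.
wide = tokens.astype(np.int64)
assert (wide == tokens).all()
```

Native uint16 tensors would make the widening step unnecessary and halve the in-memory footprint.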
I'm doing work on HDR imaging and we read images from the camera as 16-bit unsigned. It's possible to work around it by using other frameworks, but it would be really useful.
This is exactly the issue my team and I are facing right now.
I'm doing work with DICOM data that is often 10 or even 14 unsigned bits, so uint16 would be very nice for these! My work is focused on speed, so using the smallest possible datatype would be much appreciated.
We should add these dtypes, and then build out support via PT2. We probably aren't going to add kernels for everything but Triton makes it very easy to JIT compile these operations. |
@ezyang would Triton be able to enable CPU support? |
Not Triton per se, but we have a CPU inductor backend, so the answer is yes! |
From Triage review: We still need some limited eager support, e.g. factory functions, conversion functions. Also consideration with autocast? (maybe not too bad?) |
Also, bit ops are only well-defined/standardized on CPUs for unsigned dtypes, if I understand correctly: #105465
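A small sketch of the signed-vs-unsigned distinction the comment above alludes to: right shift is an arithmetic shift (sign bit propagates) for signed integers but a logical shift for unsigned ones. NumPy is used here as a stand-in, since torch has no uint16 to demonstrate with:

```python
import numpy as np

s = np.int16(-2)
u = np.uint16(0xFFFE)   # same 16-bit pattern as -2 in two's complement

# Arithmetic shift: the sign bit is replicated, so -2 >> 1 == -1.
assert int(s >> np.int16(1)) == -1

# Logical shift: a zero is shifted in, so the high bit clears.
assert int(u >> np.uint16(1)) == 0x7FFF
```

Same bits in, different bits out, which is why shift semantics are only fully standardized for the unsigned case.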
uint16 would also be useful for interop with opencv (…)
Hey, torch.uint16 would be good for encoding text into tokens, reducing the memory footprint versus uint32 when the vocab isn't too big.
+1, I also have a language modelling use case where uint16 could save quite some memory. Would be great to have this.
I would also appreciate support for uint. I'm developing image-processing software with libtorch as a backend, and uint support would be very useful: in particular uint16, but uint32 and uint64 would be nice too.
A related issue on having uint16 images:
Some dumb problems we will have to work out.
I think it's better to go ahead and add dtype representations even if meaningful ops are not supported at first and only conversions/casts/reinterprets/restrides are implemented, mainly for expected interop and faithfulness of representation. As long as there is a dedicated docs page for that dtype with its quirks explained, I think it's fine. The same reasoning might apply to the following :)
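The reinterpret/restride case mentioned above can be sketched in NumPy (again only as a stand-in for what a torch uint16 would enable): a zero-copy view of the same 16-bit buffer under a signed and an unsigned interpretation.

```python
import numpy as np

# One 16-bit buffer, two interpretations of the same bits.
buf = np.array([0, 1, 32767, 32768, 65535], dtype=np.uint16)
as_signed = buf.view(np.int16)

# Two's-complement reinterpretation: values >= 2**15 go negative.
assert as_signed.tolist() == [0, 1, 32767, -32768, -1]

# It is a view, not a cast: both arrays share the same memory.
assert buf.ctypes.data == as_signed.ctypes.data
```

This is exactly the kind of conversion-only support that is useful for interop even before arithmetic kernels exist.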
Regarding sum, maybe a transitory option might be to require explicit args specifying the out_dtype and acc_dtype? (It would also be nice to elide temporary full-upcasting allocations.)
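NumPy's `sum(dtype=...)` argument already plays roughly the role of the explicit accumulation dtype proposed above, and shows why it matters for narrow unsigned types (the array contents here are arbitrary):

```python
import numpy as np

a = np.full(1000, 1000, dtype=np.uint16)   # true sum is 1_000_000

# Explicit wide accumulator: correct result.
assert a.sum(dtype=np.uint64) == 1_000_000

# Forcing a uint16 accumulator wraps modulo 2**16, the overflow hazard
# an explicit acc_dtype/out_dtype API would make visible to the caller.
assert a.sum(dtype=np.uint16) == 1_000_000 % 65536   # 16960
```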
The dtypes are very useless right now (not even fill works), but it makes torch.uint16, uint32 and uint64 available as a dtype. Towards #58734 Signed-off-by: Edward Z. Yang <ezyang@meta.com> Pull Request resolved: #116594 Approved by: https://github.com/albanD ghstack dependencies: #116698, #116693
Hi guys! I would like to add another use for uint16 on GPUs: it can be used in designing efficient adaptive sparse optimizers.
When working with torch, the output is often float32. Torch does not have good support for conversion to uint types: pytorch/pytorch#58734. Support float32 and the signed integer types for convenience.
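A hedged sketch of the float32-to-uint16 conversion this comment is about, in NumPy: the [0, 1] value range, the 65535 scale factor, and the round-half-to-even behavior are all assumptions of this example, not anything the issue prescribes. The point is that a bare cast wraps on out-of-range values, so a clip is needed first.

```python
import numpy as np

# Hypothetical float32 model output, some values outside [0, 1].
out = np.array([-0.1, 0.0, 0.5, 1.0, 1.2], dtype=np.float32)

# Clip, scale to the uint16 range, round, then cast.
u16 = np.round(np.clip(out, 0.0, 1.0) * 65535).astype(np.uint16)

# 0.5 * 65535 == 32767.5 rounds to 32768 under round-half-to-even.
assert u16.tolist() == [0, 0, 32768, 65535, 65535]
```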
The array API specification stipulates the data types we need to support to be compliant. Currently we are missing support for uint16, uint32, and uint64.

cc @mruberry @rgommers @asmeurer @leofang @AnirudhDagar @asi1024 @emcastillo @kmaehashi @ezyang @msaroufim @wconstab @bdhirsh @anijain2305 @zou3519 @gchanan @soumith @ngimel