
Suggestions for additional floating-point types #2629

Open
aaronfranke opened this issue Jan 26, 2019 · 71 comments
Labels
A-arithmetic Arithmetic related proposals & ideas A-primitive Primitive types related proposals & ideas T-lang Relevant to the language team, which will review and decide on the RFC.

Comments

@aaronfranke
Contributor

aaronfranke commented Jan 26, 2019

I noticed that, like other languages, the only floating-point types built-in are f32 and f64. However, only having these can be limiting. I propose adding f128, and as mentioned in this thread f16 would likely be very useful for some workloads.

f128 would not be needed in most programs, but there are use cases for it, and it'd be nice to have it as a language built-in type. RISC-V is able to hardware-accelerate it using the Q extension.

f16 is a more efficient type for workloads where you need tons of floats at low precision, like machine learning. Hardware using this is already widespread in Apple's neural engine and in mobile graphics.

Also, if covering IEEE-754 is desired, then there's also f256.

Original text:

I noticed that, like other languages, the only floating-point types built-in are f32 and f64. However, I often have limitations with just these. I propose the following: fsize, freal, and f128

fsize would be like isize but for floats. Basically, use the version that's most efficient for your processor. On modern 64-bit processors with wide FPUs and/or 256-bit SIMD this would become f64.

Sometimes I want to be able to have a variable for real numbers, or I don't know what precision I want yet. In C++ I can do the following to have an abstract precision that I control via compiler flags:

#ifdef REAL_T_IS_DOUBLE
typedef double real_t;
#else
typedef float real_t;
#endif

I propose something similar in Rust, where you can just write freal or something and be able to change the precision later with compiler flags. The default would probably be f32.

Finally, it would be nice to have 128-bit floats (f128) in the language. These are not normally needed, but there are use cases for them, and it'd be nice to have it as a language built-in type. Some newer processors have 512-bit SIMD chipsets that can process these efficiently, though most don't.

If you only implement some of these proposals, that's fine too. Originally posted at rust-lang/rust#57928

@sfackler
Member

fsize would be like isize but for floats. Basically, use the version that's most efficient for your processor.

isize is not the integer type that's most efficient for your processors - it's the integer type that's the same size as a pointer. It's like ptrdiff_t, not int.

I propose something similar in Rust, where you can just write freal or something and be able to change the precision later with compiler flags. The default would probably be f32.

#[cfg(feature = "real_t_is_double")]
type real_t = f64;
#[cfg(not(feature = "real_t_is_double"))]
type real_t = f32;

@jonas-schievink jonas-schievink added the A-primitive Primitive types related proposals & ideas label Jan 26, 2019
@Centril Centril added T-lang Relevant to the language team, which will review and decide on the RFC. A-arithmetic Arithmetic related proposals & ideas labels Jan 27, 2019
@moonheart08

A better suggestion would be f16 support, as it is common in graphics.

@shingtaklam1324

@moonheart08

Is f16 used much in intermediate calculations? I know it is commonly used as a storage format, but the last time I checked (I wrote a Pre-RFC on this on internals a while back, but I'm a bit fuzzy on the details), a lot of the calculations involving f16 on most platforms are done by casting to f32, performing the op, then casting back to f16. If that is the case then having native f16 support may not be that important.

Adding the ability to use the F16C instructions may be useful to have in core::arch though, perhaps something like __m128h which has 8 "f16"s.
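The widen-compute-narrow pattern described above can be sketched in plain Rust. This is an illustrative software model, not a real API: convert the f16 bits to f32, do the arithmetic in f32, and round back with ties-to-even. Because f32 carries more than twice f16's precision (24 vs 11 significand bits), a single add/sub/mul/div done this way still produces the correctly rounded f16 result.

```rust
/// Interpret IEEE 754 binary16 bits as an f32 value.
fn f16_to_f32(h: u16) -> f32 {
    let sign = ((h >> 15) as u32) << 31;
    let exp = ((h >> 10) & 0x1f) as u32;
    let frac = (h & 0x3ff) as u32;
    let bits = match exp {
        0 if frac == 0 => sign, // signed zero
        0 => {
            // f16 subnormal: renormalize into f32's larger exponent range
            let mut e = 113u32; // 127 - 15 + 1
            let mut f = frac;
            while f & 0x400 == 0 {
                f <<= 1;
                e -= 1;
            }
            sign | (e << 23) | ((f & 0x3ff) << 13)
        }
        0x1f => sign | 0x7f80_0000 | (frac << 13), // inf / NaN
        _ => sign | ((exp + 112) << 23) | (frac << 13),
    };
    f32::from_bits(bits)
}

/// Round an f32 to the nearest binary16 (ties to even).
fn f32_to_f16(x: f32) -> u16 {
    let bits = x.to_bits();
    let sign = ((bits >> 16) & 0x8000) as u16;
    let exp = ((bits >> 23) & 0xff) as i32;
    let frac = bits & 0x007f_ffff;
    if exp == 0xff {
        // propagate inf/NaN (collapsing the NaN payload to one bit)
        return sign | 0x7c00 | if frac != 0 { 0x200 } else { 0 };
    }
    let e = exp - 127 + 15; // rebias into f16's exponent range
    if e >= 0x1f {
        return sign | 0x7c00; // overflow -> infinity
    }
    if e <= 0 {
        if e < -10 {
            return sign; // underflows to zero
        }
        // f16 subnormal: shift out low bits with round-to-nearest-even
        let m = frac | 0x0080_0000; // restore the implicit leading 1
        let shift = (14 - e) as u32;
        let rounded = (m + (1u32 << (shift - 1)) - 1 + ((m >> shift) & 1)) >> shift;
        return sign | rounded as u16;
    }
    // normal: round the 23-bit mantissa to 10 bits, ties to even
    let m = frac + 0x0fff + ((frac >> 13) & 1);
    if m & 0x0080_0000 != 0 {
        // mantissa rounded up past 1.111...; carry into the exponent
        return if e + 1 >= 0x1f { sign | 0x7c00 } else { sign | (((e + 1) as u16) << 10) };
    }
    sign | ((e as u16) << 10) | ((m >> 13) as u16 & 0x3ff)
}

/// The widen-op-narrow pattern: still correctly rounded for single ops.
fn f16_add(a: u16, b: u16) -> u16 {
    f32_to_f16(f16_to_f32(a) + f16_to_f32(b))
}

fn main() {
    assert_eq!(f16_to_f32(0x3c00), 1.0); // 0x3c00 is 1.0 in binary16
    assert_eq!(f32_to_f16(1.5), 0x3e00);
    assert_eq!(f16_add(0x3c00, 0x3c00), 0x4000); // 1.0 + 1.0 == 2.0
}
```

This is essentially what the half crate and compiler soft-float lowering do; native f16 support would avoid the conversion round trips for chains of operations.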

@Coder-256

Coder-256 commented Mar 3, 2019

How about long double and 128-bit floats? I could be wrong, but I'm 99% sure that we currently unavoidably lose precision when using long doubles from C. On my computer (macOS), bindgen outputs f64, but sizeof(long double) in C outputs 16 bytes. (128 bits; for alignment I guess?).

(On a side note, is that even safe behavior? What about C functions that take long double *?)

@aaronfranke
Contributor Author

@Coder-256 In C++, long double is 64-bit on Windows, 80-bit in MinGW, and 128-bit on Mac and Linux (probably indeed for alignment, as I don't think anyone implements it as quadruple precision).

@Coder-256

@aaronfranke Could you please clarify what you mean? What I was trying to say is that Rust currently does not have any support for floats larger than 64 bits (8 bytes), for example, long double on certain platforms. I was also trying to point out that in addition to having limited precision within Rust code, this makes it difficult to interact with native code that uses large floats, such as using FFI with C code that uses floats larger than 64 bits.

There was also a separate issue with bindgen that caused float sizes to be incorrect for large floats, but that has been fixed (in rust-lang/rust-bindgen@ed6e1bb).

@aaronfranke
Contributor Author

I wasn't disagreeing with you, I was just adding information. Sorry if I wasn't clear. f128 would be great.

@Coder-256

@aaronfranke I absolutely agree, both f128 and f80 would be very useful, especially for FFI (for example, Swift already has Float80 mainly for communicating with old C code, just an example to show how it could help)

@lygstate

Old things never really go away, and I want to push this. Rust is a systems language, not a scripting language, so it needs to be compatible with old things.

@lygstate

I want to push for adding support for fp80 and fp128... is any help needed?

@lygstate

Like rust-lang/rust#38482 does

@thomcc
Member

thomcc commented Oct 15, 2020

Basically, use the version that's most efficient for your processor. On modern 64-bit processors with wide FPUs and/or 256-bit SIMD this would become f64.

Even on modern x86 which has similar or equal speed between most f32 and f64 ops, f32 is still very much the fastest for your processor because it cuts cache misses in half.

Sometimes I want to be able to have a variable for real numbers, or I don't know what precision I want yet. In C++ I can do the following to have an abstract precision that I control via compiler flags:

#[cfg(real_is_f64)]
type real = f64;
#[cfg(not(real_is_f64))]
type real = f32;

then you can control via RUSTFLAGS="--cfg real_is_f64" (you can also use cargo features, but they're not a great fit for cases where enabling a feature can cause compile errors like this)

... Regarding suggestions of f80

What would f80 do on platforms that aren't x86? Nothing else has native 80-bit floats. It's not even part of IEEE 754 (even though it's largely a natural extension of it... although it has a lot of quirks). This is something that would be viable in core::arch::{x86,x86_64} but isn't portable. We don't want to have to implement these as software floats on other platforms.

I'd be in favor of a std::os::raw::c_long_double type but it would have to be carefully designed. Note that PPC's long double is exceptionally cursed, as it's a pair of doubles that are summed together...

I'd be in favor of f16, and tentatively f128 since binary128 is part of IEEE754 2019, at least.

EDIT: I hadn't noticed that sfackler said the exact same thing as my first point >_>

@lygstate

lygstate commented Oct 15, 2020

What would f80 do on platforms that aren't x86? ... We don't want to have to implement these as software floats on other platforms.

The fact is that f80 is broadly used, and in the foreseeable future that will continue. We don't need a soft f80 implementation; making f80 work on x86 platforms would be enough. That said, a soft f80 may be a better option for cross-platform consistency.

@programmerjake
Member

several architectures have hardware support for f128: RISC-V, PowerPC, s390, and probably more.

@lygstate

lygstate commented Oct 15, 2020

several architectures have hardware support for f128: RISC-V, PowerPC, s390, and probably more.

For platforms that have f128, implementing f80 on top of it would not cause a significant performance drop.

@aaronfranke
Contributor Author

aaronfranke commented Oct 15, 2020

@thomcc These are all ideas; not everything in the OP is still relevant now that it has been discussed. I think fsize and freal have been discussed and dismissed: fsize is a bad idea given the information in this thread, and freal is easy enough to implement in a few lines of code that it doesn't need to be in the language.

That said, f128 is still definitely desired and has some use cases and some hardware support. f80 would be neat, though I wouldn't use it personally. f16 would be useful, especially in the context of low-end graphics, though I also wouldn't use it myself. And if your goal is to cover IEEE 754, there is also f256 (octuple precision), though it's rare to see.

@lygstate


Maybe we can add f16, f80, and f128 in a single shot?

@workingjubilee
Contributor

f16 has uses in neural networks as well.

There are actually many problems with using f80, especially if we do not ship a soft-float implementation to cover it: it would not be a type defined by an abstraction, frankly, but a type defined by Intel's hardware quirks, and we would only be adding more on top of it. One of the nice things about Rust is that it is highly portable right now, so I do not think it makes sense to add such a non-portable type to the language and limit portability that much, though a language extension that makes it simpler to define and use such a non-portable type would make sense.

@thomcc
Member

thomcc commented Oct 25, 2020

several architectures have hardware support for f128: RISC-V, PowerPC, s390, and probably more.

I can't say for sure about the other arches, but PowerPC's is not IEEE-754-like at all — it's double-double. It would not help for implementing a sane f128 nor would it help implement a f80.

For platform have f128, implmenet f80 would not cause significant performance down

I don't think this is really true (we can quibble over significant, I guess), but regardless rust doesn't exclusively target architectures in the sets {have native f80}, {have native f128}, so something that solves this for other architectures needs to be considered.

if your goal is to cover IEEE 754 there is also f256 or octuple precision, though it's rare to see.

I mean, it's not mentioned in IEEE-754 2019. It's not hard to imagine what it looks like, admittedly.


Anyway, I think once inline asm is stable, someone who really wants f80 could implement it as a library on x86/x86_64. This wouldn't solve the issue of FFI (e.g. a c_long_double type), which I still think would be nice to solve, but that has a lot of different design considerations; it could just be a mostly-opaque type that includes little more than implementations of From<f64>/Into<f64> (e.g. no arithmetic).

@programmerjake
Member

@thomcc

several architectures have hardware support for f128: RISC-V, PowerPC, s390, and probably more.

I can't say for sure about the other arches, but PowerPC's is not IEEE-754-like at all — it's double-double. It would not help for implementing a sane f128 nor would it help implement a f80.

You're thinking of C's long double type; PowerPC does support IEEE-754 standard binary128 FP using new instructions added in Power ISA v3.0.
Quoting GCC 6's change log:

PowerPC64 now supports IEEE 128-bit floating-point using the __float128 data type. In GCC 6, this is not enabled by default, but you can enable it with -mfloat128. The IEEE 128-bit floating-point support requires the use of the VSX instruction set. IEEE 128-bit floating-point values are passed and returned as a single vector value. The software emulator for IEEE 128-bit floating-point support is only built on PowerPC GNU/Linux systems where the default CPU is at least power7. On future ISA 3.0 systems (POWER 9 and later), you will be able to use the -mfloat128-hardware option to use the ISA 3.0 instructions that support IEEE 128-bit floating-point. An additional type (__ibm128) has been added to refer to the IBM extended double type that normally implements long double. This will allow for a future transition to implementing long double with IEEE 128-bit floating-point.

@thomcc
Member

thomcc commented Oct 26, 2020

Thanks, you're correct that I was thinking of the PPC long double (__ibm128) type. Unfortunately, I think the existence of 2 separate 128-bit "floating point" types on powerpc only complicates things, although it's nice that at least one of them is moderately sane.

@eprovst

eprovst commented Nov 6, 2020

Full(er) support for IEEE 754 would indeed be very welcome, especially for numerical work.

What would f80 do on platforms that aren't x86? Noting else has native 80 bit floats. It's not even part of IEEE 754 (even though it's largely natural extension of it... although it has a lot of quirks).

This is somewhat false: x86's 80-bit floats are extended-precision binary64s as specified by IEEE 754.

However, it's true that these are not very strictly defined: an extended-precision binary64 only has to have larger precision than binary64 and the exponent range of binary128. This means that both x86's 80-bit floats and binary128 are examples of valid extended-precision binary64s.

I'd suggest providing the following types:
f16 (binary16), f32 (binary32), f64 (binary64), f64e (binary64 extended) and f128 (binary128).

On x86 platforms, and others that have a native extended-precision binary64, f64e would be an 80-bit float or similar; on all others it would be the same as f128.

[Edit: further clarified in the relation between 80-bits floats and IEEE 754.]

@workingjubilee
Contributor

So, on the other side of "portable" is "layout". We have a lot of ambiguous-layout types, but those are not primitive types. As far as I am aware, all the primitive types have a pretty explicit layout, and many of the std composite data types like Vec etc. have most of their layout dialed in as well. Here we'd have two possible layouts for a numeric type that should be as simple as possible, and f64e is probably the wrong abstraction here because there are a lot of cases where someone wants "type N that fulfills X, or else type M that fulfills a superset of X", especially for math libs.

@eprovst

eprovst commented Nov 7, 2020

I'm not too sure what you mean by 'layout' in this case; it's true that extended-precision floats do not have to conform to a specific bit format. If you mean the memory layout of complex data types, I'm not sure there are any guarantees here anyway, as I wouldn't be surprised if optimisation passes can and do change these kinds of layouts.

I didn't give much thought to the syntax of f64e, something like ExtendedPrecision<f64> might indeed be the better choice here, which also neatly extends to the other fxx's.

Most do seem to agree on including all the common IEEE 754 types, which is, I think, the main goal of this issue. Something similar to Fortran's selected_real/integer_kind could also be looked at, but should probably be moved to another issue.

I'd have to check Rust's current support for other parts of IEEE 754 first. There are very few languages with good support for the hardware's capabilities in this area and those that do tend to be rather unsafe. Numerical analysis and other scientific computing do seem to be a great fit for Rust, so I think it's worth looking into this.

[Edit: typos and clarification]

@programmerjake
Member

I would expect f64e to be directly equivalent in bit representation, ABI, and layout to C/C++'s long double except in cases like MSVC on x86_64 where they pick long double == double even though f80 is still usable from a hardware level. There would be another type alias c_long_double for exact equivalence to long double on all platforms with an ABI-compatible C compiler and when the long double type is supported by Rust (so, probably excluding PowerPC's annoying double-double type for the MVP).

One interesting side-note: PowerPC v3.0 includes an instruction for converting float types to f80, though I think that's the only supported operation.

f128 would be directly equivalent to gcc/clang's __float128 type where supported.

@programmerjake
Member

One interesting side-note: PowerPC v3.0 includes an instruction for converting float types to f80, though I think that's the only supported operation.

Turns out that the only supported f80 operation is xsrqpxp, which rounds an f128 value to f80 precision but leaves it in f128 format. That's useful for implementing f80 arithmetic operations since, for all of add, sub, mul, div, and sqrt, if all inputs are known to be f80 values in f128 format, you can produce the exact f80 result value in f128 format by:

  1. run the add, sub, mul, div, or sqrt operation for f128 in round to odd mode
  2. run the xsrqpxp instruction in the desired rounding mode for the f80 operation

This is similar to how f32 arithmetic can be implemented in JavaScript (which only has the f64 type for arithmetic) by rounding to f32 between every operation.
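The JavaScript analogy can be sketched in Rust: emulate f32 arithmetic using only f64 operations by rounding back to f32 after every step. This is exact because f64's 53 significand bits exceed 2 × 24 + 2, so the double rounding is innocuous for single add/sub/mul/div/sqrt operations (the same reason Math.fround works in JS).

```rust
// Emulating f32 arithmetic with f64 ops plus a rounding step between
// operations, as described above. Function names are illustrative.
fn f32_add_via_f64(a: f32, b: f32) -> f32 {
    ((a as f64) + (b as f64)) as f32
}

fn f32_mul_via_f64(a: f32, b: f32) -> f32 {
    // The product of two 24-bit significands fits exactly in f64's
    // 53 bits, so the final cast performs the only rounding.
    ((a as f64) * (b as f64)) as f32
}

fn main() {
    let (a, b) = (0.1_f32, 0.2_f32);
    // Matches native f32 arithmetic bit-for-bit.
    assert_eq!(f32_add_via_f64(a, b), a + b);
    assert_eq!(f32_mul_via_f64(a, b), a * b);
}
```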

@eprovst

eprovst commented Nov 14, 2020

[...] that's useful for implementing f80 arithmetic operations [...]

No need to, ExtendedPrecision<f64> would simply be f128 on targets that do not have a native extended double format.

In many languages computations with floating point numbers aren't guaranteed to be identical on different targets. On x86_64, for instance, doubles were/are often stored in 80-bit registers, it's only when they are written to memory that they are truncated to 64 bits. In strict mode the JVM thus has to write every floating point value back to memory between operations to guarantee identical results on different architectures.

[Edit: formulation was ambiguous.]

@tgross35
Contributor

tgross35 commented May 4, 2022

Just asking as a curious observer - has an official RFC for this gotten any movement? The only pre-rfc I can find is this one which has been long closed.

Recently I stumbled into the pain of varying double/long double support in C and was wondering if Rust outdoes it.

@hamdav

hamdav commented Aug 6, 2022

I would just like to say that I would love to have f128 support in rust as well. It can be useful, and even necessary, for some scientific computations.

@VariantXYZ

Opened an issue without realizing f16 was covered here.

I think there are plenty of reasons to support f16 as a native arithmetic type in Rust, but my primary use-case is ML inference for hardware that supports fp16 arithmetic (e.g. the Cortex-a55).

I've resorted to writing simple functions (multiply-add, dot products, etc.) that operate on _Float16 values in C and calling them, because the half crate's conversion cost is really painful for anything low-latency/high-frequency (audio processing). It is... far from efficient.

My understanding is that _Float16 is a portable arithmetic type, defined in the C11 extension ISO/IEC TS 18661-3:2015, so it would be nice if Rust exposed something similar.

@aaronfranke
Contributor Author

aaronfranke commented Dec 24, 2022

On the topic of hardware support, I'll add that RISC-V's Q extension provides quadruple-precision floats, so if Rust added f128 then it could be hardware accelerated on those systems, for example rv64gqc systems.

Even without hardware acceleration, it would still be useful to have this much precision available via software emulation at the language and standard library level.

@duplexsystem

f16 and f128 would be particularly useful in combination with std::simd, especially for running Rust on things like a GPU.

@rdrpenguin04

f128 would be useful for calculators that require advanced precision; I'm actually blocked on such a type existing at a critical point in development.

@programmerjake
Member

f128 would be useful for calculators that require advanced precision; I'm actually blocked on such a type existing at a critical point in development.

if you need high precision floats but can't wait, you can use https://docs.rs/rug/1.19.2/rug/struct.Float.html

@tgross35
Contributor

At this point I think this more or less just needs a RFC, right? (No I am not volunteering to write it)

I think that most everyone here would be on board with a minimal implementation like this:

/// Available on platforms that support f16
/// ARM and AArch64 have this to my knowledge with __fp16
#[cfg(target_has_f16)]
f16;

/// Available on platforms that support true 128-bit floats
#[cfg(target_has_f128)]
f128;

/// Exact semantics of c `long double`
core::ffi::c_longdouble;

/// ...maybe? I don't know enough about it
core::ffi::c_bfloat16;

And just not supporting 80-bit fake f128 as a native rust type, only via c_longdouble.

Not positive about f16, as it hasn't been discussed much here. Apple's M1 is the only mainstream platform I know of that has a half-precision unit in the CPU.

@aaronfranke
Contributor Author

C long double is not suitable to use as-is because it is usually not 128-bit. For example it's 64-bit on Microsoft platforms, equivalent to double. I don't think Rust has to depend on C's types, right?

@tgross35
Contributor

Not depend on, but be able to interface with. E.g. core::ffi::{c_long, c_ulong} are defined as i32/u32 on Windows and i64/u64 on Linux.

So providing c_longdouble would give a way for Rust <-> C FFI, plus provide an escape hatch for platforms that have 80-bit precision but not true 128-bit. Good point about Microsoft though - what is their 128-bit type?
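The existing aliases show how core::ffi already papers over platform differences, which is the precedent a c_longdouble alias would follow:

```rust
// c_long is i32 on 64-bit Windows but i64 on 64-bit Linux/macOS;
// code written against the alias stays portable. A c_longdouble
// alias (hypothetical here) would extend the same pattern to floats.
use core::ffi::{c_long, c_ulong};
use core::mem::size_of;

fn main() {
    println!("c_long is {} bytes on this target", size_of::<c_long>());
    // Signed/unsigned pairs always match in width.
    assert_eq!(size_of::<c_long>(), size_of::<c_ulong>());
}
```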

@aaronfranke
Contributor Author

GCC and Clang/LLVM support __float128 as a compiler-specific keyword for 128-bit floats. As far as I know MSVC does not support 128-bit floats at all, so you have to use a library if you want 128-bit floats. There isn't yet a standardized keyword in C for 128-bit floats (and as mentioned long double won't work).

@programmerjake
Member

There isn't yet a standardized keyword in C for 128-bit floats (and as mentioned long double won't work).

afaict the standardized C keyword is _Float128: https://en.cppreference.com/w/cpp/types/floating-point

@tgross35
Contributor

I was imagining that target_has_f128 would only be true if (1) there is a __float128 or similar, and (2) it is truly f128 (not 80 bit). So in this case, you just couldn't use f128 on Windows with MSVC if it doesn't support it - similar to the target_has_atomic gates.
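The target_has_atomic precedent works like this today (it is a stable cfg); a hypothetical target_has_f128 gate would presumably behave the same way, with gated items simply not existing on targets that lack the capability:

```rust
// Capability-gated code via cfg(target_has_atomic), the precedent
// cited above. On targets without 64-bit atomics this function is
// compiled out entirely, just as f128 items would be under a
// hypothetical cfg(target_has_f128).
#[cfg(target_has_atomic = "64")]
fn next_id(counter: &std::sync::atomic::AtomicU64) -> u64 {
    counter.fetch_add(1, std::sync::atomic::Ordering::Relaxed) + 1
}

fn main() {
    #[cfg(target_has_atomic = "64")]
    {
        let c = std::sync::atomic::AtomicU64::new(0);
        assert_eq!(next_id(&c), 1);
        assert_eq!(next_id(&c), 2);
    }
}
```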

@tgross35
Contributor

tgross35 commented Apr 19, 2023

The wiki page actually has a good summary on this, under the "computer language support" section. I suppose that f128 could be allowed anywhere that C's _Float128 is available, with the same rules/targets. Same story for f16/_Float16 (thanks Jake for pointing out the canonical names).

I guess the use of c_longdouble in an initial RFC would be debatable, since it would mean a possibly 80-bit float that's otherwise not available - but something is still kind of needed for FFI. Maybe c_longdouble could only be available on targets where it cleanly maps to f64 or f128?

I think somebody just needs to take a stab at writing an RFC and trim out features that can't reach consensus. I haven't seen anybody arguing things like "rust should not have a f128/f16 type at least on platforms that support real f128/f16", so it's just a matter of structuring the details into something better than a long github issue discussion :)

You've been super active on this issue for 4 years @aaronfranke - do you maybe want to write it up? Should be a pretty easy RFC. Just fill out this template https://github.com/rust-lang/rfcs/blob/master/0000-template.md and create a PR to that repo (no pressure of course)

@tgross35
Contributor

tgross35 commented Apr 19, 2023

Unfortunate note - I think the LLVM bug that causes this rust-lang/rust#54341 might be applicable here too. But this doesn't change anything with respect to next steps edit: I take this back - seems like this might actually not be a problem because it seems to only affect llvm integers, based on the LLVM patch submissions

@ecnelises
Contributor

ecnelises commented Apr 20, 2023

Some questions/notes regarding float128 (also applicable to float16):

Which 'long' float type should be supported?

According to LLVM, there are three float types longer than double in IR: fp128, ppc_fp128, x86_fp80. Only the first is IEEE-conformant, while ppc_fp128 is a legacy format only available on PowerPC (see my answer on StackOverflow), and x86_fp80 is also non-standard and only available on x86.

__float128 is available on i386, x86_64, IA-64, and hppa HP-UX, as well as on PowerPC GNU/Linux targets that enable the vector scalar (VSX) instruction set. __float128 supports the 128-bit floating type. On i386, x86_64, PowerPC, and IA-64 other than HP-UX, __float128 is an alias for _Float128. On hppa and IA-64 HP-UX, __float128 is an alias for long double.

__float80 is available on the i386, x86_64, and IA-64 targets, and supports the 80-bit (XFmode) floating type. It is an alias for the type name _Float64x on these targets.

__ibm128 is available on PowerPC targets, and provides access to the IBM extended double format which is the current format used for long double. When long double transitions to __float128 on PowerPC in the future, __ibm128 will remain for use in conversions between the two types.

GCC Document 'Floating Types'

Although many targets still do not support fp128, we should use this one as the primitive f128 type to avoid ambiguity, and leave the other two to core::arch or other target-related parts.

Which targets should support f128

Very few architectures support hardware instructions for IEEE float128 (I can only recall PowerPC after Power9?), but compiler-rt provides a complete set of routines for lowering these float operations, as long as the target ABI accepts such a type (e.g. storing it in a 128-bit vector register). This varies not only by architecture, but also by OS, vendor, or ABI.

In clang, not every target with a type of this width accepts __float128; if we force emitting fp128 in IR, the backend may crash and interop with C may break. So each target should decide this for itself.

// clang f128.c -S -emit-llvm -O -o - -mfloat128 -target <YOUR_TARGET>
__float128 foo(__float128 a, __float128 b) { return a + b; }

And yes, we need something like #[cfg(target_has_f128)] to guard it.

Inter-op with C

We need to define separate types corresponding to __float128, _Float128, and long double in libc (their meanings may vary between targets!).

It should be specially noted that the definition of long double is really a mess. In clang/GCC, options can be used to control the semantics of long double, like -mlong-double-64 and -mabi=ieeelongdouble (on PowerPC). We need to use or create cfg directives to differentiate the different cases, carefully.

Backend support

I believe LLVM and GCC support the additional types well. But cranelift does not support them: https://github.com/bytecodealliance/wasmtime/blob/main/cranelift/docs/ir.md#floating-point-types . Will this be an issue?


I'd like to write such an RFC as my first one if no other volunteer raises hands. :)

@tgross35
Contributor

tgross35 commented Apr 20, 2023

Although many targets still do not support fp128, we should use this one as the primitive f128 type to avoid ambiguity, and leave the other two to core::arch or other target-related parts.

...

but compiler-rt provides a complete set for lowering these float operations, as long as the target ABI accepts such 'type' (like storing it in 128-bit vector register). Not only by architecture, but also by OS, vendor, or ABI.

This all sounds totally reasonable 👍

I believe LLVM and GCC support the additional types well. But cranelift does not support them: [link]. Will this be an issue?

I don't think that any sort of decisions like this are typically blocked on cranelift support. I suspect they will quickly add support if they know Rust will soon have f128 and f16, but probably just haven't had a reason to do it until now.

I'd like to write such an RFC as my first one if no other volunteer raises hands. :)

Go for it! It sounds like you certainly have the knowledge to write it. It's a team effort anyway, you can create a draft PR as soon as you have some of it typed up, and everyone can help finish/polish it (link it here whenever you do)

@aaronfranke
Contributor Author

aaronfranke commented Apr 20, 2023

Which 'long' float type should be supported?

f128 would be IEEE quadruple-precision only. The behavior MUST be standardized and consistent between architectures. It should NOT use 80-bit floats or double-doubles ever.

Very few architectures support hardware instructions for IEEE float128

RISC-V has native hardware support for IEEE quadruple-precision 128-bit floats via the Q extension. Most other CPU architectures will need to use software emulation; I would look into how C/C++ _Float128 does it.

It should be specially noted that the definition of long double is really a mess. ... We need to use or create cfg directives to differentiate the different cases, carefully.

There is no configuration, the rule should be that we forget about long double. As mentioned above it's not a useful type because it's not consistent across target platforms. It's out-of-scope for a f128 RFC proposal.

I'd like to write such an RFC as my first one if no other volunteer raises hands. :)

That would be great! (also small note, my text above isn't disagreeing with you, just clarifying)

@thomcc
Copy link
Member

thomcc commented Apr 20, 2023

Note that compiler-rt isn't always available, so we'd still have to port implementations of these for compiler-builtins. It would be good to avoid more situations where this is assumed, which causes significant pain.

@lygstate
Copy link

There is no configuration, the rule should be that we forget about long double. As mentioned above it's not a useful type because it's not consistent across target platforms. It's out-of-scope for a f128 RFC proposal.

long double is for ABI compatibility with libc only. Maybe all we need is conversion functions between long double and f128, with the full functionality implemented on f128; I think that would be enough. Even if there's a performance hit on x86 or PPC double-double, at least we won't have linkage errors.
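The scheme lygstate describes could be sketched like this (my own pseudocode with hypothetical function names, not an actual API): long double exists only at the libc/FFI boundary, and all real arithmetic happens in f128.

```
// Pseudocode sketch (hypothetical names): `long double` lives only at the
// libc/FFI boundary; everything else uses f128.
long_double_to_f128(x: c_longdouble) -> f128   // widening; lossless for x86 80-bit
f128_to_long_double(x: f128) -> c_longdouble   // narrowing; may round
```

Note the widening direction is exactly lossless for x86 80-bit extended precision (64 significand bits fit in binary128's 113), while PPC double-double has exponent-gap cases that don't map exactly, so only the round trip starting from long double is guaranteed on x86.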

@ecnelises
Copy link
Contributor

ecnelises commented May 18, 2023

Hi there, here is my initial draft for the planned RFC: https://github.com/ecnelises/rust-rfcs/blob/additional-floats/text/0000-additional-float-types.md

I know the author has a chance to revise it before the decision period, but I'd like to gather some basic comments before it goes into a real RFC, since this is my first attempt at writing a Rust RFC. Thanks!

@tgross35
Copy link
Contributor

tgross35 commented May 18, 2023

@ecnelises are you able to open a PR to the RFC repo? It's easier to provide feedback that way.

Quick review:

  • The reference-level explanation section could use more detail on e.g. C interop and architecture support. Specifically things like "f128 will be available on architectures that have hardware support or can emulate true IEEE-xxx 128-bit floats without using 80-bit extended precision, including: ...". I'd also break up the "reference-level explanation" section into f16, f128, and FFI components to help keep things organized.
  • Specifying core::ffi::c_longdouble (only when it's exactly f64 or f128) is useful - these varying types are exactly what core::ffi is for.
  • I'm not sure whether any support for an f80 type is desirable: that is, does Rust really gain much from adding it as part of the minimum viable product? Imho this could go under "future possibilities"
  • Things like the From/TryFrom implementations are often easier to understand as a code snippet rather than paragraph form, e.g.

```rust
impl From<f16> for f128 { /* ... */ }
impl From<f32> for f128 { /* ... */ }
impl From<f64> for f128 { /* ... */ }
impl From<i8> for f128 { /* ... */ }
// ...
```

But again, easier to shape these things once you open a PR. Thanks for putting it together!
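As an illustration of why those From impls would be lossless (my own analogy with today's types, not part of the draft): f64 has strictly more exponent and significand bits than f32, so From<f32> for f64 round-trips exactly, and the same relationship would hold for f64 → f128:

```rust
// Today's lossless widening: every f32 value is exactly representable as an f64,
// which is the precondition for a From impl (as opposed to TryFrom / `as`).
fn main() {
    let x: f32 = 0.1; // stored as the nearest f32 to 0.1
    let y: f64 = f64::from(x); // exact widening via the From impl
    assert_eq!(y as f32, x); // the round trip recovers the original value
    assert!(y != 0.1_f64); // but y is the f32 rounding, not the f64 rounding
}
```

The proposed From<f64> for f128 would satisfy the same round-trip property, while f128 → f64 could only be TryFrom or an `as`-style rounding cast.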

@aaronfranke
Copy link
Contributor Author

aaronfranke commented May 18, 2023

@ecnelises Looks nice to me overall, I did spot one grammar mistake:

Implementing in Rust compiler help to maintain a stable codegen interface. To fix: help -> helps

Also I agree with @tgross35 on mentioning architectures, for example we can mention RISC-V which has support for 128-bit quadruple-precision floats via the Q extension (without the Q extension, emulation would be required).

For the discussion of what to put in this RFC or what to leave out (@tgross35 mentioned maybe leaving out f80 but including core::ffi::c_longdouble), I'm not sure, but it is indeed an important discussion. I would tend towards prioritizing IEEE standards over language-specific and architecture-specific formats.

For the drawbacks section, note that not all architectures support f32 and f64 natively either. For example RISC-V without the F or D extensions does not support those formats respectively. So however Rust handles f32 and f64 on that architecture will be very similar to how it will handle f128 without the Q extension.

@tgross35
Copy link
Contributor

@ecnelises would you mind opening a PR to this repo with your draft? You can create it as a draft PR to indicate that it isn't ready for final review.

Even if you haven't had the time to work on it (completely understandable), I think it would be good for everyone to start thinking about it, and to provide reviews with suggested changes.

@ecnelises
Copy link
Contributor

@ecnelises would you mind opening a PR to this repo with your draft? ...

Sorry for the delay! I've created #3451 . Although I still have some changes not finished, I'd be glad to see more comments and revise it.
