
Fixed integer handling in Decimal::to_f64 #443

Merged: 3 commits into paupino:master on Nov 25, 2021

Conversation

@anatols (Contributor) commented Nov 3, 2021

We've discovered this issue in to_f64, in the code branch that handles decimals with scale 0. The assumption in the original code was that the value can only be represented as an f64 if it fits into an i64. There is no reason for such a limit: 96-bit integers can be represented by f64.
(If the intent was to allow only conversions with no precision loss, which I doubt, then the original code is also incorrect, since f64 can represent integers of only up to 53 bits without rounding.)
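
For illustration, a minimal sketch of the failure mode using the crate's public API (it assumes Decimal::MAX, which holds the full 96-bit mantissa at scale 0):

```rust
use rust_decimal::Decimal;
use rust_decimal::prelude::ToPrimitive;

fn main() {
    // Decimal::MAX is 79_228_162_514_264_337_593_543_950_335: the full
    // 96-bit mantissa at scale 0, far above i64::MAX.
    let big = Decimal::MAX;

    // Before this fix, the scale-0 branch converted through i64 and
    // returned None for anything above i64::MAX. A 96-bit integer is
    // representable as an f64, just rounded past 53 mantissa bits.
    assert!(big.to_f64().is_some());
}
```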

@paupino (Owner) commented Nov 20, 2021

Sorry for the delay in review!

I actually think that losing precision, in this case, is a bug as opposed to a feature. The primary purpose of this library has been use in financial calculations, so losing precision would be a little worrying. To be fair, converting to a float in a financial app may itself be questionable, but that doesn't mean it couldn't happen.

Given that, however, I wouldn't be opposed to supporting both scenarios. Perhaps we could support a lossy approach behind a feature such as float-lossy-conversion or something.

I'm hoping to block off some solid time to look into the next release of rust-decimal this week, so I'm happy to collaborate on something here if it makes sense.

@anatols (Contributor, Author) commented Nov 22, 2021

I agree that supporting both scenarios would be the best option. We use rust_decimal in a financial app, and we do have a use case for converting to f64 with precision loss: the values are fed into an algorithm that's business-critical, yet does not require completely precise calculations or inputs.

I'd suggest not hiding it behind a feature though, but rather having two functions: one, to_f64, that returns None if precision cannot be kept, and another, to_f64_lossy, that is infallible at the price of losing precision.
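
A runnable sketch of that pair for the scale == 0 case this PR touches (the names come from the suggestion above, and the u128 mantissa stands in for Decimal's internals; not the code that was finally merged):

```rust
fn to_f64_lossy(int96: u128) -> f64 {
    int96 as f64 // infallible: rounds to the nearest f64
}

fn to_f64(int96: u128) -> Option<f64> {
    // Fallible: None whenever the rounding above changed the value.
    let f = int96 as f64;
    (f as u128 == int96).then_some(f)
}

fn main() {
    assert_eq!(to_f64(1u128 << 53), Some(9007199254740992.0)); // exact
    assert_eq!(to_f64((1u128 << 53) + 1), None); // 54 bits: would round
    assert!(to_f64_lossy((1u128 << 96) - 1).is_finite()); // never fails
}
```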

@paupino (Owner) commented Nov 22, 2021

> I'd suggest not hiding it behind a feature though, but rather having two functions: one, to_f64, that returns None if precision cannot be kept, and another, to_f64_lossy, that is infallible at the price of losing precision.

Totally agree. This would be even better! I was also thinking this morning that the scale == 0 branch is really a premature optimization. I'd be curious how it functions with that branch simply removed. I'll get some time to fiddle with some of this later this week, so I may try out a few different scenarios.
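
A sketch of why the dedicated branch may be unnecessary, assuming the usual mantissa / 10^scale formulation (not the crate's exact internals): with scale == 0 the divisor is 1.0, so the general path already handles integers.

```rust
fn decimal_to_f64(negative: bool, mantissa: u128, scale: u32) -> f64 {
    // General path: scale == 0 divides by 10^0 == 1.0, so integers
    // need no special case.
    let magnitude = mantissa as f64 / 10f64.powi(scale as i32);
    if negative { -magnitude } else { magnitude }
}

fn main() {
    assert_eq!(decimal_to_f64(false, 12_345, 0), 12_345.0);
    assert_eq!(decimal_to_f64(true, 12_345, 2), -123.45);
}
```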

@paupino (Owner) commented Nov 25, 2021

I've been looking into this and am really coming around to the loss of precision being OK. In reality, it's no different from converting to an integer and losing precision that way. If you're really looking to keep the precision, you wouldn't be converting to an f64 in the first place!
As a litmus test, I've just checked other libraries, and it appears that loss of precision is expected when converting to float there too.
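
A plain-Rust illustration of that analogy, independent of rust_decimal:

```rust
fn main() {
    // Narrowing float -> int conversions already discard precision.
    assert_eq!(1.99_f64 as i64, 1); // truncates toward zero
    // And most decimal fractions have no exact f64 form to begin with:
    assert_ne!(0.1_f64 + 0.2_f64, 0.3_f64);
}
```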

@paupino merged commit 2f2d0fb into paupino:master on Nov 25, 2021
@anatols (Contributor, Author) commented Nov 26, 2021

Thanks for looking into it!
