fix: tools/toFixed precision #1950
base: main
Conversation
This reduces performance every time it's called since toString is slow. I also don't see how it fixes #1944.
@KTibow Thanks for your quick reply! The performance impact is reduced as much as possible by skipping the whole fixing and toString conversion when it is not needed. While I agree calling this method can take a bit more time, it is key for SVGO to do the fixing correctly. It is unforgiving if SVGO can end up with bigger float numbers in the output SVG files than what came in. A very low […]

As for how it fixes #1944, I experienced cases in which SVGO replaced floats in pathdata like […].
In the vast majority of cases, it won't be an integer, so it will still be called.
Can you give me a repro for this?
Don't think it will. In fact:

```js
const roundAndStringify = (number, precision) => {
  if (precision != null) {
    number = toFixed(number, precision);
  }
  return {
    roundedStr: removeLeadingZero(number),
    rounded: number,
  };
};
```
Yet it will still skip the fixing itself when the amount of decimals is within bounds.
Sure! Config being:

```js
{
  multipass: true,
  floatPrecision: 19,
  plugins: [
    'preset-default'
  ]
}
```

Input file:

```xml
<?xml version="1.0" encoding="utf-8"?><svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 48 48"><g fill="none" fill-rule="evenodd"><path fill="#FBBC04" fill-rule="nonzero" d="M24,39 L24.0008882,44.4879866 C24.0008882,45.3604302 23.2584362,46.0528855 22.3884924,45.9903895 C11.4841973,45.2029404 2.79225918,36.5035028 2.00730992,25.609207 C1.94231412,24.7367634 2.63476936,23.9943114 3.50721296,23.9943114 L9,24 L24,39 Z"/><path fill="#EA4335" fill-rule="nonzero" d="M24.0008882,9 L24.0008882,3.50063618 C24.0008882,2.62819257 24.7433402,1.93573734 25.613284,1.9982333 C36.5075797,2.78568239 45.2070174,11.48512 45.9944665,22.3794158 C46.0569624,23.2493596 45.3645072,23.9918116 44.4920636,23.9918116 L39,24 L24.0008882,9 Z"/><path fill="#5BB974" fill-rule="nonzero" d="M30.5779623,23.9973355 L34.9999999,19.9999964 L39.7873677,23.9943114 L39.7873677,24.0343088 C39.7873677,32.6251864 32.9087151,39.6086058 24.3583215,39.7776555 L24.0008882,39.7807909 L18.9999999,34.9999964 L23.9998801,30.5713862 C27.6322982,30.5713862 30.5763286,27.6283637 30.5779623,23.9973355 Z"/><path fill="#669DF6" fill-rule="nonzero" d="M24.0008882,8.20783189 L29,13 L23.9994087,17.4168319 L23.7699875,17.4212139 C20.3228427,17.5401494 17.5495252,20.3118234 17.4279689,23.7583018 L17.4234087,23.9978319 L13,29.0000008 L8.21440868,23.9943114 L8.21440868,23.954314 C8.21440868,15.3634364 15.0930613,8.380017 23.6434549,8.21096734 L24.0008882,8.20783189 Z"/><path fill="#1967D2" fill-rule="nonzero" d="M24.0008882,8.20783189 L24.050885,8.20783189 C32.6368238,8.20783189 39.6153045,15.081606 39.7842345,23.6270617 L39.7873677,23.9943114 L30.577963,23.9943114 C30.577963,20.3618933 27.6333063,17.4172366 24.0008882,17.4172366 L24.0008882,8.20783189 Z"/><path fill="#1E8E3E" fill-rule="nonzero" d="M17.4238134,23.9943114 C17.4238134,27.6267295 20.3684701,30.5713862 24.0008882,30.5713862 L24.0008882,39.7807909 L23.9858892,39.7807909 C15.2739523,39.7807909 8.21440868,32.7137477 8.21440868,23.9943114 L17.4238134,23.9943114 Z"/><g fill="#D8D8D8" stroke="#979797" transform="translate(25, 25) rotate(45) translate(-25, -25)translate(21, 21)"><circle cx="3" cy="3" r="2.5"/><circle cx="5" cy="5" r="2.5"/></g></g></svg>
```

Before PR: 1969 characters
I was still able to reduce the file size with the above config.
Ah, I see. I'm tempted to say that this would be better implemented by just returning the original number or falling back to […].

Speed comparison on some synthetic data w/ jsbench:
Of note, the old method of parsing the number is actually faster than your way of checking the decimal count. String manipulation is expensive!
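For illustration, here is a minimal sketch of the two kinds of check being compared above. The helper names are hypothetical, not SVGO's actual code: one decides "already precise enough" by rounding numerically, the other by counting decimals in the string form.

```javascript
// Numeric approach: round to the requested precision and compare.
const fitsNumerically = (num, precision) => {
  const pow = 10 ** precision;
  return Math.round(num * pow) / pow === num;
};

// String approach: count digits after the decimal point.
// Caveat: String(num) may use exponent notation (e.g. "1e-7"),
// which this naive version does not handle.
const fitsByString = (num, precision) => {
  const str = String(num);
  const dot = str.indexOf('.');
  return dot === -1 || str.length - dot - 1 <= precision;
};

console.log(fitsNumerically(1.25, 2)); // true
console.log(fitsByString(1.255, 2)); // false
```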
I see that your change helps in many scenarios where you set floatPrecision to 19, but my issue wasn't about the scenario of an explicit or very high float precision; it was a simple issue where the fallback was missing, solved by #1945. It's still worth discussing other methods to handle this behavior with extremely high float precisions though.
Fair. I have updated the PR to just do a check on precision higher than 17, as JavaScript drops any precision beyond that point anyway. However, this means the issue of toFixed returning more decimals will not be fixed when the precision is set lower than 17. Potentially an option could be added to specifically do a decimal check to squeeze the last bits out of the file; however, that can be addressed in another PR.
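A minimal sketch of that guard, assuming a Math.round-based rounding helper similar to SVGO's lib/tools toFixed (the exact implementation may differ): IEEE 754 doubles carry at most 17 significant decimal digits, so rounding at precision 17 or above cannot change the value and can be skipped.

```javascript
const toFixed = (num, precision) => {
  if (precision >= 17) {
    return num; // rounding is a no-op at this precision; skip it
  }
  const pow = 10 ** precision;
  return Math.round(num * pow) / pow;
};

console.log(toFixed(0.1 + 0.2, 17)); // returned unchanged (0.30000000000000004)
console.log(toFixed(0.1 + 0.2, 2)); // 0.3
```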
The Number.prototype.toFixed method can at times incorrectly round numbers. For example, 21.0565 is rounded to 21.056 with a precision of 3.
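This misrounding can be reproduced directly: the literal 21.0565 is stored as a binary double slightly below 21.0565, so rounding to three decimals goes down rather than up.

```javascript
// The nearest double to 21.0565 is just under the written value,
// so toFixed(3) rounds down instead of to 21.057.
console.log((21.0565).toFixed(3)); // "21.056"
```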
Fair. Just tested with a normal floatPrecision and saw no difference. Updated the PR to reflect this.
@KTibow Updated the PR to use a string-based variation of the toFixed method in specific places. As these situations were already using strings, there seems to be no real performance loss.
@SethFalco Let me know how to move forward with this one, or if there are alternative paths to improve the precision situation.
@carlosjeurissen Hey! Thanks for opening the PR, and thanks for your patience! I'll be with you later this week with the review. |
Currently the method meant to reduce the amount of decimals (lib/tools/toFixed) sometimes increases the amount of decimals due to float handling in JavaScript. This PR addresses this by checking the original amount of decimals and comparing it to the requested precision. If it already meets the precision requested, it will simply return the original number.
There are more areas in which float issues appear. Potentially we might want to start using some kind of BigNumber library to tackle this, so as to be able to do more lossless operations and to allow scale-based optimisations as proposed in #1270.
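The decimal-count approach described in this PR could be sketched as follows. This is an illustrative reconstruction, not the PR's exact code; helper names are hypothetical, and the string-based decimal count does not handle exponent notation such as "1e-7".

```javascript
// Count digits after the decimal point in the default string form.
const countDecimals = (num) => {
  const str = String(num);
  const dot = str.indexOf('.');
  return dot === -1 ? 0 : str.length - dot - 1;
};

const toFixed = (num, precision) => {
  if (countDecimals(num) <= precision) {
    return num; // already within precision; avoid introducing float artifacts
  }
  const pow = 10 ** precision;
  return Math.round(num * pow) / pow;
};

console.log(toFixed(1.12, 5)); // 1.12, returned untouched
console.log(toFixed(1.123456, 3)); // 1.123
```

This way a number that already satisfies the requested precision is passed through verbatim, so the output can never gain decimals relative to the input.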