
throw error "SyntaxError: Parenthesis ) expected (char 102)" #1485

Open
SKT1yang opened this issue Apr 23, 2019 · 12 comments

@SKT1yang

SKT1yang commented Apr 23, 2019

I get SyntaxError: Parenthesis ) expected (char 102) when using math.eval. I found that, when evaluating the same expression, whether the error is thrown depends on the scope values: 'simple' values evaluate normally, while 'complex' ones throw. For example, 1.5/18.5 evaluates normally to 0.081081081, but 1.39/18.6 throws the error. The specific situation is as follows:

expression = math.eval(form, data)

form = "bignumber((hs1-hg1)/(bignumber(hg1-hm1)))"
hs1 = 40
hg1 = 38.6
hm1 = 20

Edited: to make formatting clearer :)

@harrysarson
Collaborator

Hi, I tried running this version of your example (https://runkit.com/harrysarson/mathjs-1485):

const math = require('mathjs');

const form="bignumber((hs1-hg1)/(bignumber(hg1-hm1)))";
const hs1=40;
const hg1=38.6;
const hm1=20;
const data = { hs1, hg1, hm1 };
const expression = math.eval(form, data);

I got this error:

TypeError: Cannot implicitly convert a number with >15 significant digits to BigNumber (value: 1.3999999999999986). Use function bignumber(x) to convert to BigNumber.

Which seems reasonable to me. Could you share how you got the error you are reporting?

@gwhitney
Collaborator

gwhitney commented Oct 6, 2023

I just checked, and the behavior remains exactly as documented in the last post from four and a half years ago. But I have to disagree: it does not make any sense for mathjs to refuse to convert a number to a bignumber because it has "too many significant digits" when those digits were only fabricated from round-off error in the subtraction (of hs1-hg1 in the numerator). So I would consider this a bug, myself, and something that should ultimately be corrected.

It's even vaguely conceivable that the evaluation should realize it is going to have to convert the numerator to bignumber sooner or later, and do it really sooner, i.e., convert both hs1 and hg1 to bignumbers before doing the subtraction. Then the numerator will be precisely 1.4 (as it "should" be) for the division. I don't know how serious that suggestion is; maybe if that kind of behavior is desired, mathjs should be configured to default to bignumbers... In any case, I definitely feel there is undesirable behavior here that should be addressed.

@josdejong
Owner

josdejong commented Oct 11, 2023

A few thoughts:

  1. I think the basic rule of throwing an error when creating a BigNumber from an "irrational" number (a number which already contains round-off errors) is a good one: it prevents the illusion of a high-precision BigNumber result when an operation mixing numbers and BigNumbers is actually low precision.
  2. I think mixing numbers and BigNumbers in operations is simply tricky (just like implicit multiplication is tricky by nature). You can easily work with BigNumbers all the way by instantiating all constants as BigNumbers upfront, preventing these kinds of issues.
  3. Glen, your idea of making evaluation smarter by detecting that the operation will result in a BigNumber and then converting numbers into BigNumbers upfront is interesting, but I'm afraid it will be very complex and hard to implement. I can imagine some preprocessing step when using math.evaluate, but when using plain JavaScript functions like math.divide and math.subtract, the behavior should be the same.
  4. It would be nice if mathjs helps with this. When configuring math.config({number: 'BigNumber'}), I would love mathjs to evaluate math.add(0.1, 0.2) by first converting the numbers into BigNumbers and then adding them. I think that would solve this issue too. This is a long-time wish, discussed in "Apply BigNumber config to functions too (currently it only applies to eval)" #2734. A sketch of the current behavior follows this list.
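
A minimal sketch of the current behavior point 4 refers to (assuming a recent mathjs version; as the title of #2734 says, the config applies to the expression parser but not to plain function calls):

const { create, all } = require('mathjs');

const math = create(all, { number: 'BigNumber' });

// the parser respects the config: numeric literals become BigNumbers
console.log(math.evaluate('0.1 + 0.2').toString());
// '0.3'

// plain function calls still receive JavaScript numbers,
// so the config does not help here:
console.log(math.add(0.1, 0.2));
// 0.30000000000000004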

@gwhitney
Collaborator

Hmm, as to these points:

  1. It's not just "irrational" numbers -- it's anything (like 1/3 or the imprecise result of 40 - 38.6) that doesn't have a decimal expansion terminating in 0s before 15 digits, which is a weird criterion for a binary representation (since there is no precise conversion from binary to decimal). So it definitely seems as though the criterion for automatic conversion could and should be improved. In fact, I have a concrete proposal: the automatic conversion should be made for any number that is "within config.epsilon of a rational number whose denominator is less than 0.001/config.epsilon" -- such numbers are relatively easy to recognize. That would allow automatic conversion of both 1/3 and 40-38.6, which is probably what's desired. The only question remaining is: when you convert such a number, do you round the value to that nearby rational number? I'd say yes, because you're never introducing more than config.epsilon of error, and much of the time you are actually correcting an error introduced by the base mismatch.

  2. Well, the number -> BigNumber conversion is mostly weird because (correct me if I am wrong) number is a binary representation while BigNumber is explicitly a decimal representation, which is pretty unusual for computing. I think substituting an explicitly binary-represented implementation of high-precision arithmetic would mean there would be zero need for any constraint on converting number -> BigNumber, as the conversion would be completely lossless. That would probably be mathematically more consistent, but would have a cost in terms of human interaction with BigNumber, as we tend to operate in decimal. In light of that, my guess is you'd like to keep BigNumber as a decimal representation, in which case just switching to the constraint mentioned in (1) above should reduce the weirdness a lot.

  3. Yeah, my suggestion wasn't entirely serious. I think you're right, it would entail changing mathjs to symbolic computation at its core, so that a tree of calls like math.multiply(math.add(x,y), math.subtract(x,y)) would really generate a "parse tree" just like math.parse('(x+y)*(x-y)') which could then (possibly implicitly) be evaluated. There are other math packages that go that route, but it does look like too big a change for mathjs.

  4. Commented on this in the relevant discussion.

@josdejong
Owner

About (1) and (2): Sorry for the confusion about "irrational" numbers. I mean "a number of which we know that it already contains round-off errors". Let me try to explain this better. My main concern here is to prevent people from thinking they are working at high precision whilst part of the expression is executed at low precision. My concern is not about the internal representation (binary vs decimal), but about end users mixing number (~15 digits) and BigNumber (~64 digits) in a single expression, thinking they have a high-precision result (~64 digits), and then relying on that. For example:

// in the following case mathjs gives an annoying but helpful error:
math.config({number: 'number'})
console.log(math.evaluate('sin(pi/x)', { x: math.bignumber(2) }).toString())
// TypeError: Cannot implicitly convert a number with >15 significant digits to BigNumber (value: 3.141592653589793)

// this is to prevent against the following case:
math.config({number: 'number'})
console.log(math.evaluate('sin(bignumber(pi)/x)', { x: math.bignumber(2) }).toString())
// BigNumber 0.9999999999999999999999999999999928919459638323580402641578465819
// Whoops! Actual precision is NOT around 64 digits because we convert an ~15 digit version of pi to BigNumber

// if we do all at high precision we're good to go:
math.config({number: 'BigNumber'})
console.log(math.evaluate('sin(pi/x)', { x: math.bignumber(2) }).toString())
// BigNumber 1
// Ahhh, better :)

When working with BigNumbers, mathjs is configured by default to work with 64 digits of precision. When the output is a BigNumber, people will assume the result has a precision in the order of 64 digits. But if you put a number in the mix, the actual result may be accurate to only about 15 digits. Putting the number result of 1/3 or 40-38.6 in a mix with BigNumber values would give a misleading result with about 15 digits of precision, not 64. However, when we have a number with a limited amount of digits, like 1.75, we know it does not contain round-off errors and we can safely convert it to a BigNumber.
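
As a quick illustration of that last point (a sketch; the exact error text may differ between versions):

const math = require('mathjs');

// 1.75 has few significant digits, so implicit conversion is allowed:
console.log(math.add(math.bignumber(1), 1.75).toString());
// '2.75'

// 1/3 already carries round-off error in its ~16 significant digits,
// so implicit conversion is refused:
console.log(math.add(math.bignumber(1), 1 / 3));
// TypeError: Cannot implicitly convert a number with >15 significant
// digits to BigNumber (value: 0.3333333333333333). ...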

@gwhitney
Collaborator

Yes, I understand the point that we don't want to inject values with round-off errors of 1 part in 10^14 or so into calculations with precision of 1 part in 10^64 or so, and I completely agree with that point. Nevertheless, there is something mathematically irritating in the fact that it is fine to do (1/4)*bignumber(13) but not (1/3)*bignumber(13) -- and that is a result of the base of the internal representations. If we were using base-3 representations, it would be the other way around. If mathjs is striving for mathematical fidelity, the operation should not depend on the internal base of the representations, or on the fact that the two representations use different bases.

The whole question is: what values should we consider as exact, or more importantly, what values does the person using the library consider as exact? Those we should convert into bignumber automatically, using their exact value. So certainly we consider all integers as exact (we already do). And I suspect the original poster here was considering numbers with just a small number of decimal digits past the point, like 38.6, as exact, and so the difference of two of them, like 40.0-38.6 = 1.4, should be exact. And small-number ratios should be considered exact, like 1/4 or 44/7, etc.

So the question becomes, how to recognize those "exact" values? I think that mathematical theory provides an extremely good and relatively easily implementable answer: JavaScript number entities that are within (say) 1 part in 10^14 or 10^15 of their nearest rational approximation that has a denominator less than or equal to (say) 2^10 = 1024. That 11-orders-of-magnitude difference between the size of the denominator and the closeness of approximation means that it's too big a coincidence to be accidental, and the person doing the computation meant to use the exact value, but was just expressing it in a convenient way that happened to induce roundoff error in IEEE arithmetic.

So the exact algorithm for auto-converting a number to a bignumber would be: compute the best rational approximation to the number with a denominator less than or equal to N (maybe N = 1024, maybe it's configurable; I think Fraction can already do this -- anyhow, it's easy with continued fractions), then check whether that rational is within epsilon (maybe config.epsilon, or maybe something closer to IEEE precision) of the given number. If so, use the bignumber version of that exact rational; otherwise, refuse to auto-convert. That would make the original poster's code work as expected -- the numerator would be converted to the exact bignumber("1.4") -- and it would make 1/3, 1/4, 1/7, etc. all happily auto-convertible to bignumber, while preserving the helpful error you point out in your recent examples.

I think this conversion method actually gets at the spirit of what mathjs is currently trying to do with the 15-digit limit better than that limit does, while allowing more actually useful cases of auto-conversion. Just writing this as a suggestion to consider.

@josdejong
Owner

> So the question becomes, how to recognize those "exact" values? I think that mathematical theory provides an extremely good and relatively easily implementable answer: JavaScript number entities that are within (say) 1 part in 10^14 or 10^15 of their nearest rational approximation that has a denominator less than or equal to (say) 2^10 = 1024. That 11-orders-of-magnitude difference between the size of the denominator and the closeness of approximation means that it's too big a coincidence to be accidental, and the person doing the computation meant to use the exact value, but was just expressing it in a convenient way that happened to induce roundoff error in IEEE arithmetic.

Ahh, that sounds really interesting. This idea is new to me, but I'm definitely in favor of improving the conversion function from number to BigNumber! So, trying to understand this: take the example 0.1+0.2. This results in 0.30000000000000004, where the 17th digit is a round-off error. How exactly do we calculate whether this value has a round-off error? An approximation of the fraction is 3/10, which has an error of only 4e-17. So then we can conclude that the "correct" value is bignumber(3).div(10)? Is there any code or pseudo code that shows the exact logic?

@gwhitney
Collaborator

Here's a relevant stack overflow answer: https://stackoverflow.com/a/4266999

The idea would be to call approximate_fraction on the input number with an epsilon of something like config.epsilon, or DBL_EPSILON times the input, and if the output has a denominator less than some bound (like 1024, or maybe configurable), you take that to be the "right" number. The basic underlying mathematical idea is that rational numbers that approximate real numbers more closely than 1/(denominator)^2 are already extremely rare, so if you find a rational that is within, say, 1/(denominator)^4, you can be essentially certain it's not a coincidence. Hence if we use an epsilon of roughly 10^-12, we should be safe with denominators less than 10^3; since DBL_EPSILON is roughly 2x10^-16, we could likely get away with 4-digit denominators, but I don't know that there's much practical reason to try to detect rationals with 4-digit denominators -- not sure how often they will come up in practice when the person using mathjs "meant" an exact rational.
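
A minimal sketch of that logic in JavaScript (my own rendering of the continued-fraction idea, not the code from the Stack Overflow answer; the thresholds are illustrative):

function approximateFraction(x, maxDenominator = 1024, eps = 1e-12) {
  if (!isFinite(x)) return null;
  const sign = x < 0 ? -1 : 1;
  x = Math.abs(x);

  // walk the continued-fraction convergents p/q of x until the
  // denominator would exceed maxDenominator
  let p0 = 0, q0 = 1;  // convergent h(-2)
  let p1 = 1, q1 = 0;  // convergent h(-1)
  let r = x;
  while (true) {
    const a = Math.floor(r);
    const p2 = a * p1 + p0;
    const q2 = a * q1 + q0;
    if (q2 > maxDenominator) break;
    p0 = p1; q0 = q1; p1 = p2; q1 = q2;
    const frac = r - a;
    if (frac < 1e-15) break;  // expansion terminated: x is (nearly) exact
    r = 1 / frac;
  }

  // accept the approximation only if it is suspiciously close to x
  if (Math.abs(x - p1 / q1) <= eps * Math.abs(x)) {
    return { n: sign * p1, d: q1 };
  }
  return null;  // no convincing rational nearby: refuse to auto-convert
}

console.log(approximateFraction(0.1 + 0.2));  // { n: 3, d: 10 }
console.log(approximateFraction(40 - 38.6));  // { n: 7, d: 5 }, i.e. exactly 1.4
console.log(approximateFraction(Math.PI));    // null

The auto-conversion would then use the exact rational, e.g. bignumber(3).div(10) for 0.1 + 0.2, exactly as in your example above.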

@josdejong
Owner

I think I more or less understand the idea. I doubt, though, that it will work in all cases, and I wonder whether it would accidentally apply rounding to a value that shouldn't be rounded. I would like a conservative approach in that regard.

I was thinking: I can very easily see visually whether I'm looking at a round-off error: the value has more than 15 digits, and contains a series of zeros or nines followed by another digit.

0.1 + 0.2          // 0.30000000000000004
0.1 + 0.24545      // 0.34545000000000003

40 - 38.6          // 1.3999999999999986
159.119 - 159      // 0.11899999999999977
159.11934444 - 159 // 0.11934443999999189

Can't we simply use that knowledge? Feels to me like a safer approach. I did some fiddling and created PR #3085. What do you think about that?
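
For illustration, a rough sketch of that heuristic (an assumption of how such a check might look; not the actual code in #3085):

// a shortest round-trip representation longer than 15 significant
// digits that contains a long run of zeros or nines is very likely
// the result of round-off (ignoring scientific-notation forms for brevity)
function looksLikeRoundOff(x) {
  const digits = String(x).replace(/[-.]/g, '').replace(/^0+/, '');
  return digits.length > 15 && /0{6}|9{6}/.test(digits);
}

console.log(looksLikeRoundOff(0.1 + 0.2));  // true  ("0.30000000000000004")
console.log(looksLikeRoundOff(40 - 38.6));  // true  ("1.3999999999999986")
console.log(looksLikeRoundOff(0.34545));    // false (only 5 significant digits)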

@gwhitney
Collaborator

Confused. 0.333333333333333 is "exactly" 1/3 as far as doubles are concerned, so it should be converted to bignumber(1)/bignumber(3). Same with 0.666666666666667 and 2/3, or 0.142857142857143 and 1/7. There's no difference between these cases and 1/5 = 0.2000000000000001 (say) -- they are all equally well (or not well) approximated in IEEE doubles, because the denominators are all relatively prime to 2, the base of the IEEE internal representation. They just "look" different to your decimal-trained eye. You are no more accurately detecting round-off error with a decimal-digit-based algorithm than with a continued-fraction-based algorithm; you're just missing lots of other "exact" values that are at least as well justified for automatic conversion to bignumber. And the risk of "false positives" is no more (or less) with the continued-fraction approach than with the decimal-digit-based one. 0.3333... is just as "exact" for 1/3 as 0.3000000...4 is for 3/10.

As I said, it's rare for a rational to be within 1/(denom)^2 of a real number, so the coincidence of it being within 1/(denom)^4 is so unlikely that we can treat it as exact -- which is all you're really doing with the "digit pattern" heuristic, except you're only detecting a very small portion of the cases. In other words, a computation whose actual "exact" value is 0.300000000001 might come out to 0.300000000000004, and you would convert it to "exact" 0.3 and be wrong. If you are thinking "well, but that's so unlikely that we don't have to worry about it" -- in fact, I agree! The point is that all the other cases the continued-fraction algorithm detects are just as unlikely to be wrong, plus it gets all of the cases you can "see" should be treated as round-off error with no further work, and all on a solid mathematical basis, to boot. So I would not recommend pursuing #3085.

@gwhitney
Collaborator

P.S. When one uses the continued-fraction approach, one has two "knobs" that let you control precisely how conservative the algorithm is: the "tolerance", which could be config.epsilon, and the maximum denominator you will detect. If you use DBL_EPSILON, you will basically be saying that you will only accept approximations that are off by no more than one binary bit in the least significant position. (I am fine with going slightly fuzzier than that, even up to config.epsilon, as those tiny errors can accumulate a little via arithmetic operations, but if you want to be super-conservative we could leave it at DBL_EPSILON.) The maximum denominator then controls the "probability of coincidence". With a max denominator of 1024, the chances of a coincidence are really minuscule, but for example 0.3647 would never be treated as exact, since its closest rational is 3647/10000. As I said, there is enough accuracy in doubles that I would also be totally comfortable with, say, 16384 as the max denominator, which would treat 0.3647 as exact.

If we want to stay really safe, then with a uniform distribution on the "intended" numbers, I don't see how we can treat any five-decimal numbers as exact, like "0.48977". On the other hand, it's probably true that there is a strong bias in the "intended" numbers toward exact fractions with denominators of the form 10^n, given the realities of human usage of the decimal system. If we are comfortable using that bias, we could accept all rational approximations that are within epsilon of the value to be converted, with a denominator that is either less than 1024, or that happens to be 2^m*5^n where n is (say) 8 or less. (We need to allow this form because if the decimal part happens to be even, some of the 2s in the denominator of 10^n will cancel.)
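
A tiny sketch of that denominator test (the helper name and the exact bounds are illustrative):

// accept denominators below 1024, or of the form 2^m * 5^n with n <= 8
function acceptableDenominator(d) {
  if (d < 1024) return true;
  let n = 0;
  while (d % 5 === 0) { d /= 5; n += 1; }
  while (d % 2 === 0) { d /= 2; }
  return d === 1 && n <= 8;
}

console.log(acceptableDenominator(113));    // true  (small denominator)
console.log(acceptableDenominator(10000));  // true  (2^4 * 5^4, so 0.3647 is accepted)
console.log(acceptableDenominator(10007));  // false (large prime)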

So, bottom line: I think the continued-fraction algorithm handles the most cases well, and gives us very close control over the details of which numbers will be "recognized" and how conservative we are being in the conversion.

@josdejong
Owner

I'm indeed not sure whether #3085 is a good approach.

I'd love to try out the fraction approach; I really want to improve this conversion to BigNumber! At this point I'm not fully seeing how the tolerance of the fraction approach works out and which cases will "slip through", but it sounds promising. Anyone interested in working out a PR? That would clear things up for me, I think.
