NUMERIC type binary decode error #1935
Comments
Thanks for the report. It seems PostgreSQL sends 100 instead of 10. Will look at it.
Why do you want to enable binary support for numeric types? The binary format does not map well to the BigDecimal type, making for a fairly inefficient translation. The default string format ends up being less work.
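To illustrate why the string path is so little work on the Java side (a minimal sketch, not pgjdbc code; the class and method names are hypothetical): in text mode the driver receives the server's string rendering of the value and can hand it straight to the BigDecimal constructor, which already understands exponent forms.

```java
import java.math.BigDecimal;

public class NumericTextPath {
    // In text mode the server sends the numeric value as a string
    // (e.g. "1234.5" or "1E+900"); BigDecimal parses it directly,
    // so no custom translation layer is needed.
    public static BigDecimal fromText(String serverText) {
        return new BigDecimal(serverText);
    }
}
```

For example, fromText("1E+900").toPlainString() expands to a 1 followed by 900 zeros, which is the value at issue in this report.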
Even though binary vs. text might perform the same for numerics, we might want to support binary for numerics so that arrays and composite types containing them can use the binary format as well.
@vlsi it is not that binary vs. text is the same; it is that binary (particularly the current translation) appears highly likely to be slower. I have an implementation[1] of binary support for numeric that I worked on back in March. While it supports both sending and receiving and appears to be more efficient than the current implementation, I never created a PR because I could not find a reason to ever use binary over String (even for arrays).
I agree the current implementation does not look like the most performant one; however, I believe binary mode unlocks binary for structs and arrays, which might be far more efficient than encoding/decoding arrays/structs in the text format.
Technically speaking, either the database sends the shorts as-is and we convert them to a string, or the database converts the digits to a string and we receive that string.
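For readers unfamiliar with the "shorts" being discussed: PostgreSQL's binary NUMERIC wire format is four big-endian int16 header fields (digit count, weight, sign, display scale) followed by that many base-10000 digits, where the weight is the exponent of the first digit in base-10000 units. A minimal decoding sketch follows (a hypothetical standalone helper, not pgjdbc's actual implementation; NaN handling is omitted):

```java
import java.math.BigDecimal;
import java.math.BigInteger;

public class NumericBinaryDecoder {
    // Decode PostgreSQL's binary NUMERIC wire format into a BigDecimal.
    // Layout (all big-endian int16): ndigits, weight, sign, dscale,
    // then ndigits base-10000 digits, most significant first.
    public static BigDecimal decode(byte[] bytes) {
        int pos = 0;
        int ndigits = readShort(bytes, pos); pos += 2;
        int weight  = (short) readShort(bytes, pos); pos += 2; // signed
        int sign    = readShort(bytes, pos); pos += 2;         // 0x4000 = negative
        int dscale  = readShort(bytes, pos); pos += 2;

        // Rebuild the unscaled value one base-10000 digit at a time;
        // this multiply-and-add loop is the translation cost discussed above.
        BigInteger unscaled = BigInteger.ZERO;
        BigInteger base = BigInteger.valueOf(10000);
        for (int i = 0; i < ndigits; i++) {
            unscaled = unscaled.multiply(base)
                               .add(BigInteger.valueOf(readShort(bytes, pos)));
            pos += 2;
        }
        if (sign == 0x4000) {
            unscaled = unscaled.negate();
        }

        // The last digit read has weight (weight - ndigits + 1) in base-10000
        // units, i.e. a decimal exponent of 4 * (weight - ndigits + 1).
        int exponent = 4 * (weight - ndigits + 1);
        return new BigDecimal(unscaled, -exponent).setScale(dscale);
    }

    private static int readShort(byte[] b, int pos) {
        return ((b[pos] & 0xFF) << 8) | (b[pos + 1] & 0xFF);
    }
}
```

Under this layout, 1E+900 arrives as a single digit 1 with weight 225 (since 10000^225 = 10^900), so a decoder that mishandles the weight-to-decimal conversion can easily drop or add trailing zeros, which is the class of bug reported here.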
That is strange. Do you have benchmarks by chance? |
The binary format for these seems so odd to me (a mix of decimal and binary) that I assumed some other form was actually used natively in the database that would be better for calculations (indexing, equality, comparisons, addition, etc.). This led me to conclude that if the database is doing work to translate from its internal form to either this binary format or a string, and the Java side handles the string much better, then we should just let the database go straight to string. If this binary format is the actual native format of the database, then I agree: it makes sense to do the conversion (even if ugly) in Java.
I do not. My observation from the code was that for single-dimensional arrays, the cost of the numeric-to-BigDecimal handling would overwhelm the benefit of knowing the array length ahead of time. This all feels a bit like déjà vu: #1749
The text output is produced here: https://github.com/postgres/postgres/blob/b4d5b458e6fb4c861f92888753deea6a5005a68d/src/backend/utils/adt/numeric.c#L739
Describe the issue
NUMERIC type binary decode error
Driver Version?
42.2.18
Java Version?
1.8
OS Version?
Windows
PostgreSQL Version?
12.0
To Reproduce
Expected behaviour
Expected 1 followed by 900 zeros, but actually got 1 followed by 890 zeros. SELECT 1E+89 returned the expected value.