
Fix handling of surrogate pseudocharacters under Python 3. #284

Closed
wants to merge 3 commits into from

Conversation

@gnprice commented Aug 29, 2017

This is a situation where we have a Python unicode string which doesn't
consist entirely of genuine Unicode characters -- some of the codepoints
in the string are surrogate codepoints, which occur in a UTF-16 encoding
of a string and were also repurposed in PEP 383 for losslessly encoding
arbitrary mostly-UTF-8 bytestrings (like Unix filenames) in Python
strings. Currently, on Python 3, we cause a UnicodeEncodeError if we
try to encode such a string as JSON.

It's not 100% obvious what the right thing to do here is -- this
situation seems like it must reflect a bug somewhere else in the
program or its environment. But

  • one way we can get such a string is by loading a JSON document
    (perhaps an invalid JSON document? anyway, we load it without error):

    >>> ujson.dumps(ujson.loads('"\\udcff"'))
    Traceback (most recent call last):
      File "<stdin>", line 1, in <module>
    UnicodeEncodeError: 'utf-8' codec can't encode character '\udcff' in position 0: surrogates not allowed

  • we already pass these strings through without complaint on Python 2;

  • as the included test shows, passing these through matches the
    behavior of the stdlib's `json` module.

So it seems best to pass them through.

Fixes #156.
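For context, a small sketch of how such a string can arise in practice (the byte value and variable names here are illustrative, not from the patch): PEP 383's `surrogateescape` error handler maps each undecodable byte 0xNN to the surrogate U+DCNN, and the stdlib `json` module accepts and round-trips the result.

```python
import json

# PEP 383: decoding non-UTF-8 bytes (e.g. a Unix filename) with the
# "surrogateescape" error handler maps each bad byte 0xNN to U+DCNN,
# yielding a str that contains a lone surrogate codepoint.
name = b"\xff".decode("utf-8", "surrogateescape")
assert name == "\udcff"

# The stdlib json module accepts such a string, and loading the output
# gives the original string back.
dumped = json.dumps(name)
assert json.loads(dumped) == name
```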

@hartwork left a comment
I'm not sure passing through is the best approach: the stdlib `json` module does not pass surrogates through raw but escapes them, avoiding invalid characters in the output. See:

In [11]: list(sys.version_info)
Out[11]: [3, 6, 10, 'final', 0]

In [12]: json.dumps('\udcff')
Out[12]: '"\\udcff"'
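To make the distinction above concrete, a small sketch (assuming the default `ensure_ascii=True`): the stdlib escapes the surrogate as `\udcff`, so the output string itself contains no raw surrogate codepoint.

```python
import json

# With the default ensure_ascii=True, the stdlib json module escapes
# the lone surrogate rather than emitting it raw.
out = json.dumps("\udcff")
assert out == '"\\udcff"'

# The escaped output contains no raw surrogate codepoint.
assert not any(0xD800 <= ord(ch) <= 0xDFFF for ch in out)
```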

Comment on lines +53 to +55
#define PyUnicode_AsUTF8String(o) \
(PyUnicode_AsEncodedString((o), "utf-8", "surrogatepass"))

@hartwork commented Feb 25, 2020
This code seems unused?

If you're aiming for surrogatepass as a generic solution, it's a recipe for producing invalid UTF-8:

In [6]: '\udcff'.encode('utf-8', 'surrogatepass').decode('utf-8')                                                                                      
[..]
UnicodeDecodeError: 'utf-8' codec can't decode byte 0xed in position 0: invalid continuation byte

Are you aware?
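A sketch confirming the failure mode described above: `surrogatepass` happily encodes a lone surrogate, but the resulting bytes are not valid UTF-8, so a strict decode fails; only a matching `surrogatepass` decode round-trips them.

```python
# "surrogatepass" encodes the lone surrogate U+DCFF as the three-byte
# sequence ED B3 BF, which a strict UTF-8 decoder rejects.
raw = "\udcff".encode("utf-8", "surrogatepass")
assert raw == b"\xed\xb3\xbf"

try:
    raw.decode("utf-8")  # strict decode fails, as in the traceback above
except UnicodeDecodeError:
    strict_ok = False
else:
    strict_ok = True
assert not strict_ok

# Only a matching "surrogatepass" decode recovers the original string.
assert raw.decode("utf-8", "surrogatepass") == "\udcff"
```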

JustAnotherArchivist added a commit to JustAnotherArchivist/ultrajson that referenced this pull request Apr 17, 2022
This allows surrogates anywhere in the input, compatible with the json module from the standard library.

This also refactors two interfaces:
- The PyUnicode to char* conversion is moved into its own function, separated from the JSONTypeContext handling, so it can be reused for other things in the future.
- Converting the char* output to a Python string with surrogates intact requires the string length for PyUnicode_Decode (or any of its alternatives). While strlen could be used, the length is already known inside the encoder, so the encoder function now also takes an extra size_t pointer argument to return that. This also permits output that contains NUL bytes (even though that would be invalid JSON), e.g. if an object's __json__ method return value were to contain them.

Fixes ultrajson#156
Fixes ultrajson#447
Supersedes ultrajson#284
JustAnotherArchivist added a commit to JustAnotherArchivist/ultrajson that referenced this pull request May 30, 2022
This allows surrogates anywhere in the input, compatible with the json module from the standard library.

This also refactors two interfaces:
- The `PyUnicode` to `char*` conversion is moved into its own function, separated from the `JSONTypeContext` handling, so it can be reused for other things in the future (e.g. indentation and separators) which don't have a type context.
- Converting the `char*` output to a Python string with surrogates intact requires the string length for `PyUnicode_Decode` & Co. While `strlen` could be used, the length is already known inside the encoder, so the encoder function now also takes an extra `size_t` pointer argument to return that and no longer NUL-terminates the string. This also permits output that contains NUL bytes (even though that would be invalid JSON), e.g. if an object's `__json__` method return value were to contain them.

Fixes ultrajson#156
Fixes ultrajson#447
Fixes ultrajson#537
Supersedes ultrajson#284
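The length-reporting point in the commit message can be illustrated with a small sketch (the buffer content is hypothetical): a C `strlen`-style scan stops at the first NUL byte, so an encoder whose output may contain NULs must return the true length explicitly instead of relying on NUL termination.

```python
buf = b'"a\x00b"'            # hypothetical encoder output containing a NUL
strlen_like = buf.index(0)   # what a C strlen-style scan would report
assert strlen_like == 2      # scan stops at the embedded NUL
assert len(buf) == 5         # the full length the encoder already tracks
```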
@hugovk (Member) commented Jun 1, 2022

Superseded by #530. Thanks!

@hugovk closed this Jun 1, 2022
Successfully merging this pull request may close these issues.

UltraJson doesn't behave the same way as json.JSONEncoder for unicode chars