Support Unicode
Closed this issue · 14 comments
Currently only ASCII is supported in strings. It should not be too hard to accept UTF-8 (raising an error for invalid input), and adjust internal string routines to support unparsing those strings correctly, as well as routines for iterating over codepoints, correctly determining the length (in codepoints), etc.
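For example, determining the length in codepoints does not require fully decoding the string; a minimal sketch, assuming the bytes have already been validated as UTF-8 (the helper name is illustrative, not existing code):

#include <cstddef>
#include <string>

// Count codepoints by skipping UTF-8 continuation bytes (10xxxxxx).
// Assumes the string has already been validated as UTF-8.
std::size_t utf8_codepoint_length(const std::string &s)
{
    std::size_t n = 0;
    for (unsigned char c : s)
        if ((c & 0xC0) != 0x80)
            ++n;
    return n;
}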
This would be great. How painful did this change look to be? I might be able to contribute if it's not huge.
My plan was to avoid having a dependency on ICU -- store everything internally as wstring and assume that wchar is a unicode codepoint. Then we just need to tweak the lexer to parse utf8 in string literals and the string output function to render it back as utf8. It shouldn't be too hard as I left some placeholders and TODOs in there. You're very welcome to have a try at it.
I suggest 1) modifying the internal string representation in state.h, 2) modifying the output code to encode utf8 and testing it with std.char(x) for x > 127, and 3) modifying the lexer to parse utf8. It would be possible to run all tests and commit upstream at each intermediate point.
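For step 2, the output side amounts to appending the UTF-8 bytes for each codepoint; a hedged sketch of such an encoder (illustrative only, not the actual Jsonnet code):

#include <string>

// Append the UTF-8 encoding of one codepoint to out.  Sketch only: a real
// implementation should also reject codepoints above 0x10FFFF and surrogates.
void append_utf8(char32_t cp, std::string &out)
{
    if (cp < 0x80) {                    // 0xxxxxxx
        out += char(cp);
    } else if (cp < 0x800) {            // 110yyyxx 10xxxxxx
        out += char(0xC0 | (cp >> 6));
        out += char(0x80 | (cp & 0x3F));
    } else if (cp < 0x10000) {          // 1110yyyy 10yyyyxx 10xxxxxx
        out += char(0xE0 | (cp >> 12));
        out += char(0x80 | ((cp >> 6) & 0x3F));
        out += char(0x80 | (cp & 0x3F));
    } else {                            // 11110zzz 10zzyyyy 10yyyyxx 10xxxxxx
        out += char(0xF0 | (cp >> 18));
        out += char(0x80 | ((cp >> 12) & 0x3F));
        out += char(0x80 | ((cp >> 6) & 0x3F));
        out += char(0x80 | (cp & 0x3F));
    }
}

With something like that wired into the output code, std.char(955) should produce the two bytes 0xCE 0xBB ("λ").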
Great, thanks for the info @sparkprime. I'll update here if I get a chance to try it; I need more emoji in my json.
I really love Jsonnet BTW. My team is using it along with ApiDoc to create API documentation that doubles as a mock API server for developing apps against APIs that aren't finished yet.
Glad you like it!
I did some reading and it seems wstring is not what we want because it has UTF-16 behavior on Windows. So we probably need to do something like
typedef std::basic_string<char32_t> JsonnetString;
with functions to convert from UTF8-encoded std::string to that and back.
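As a rough sketch of the shape of that boundary layer, using the C++11 codecvt facet just to illustrate (it is deprecated in C++17, and the names here are placeholders, not what actually landed):

#include <codecvt>
#include <locale>
#include <string>

// One char32_t per Unicode codepoint, so length and indexing are in
// codepoints on every platform (unlike wchar_t, which is 16 bits on Windows).
typedef std::basic_string<char32_t> JsonnetString;

// UTF-8 <-> UTF-32 conversions at the boundaries (lexer input, manifested
// output, extVar keys, filenames).
JsonnetString utf8_to_u32(const std::string &utf8)
{
    std::wstring_convert<std::codecvt_utf8<char32_t>, char32_t> conv;
    return conv.from_bytes(utf8);  // throws std::range_error on invalid UTF-8
}

std::string u32_to_utf8(const JsonnetString &str)
{
    std::wstring_convert<std::codecvt_utf8<char32_t>, char32_t> conv;
    return conv.to_bytes(str);
}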
There are a bunch of places where the HeapString internal representation leaks out into other places as well, e.g. field names, std.extVar() keys, filenames (from std.thisFile) etc.
@hotdog929 you may be interested
I'm going to have a go at this because I think it's probably harder / more work than I originally thought.
That was a productive 4 hours ;)
Wow @sparkprime, way to kill it!!
Nice! :D
Perhaps I should also add a jsonnet_test Bazel rule since it is possible to write tests in Jsonnet, such as the unicode.jsonnet test you just added. :)
Looks like normal unicode characters are working fine, but longer sequences for emoji (like 🚀 -- "\xF0\x9F\x9A\x80") always become the sequence "\xEF\xBF\xBD\xEF\xBF\xBD\xEF\xBF\xBD\xEF\xBF\xBD" (four U+FFFD replacement characters).
I'm suspicious of the encode_utf8 method, but I'm struggling to understand what all the bit masking and shifting is doing.
I think I have a fix; it looks like a typo on this line:
} else if ((c0 & 0xF8) == 0xF) { //11110zzz 10zzyyyy 10yyyyxx 10xxxxxx
Changing that to the following seems more correct:
} else if ((c0 & 0xF8) == 0xF0) { //11110zzz 10zzyyyy 10yyyyxx 10xxxxxx
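(That checks out: c0 & 0xF8 keeps the top five bits of the lead byte, so it can equal 0xF0 but can never equal 0x0F, which is why the 4-byte branch was never taken.) For anyone else puzzled by the masking and shifting, here is a sketch of what that branch computes once it is reachable, assuming unsigned bytes c0..c3 (illustrative only, not the exact code in the tree):

// Decode one 4-byte UTF-8 sequence: lead byte 11110zzz followed by three
// continuation bytes 10zzyyyy 10yyyyxx 10xxxxxx.  Sketch only: real code must
// also check that each continuation byte matches 10xxxxxx and that the result
// is in range (0x10000..0x10FFFF).
char32_t decode_utf8_4(unsigned char c0, unsigned char c1,
                       unsigned char c2, unsigned char c3)
{
    return char32_t(((c0 & 0x07) << 18)   // low 3 bits of the lead byte
                  | ((c1 & 0x3F) << 12)   // low 6 bits of each continuation byte
                  | ((c2 & 0x3F) << 6)
                  |  (c3 & 0x3F));
}

Feeding the bytes \xF0\x9F\x9A\x80 through that yields 0x1F680, i.e. the rocket.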
Submitted a fix as #78.
I didn't see an easy way to test this, as the \u escape only supports 4 hex digits (i.e. up to character code 0xFFFF). So adding this is invalid:
std.assertEqual("\u1F680", "🚀") &&
One solution for testing could be to add support for the ECMAScript 6 code point escapes (like \u{1F680}).
If you have another idea for testing, I'd love to hear it!
Thanks for tracking this down!
I suppose you can do things like "🚀🚀🚀"[1] which should == "🚀".
\u{XXX} should be a no-brainer though, it could be added in the lexer quite easily.
I have been worried for a long time about the limitation of \u and whether it's necessary to support e.g. things like this as well https://bugs.launchpad.net/zorba/+bug/1024448
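If it helps, a minimal sketch of what a \u{...} branch could look like in the lexer (hypothetical helper and error handling, not the actual Jsonnet lexer code):

#include <cctype>
#include <stdexcept>
#include <string>

// Parse a brace-delimited codepoint escape such as \u{1F680}.  On entry, c
// points just past the "\u{"; on success it is left just past the '}'.
// Hypothetical helper: the real lexer would report errors with location info.
char32_t lex_codepoint_escape(const char *&c)
{
    std::string digits;
    while (*c != '}') {
        if (*c == '\0' || !std::isxdigit(static_cast<unsigned char>(*c)))
            throw std::runtime_error("invalid \\u{...} escape");
        digits += *c++;
    }
    ++c;  // consume the '}'
    if (digits.empty() || digits.size() > 6)
        throw std::runtime_error("invalid \\u{...} escape");
    char32_t cp = char32_t(std::stoul(digits, nullptr, 16));
    if (cp > 0x10FFFF)
        throw std::runtime_error("codepoint out of range in \\u{...} escape");
    return cp;
}

That would make tests like std.assertEqual("\u{1F680}", "🚀") expressible directly.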
No problem! It was enlightening to learn more about the inner workings of Unicode.