Parser structure #939
Comments
There is no real reason for it. Personally, I really like this idea, especially after csstree showed what a great parser it has. Do you want to update the current parser and tokenizer? Or you could take …
I could do both; IMO it's a question for you and the core team which would be better for further plans.
@ai OK, I'm going to read through …
@ai As far as I can see, it is quite possible to backport the tokenizer from …
Awesome! 😍🎅 Feel free to ask for any help.
Could you show a structure example? But I think separating the tokenizer and parser is a good thing.
Externalizing the tokenizer to a separate package is an even better thing. :D Such tokenizers work well for JSON parsing too. :) Just saw …
@tunnckoCore you could write a much better tokenizer for a JSON-only format. :)
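To illustrate the point about a format-specific tokenizer, here is a minimal sketch of a tokenizer specialized for JSON-like input. All names here (`jsonTokenize`, the token shapes) are hypothetical and are not part of PostCSS or any existing package; it is only meant to show how little state a single-format tokenizer needs.

```javascript
// Minimal sketch of a JSON-only tokenizer (illustrative names, not a real API).
function jsonTokenize(input) {
  const tokens = [];
  const punct = new Set(['{', '}', '[', ']', ':', ',']);
  let pos = 0;
  while (pos < input.length) {
    const ch = input[pos];
    if (ch === ' ' || ch === '\t' || ch === '\n' || ch === '\r') {
      pos += 1; // skip whitespace
    } else if (punct.has(ch)) {
      tokens.push({ type: 'punct', value: ch, pos });
      pos += 1;
    } else if (ch === '"') {
      // scan a string, honoring backslash escapes
      let end = pos + 1;
      while (end < input.length && input[end] !== '"') {
        if (input[end] === '\\') end += 1;
        end += 1;
      }
      tokens.push({ type: 'string', value: input.slice(pos, end + 1), pos });
      pos = end + 1;
    } else {
      // numbers and the literals true/false/null: read until a delimiter
      let end = pos;
      while (
        end < input.length &&
        !punct.has(input[end]) &&
        !' \t\n\r'.includes(input[end])
      ) {
        end += 1;
      }
      tokens.push({ type: 'literal', value: input.slice(pos, end), pos });
      pos = end;
    }
  }
  return tokens;
}
```

Because the grammar is so small, every branch is a one-pass scan with no backtracking, which is why a dedicated tokenizer can beat a general-purpose one.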
I use this option in Safe Parser. |
Done in …
While reading through the `postcss` sources, I noticed that the current parsing architecture scans the whole source into a token array first, and only then parses those tokens. While it works fine, I am really wondering if there are any reasons for not using token streaming like `babylon` does, for example, where you plug the tokenizer into the parser, or expose something like `readNextToken`? In this manner you only scan the source once, instead of scanning the source and then iterating over the tokens, which could provide a significant performance boost.
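The difference between the two designs discussed above can be sketched as follows. This is a toy illustration, not the actual PostCSS tokenizer: `makeTokenizer`, `readNextToken`, and `parseWords` are hypothetical names, and the "tokens" are just whitespace-separated words.

```javascript
// Design A: tokenize everything up front, then let the parser iterate
// the resulting array (two passes over the data).
function tokenizeAll(tokenizer) {
  const tokens = [];
  let token;
  while ((token = tokenizer.readNextToken()) !== undefined) tokens.push(token);
  return tokens;
}

// Design B: expose a pull-style readNextToken so the parser consumes
// tokens as they are produced - the source is traversed only once.
function makeTokenizer(source) {
  let pos = 0;
  return {
    readNextToken() {
      while (pos < source.length && source[pos] === ' ') pos += 1; // skip spaces
      if (pos >= source.length) return undefined;
      const start = pos;
      while (pos < source.length && source[pos] !== ' ') pos += 1;
      return { type: 'word', value: source.slice(start, pos), start };
    },
  };
}

// A toy "parser" that pulls tokens on demand instead of building an array:
function parseWords(source) {
  const tokenizer = makeTokenizer(source);
  const words = [];
  let token;
  while ((token = tokenizer.readNextToken()) !== undefined) {
    words.push(token.value);
  }
  return words;
}
```

The streaming variant never materializes the full token list, so memory stays flat regardless of input size, and the parser can stop early (e.g. on a syntax error) without paying for tokenizing the rest of the file.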