davesnx/styled-ppx

Unify CSS parser and lexer

davesnx opened this issue · 3 comments

css_parser.mly parses stylesheets/selectors/declarations/properties and values

Parser.re currently parses (and type-checks) properties and values. (There's code in Parser.re that could make it possible to parse stylesheets, selectors and everything else, but it isn't used.)

Parser.re uses the value.rec ppx, which generates the parser functions for each property, while css_parser.mly uses menhir. I'm more inclined towards a hand-written parser, or pushing for menhir's Incremental API in order to call the value.rec parsers, but it remains to be seen whether that's a good idea.
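
For reference, a driver on top of menhir's Incremental API could look roughly like the sketch below. Everything here is illustrative: it assumes css_parser.mly is compiled with menhir's `--table` backend, and the names `Css_parser.Incremental.stylesheet` and `next_token` are placeholders, not the current code.

```ocaml
(* Sketch only: assumes the menhir --table backend and placeholder names. *)
module I = Css_parser.MenhirInterpreter

let parse ~next_token (start : 'a I.checkpoint) : ('a, string) result =
  let rec loop checkpoint =
    match checkpoint with
    | I.InputNeeded _env ->
        (* The hook we care about: here we decide which token to feed next,
           so a run of value tokens could be handed to a value.rec parser
           instead of being consumed by the grammar. *)
        loop (I.offer checkpoint (next_token ()))
    | I.Shifting _ | I.AboutToReduce _ ->
        loop (I.resume checkpoint)
    | I.HandlingError _env -> Error "syntax error"
    | I.Accepted value -> Ok value
    | I.Rejected -> Error "input rejected"
  in
  loop start

(* Usage would be something like:
   parse ~next_token (Css_parser.Incremental.stylesheet lexbuf.Lexing.lex_curr_p) *)
```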

The driver calls one or the other depending on the stage of parsing. To make progress on #429 we need more control over the parsing phase.

Resources

Mix the lexer and parser like https://github.com/mnxn/eon, and take a look at Sedlexing here: https://github.com/FStarLang/FStar/pull/2203/files

Currently we have 2 lexers that live inside Css_lexer, each with its own test suite: Tokenizer_test and css_lexer_test.

We need to unify them: expose a single API that tokenizes a string of CSS, treats errors as an exception (Css_lexer.Error, since menhir needs one), and runs under a single test suite.
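
Something like the following module type could pin down that single entry point; the names and the token representation are placeholders, not the current Css_lexer API.

```ocaml
(* Illustrative only: a possible shape for the unified lexer interface. *)
module type UNIFIED_CSS_LEXER = sig
  type token

  (* Single error type, raised as an exception so menhir-driven code
     can catch it uniformly. *)
  exception Error of string * Lexing.position

  (* One entry point: tokenize a whole CSS string, keeping start/end
     positions for each token so locations can be reused downstream. *)
  val tokenize : string -> (token * Lexing.position * Lexing.position) list
end
```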

Once the lexers are unified we'll have only one, but we'll still have 2 separate parsers built with 2 different techniques (menhir on one side; rules with combinators and the ppx on the other).

It would be nice to join them into the same package. There are a few benefits of having them together:

  • No more source_of_loc
  • Reuse locations, errors and machinery

There's a bit of work left to unify the lexers' API: we have from_string and tokenize, where one returns a recursive token structure and the other a list of tokens. Both carry locations; it's a matter of settling on just one of them.
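
If the flat tokenize becomes the single entry point, the recursive shape could be derived from it. A minimal sketch, with invented token and tree types and locations left out for brevity:

```ocaml
(* Sketch with made-up types: [flat] stands in for the lexer's token type,
   [tree] for the recursive structure. *)
type flat = Lparen | Rparen | Word of string

type tree =
  | Token of flat       (* a plain token *)
  | Group of tree list  (* everything between a matching ( ... ) *)

(* Turn a flat token list into the nested shape, so only one lexer API
   ([tokenize]) is needed and the recursive output becomes derived. *)
let rec nest (tokens : flat list) : tree list * flat list =
  match tokens with
  | [] -> ([], [])
  | Rparen :: rest -> ([], rest)
  | Lparen :: rest ->
      let inner, rest = nest rest in
      let siblings, rest = nest rest in
      (Group inner :: siblings, rest)
  | tok :: rest ->
      let siblings, rest = nest rest in
      (Token tok :: siblings, rest)

let to_tree tokens = fst (nest tokens)
```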