libera/#sicl - IRC Chatlog
3:07:03
hayley
After scymtym and flip214 helped me out in #sbcl, I modified my regular expression compiler to produce better code, and it can now scan up to 1.2 billion characters a second on one core (which is about 2.5 cycles per character).
3:08:00
hayley
But, as you found with the HIR compiler, Common Lisp compilers can get pretty slow with more complex regular expressions. So I am currently thinking to use a "tiered compilation" strategy, where we use a chain of closures (like the HIR evaluator) for infrequently used regular expressions.
3:11:32
hayley
The chain of closures would presumably still be quite fast, as it represents a DFA and not an NFA, and type specialisation could still be applied with closures.
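(A sketch of the chain-of-closures idea, in Python for illustration; hayley's library is in Common Lisp, and these state names and the exact representation are assumptions. Each DFA state is a closure that consumes one character and returns the next state's closure, so matching is just repeated closure calls with no compilation step.)

```python
def matcher():
    """Hand-built closure-chain DFA for the regex a b* c (illustrative only)."""
    # Each state closure takes one character and returns the next state closure.
    def reject(c):
        return reject          # dead state: stays rejecting forever
    def done(c):
        return reject          # accepting only if input ends exactly here
    def s_bstar(c):            # after 'a': loop on 'b', accept on 'c'
        if c == 'b':
            return s_bstar
        if c == 'c':
            return done
        return reject
    def s_start(c):            # initial state: must see 'a'
        return s_bstar if c == 'a' else reject

    def match(text):
        state = s_start
        for ch in text:
            state = state(ch)
        return state is done   # accepted iff we ended in the accepting state
    return match
```

Because every closure's transitions are fixed per input character, this is a DFA in the same sense as a table-driven one; the closures merely replace the transition table, which is why it can serve as a cheap lower tier before full compilation.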
3:15:14
hayley
stylewarning already wants to use my library to make the Quil (his quantum programming language) parser run faster, but I've never heard of a parser causing performance issues. And someone else has offered to implement a parser for POSIX regular expressions.
3:17:30
beach
You mean a parser that uses your regular expression software to parse POSIX regular expressions? As opposed to modifying your software to accept POSIX regular expressions?
3:19:11
hayley
The latter. Currently I use a made-up syntax which approximates the mathematical notation for regular expressions.
3:20:40
hayley
Right, the language of regular expressions is not regular. It looks very strange to use a parser for context-free grammars in a regular expression implementation :)
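(Why the syntax of regular expressions is itself not regular: groups nest, so a parser must match parentheses to arbitrary depth, which no finite automaton can do. A minimal illustrative check, not part of any library mentioned above:)

```python
def balanced(pattern):
    """Check that the groups in a regex pattern nest properly.

    Tracking nesting depth needs an unbounded counter, which is exactly
    what puts regex syntax beyond the regular languages.
    """
    depth = 0
    for ch in pattern:
        if ch == '(':
            depth += 1
        elif ch == ')':
            depth -= 1
            if depth < 0:      # closing paren with no matching open
                return False
    return depth == 0
```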
5:31:46
tich
I would like to chat about my intentions a bit, in case you are wondering why I pop up and don't code.
5:44:34
hayley
In other news, which you might or might not have spotted already, there will eventually be a new edition of the Garbage Collection Handbook <https://twitter.com/profrejones/status/1438216193245069314>
6:18:09
tich
beach: Thanks. I am still learning RISC-V; once I am done I can start to work on the assembler.