freenode/#lisp - IRC Chatlog
3:00:54
drmeister
pjb: We switched to using unlinked files - that's the approach GNU make uses when you specify -j<#>.
3:01:24
drmeister
So we let the children finish writing into the files and then the parent reads the results out.
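The scheme drmeister describes can be sketched in Python (a hypothetical stand-in for what Clasp does in C++, assuming POSIX semantics): the file is created and unlinked immediately, so it has no name and vanishes when the last descriptor closes; the child finishes writing into it, and the parent reads the results out afterward. `tempfile.TemporaryFile` does the create-then-unlink step for you on POSIX.

```python
import os
import tempfile

# Hypothetical sketch of the unlinked-file approach described above.
# On POSIX, tempfile.TemporaryFile creates a file and unlinks it right
# away, so it disappears when the last file descriptor is closed.
with tempfile.TemporaryFile(mode="w+") as out:
    pid = os.fork()
    if pid == 0:
        # Child: finish writing results into the anonymous file.
        out.write("result from child\n")
        out.flush()
        os._exit(0)
    # Parent: wait for the child, then read the results out.
    os.waitpid(pid, 0)
    out.seek(0)
    data = out.read()
```

Because the file was unlinked up front, no cleanup is needed even if a child crashes - the kernel reclaims it when the descriptors close.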
3:08:50
ck_
I haven't been here in a long while. What has changed? Has Clasp been finished, can I compute a supermolecule a la carte with it already?
3:23:41
ck_
Common Lisp seems pretty finished, especially compared to newer languages. I could tell you some stories about Clojure meetups...
3:26:02
aeth
What happens is one implementation adds an extension, and if it's popular it slowly trickles everywhere else, either directly or with a portability library over it. This slow process has only recently gotten the "package-local nicknames" extension into most implementations (at least the ones users actually use)
3:26:27
aeth
So even the language itself isn't static. Most notably stuff like MOP, CFFI, bordeaux-threads, etc.
3:26:40
drmeister
ck_: Both lldb/gdb and slime. I'm using the standard C++ approach of DWARF metadata and exposing that to slime.
3:26:55
ck_
I already yielded. What I meant by that line is: compared to newer, even Lisp-like languages, I very often want my Common Lisp job back because, comparatively, it is at a much higher level of maturity
9:25:50
LdBeth
If a language is very terse (TECO), where almost every token is one character, is there any possibility of benefiting from using a parser generator?
9:45:43
beach
Then it has nothing to do with the tokens themselves, but with the fact that the grammar is very simple too.
9:49:37
beach
Implementing a typical language often consists of defining two separate things, namely a lexer and a parser. The lexer turns sequences of characters into tokens, and the parser turns the sequence of tokens into a parse tree (usually). If the tokens are simple, the lexer is simple. If the grammar for combining tokens into parse trees is simple, then the parser is simple. The two are orthogonal.
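The two orthogonal stages can be illustrated with a toy Python sketch (the mini-language of '+' and '-' commands is invented here purely for illustration): the lexer is trivial because every token is a single character, and the parser is trivial because the grammar is just a flat sequence of commands.

```python
def lex(source):
    """Lexer: turn a sequence of characters into tokens.
    Trivial here because every token is one character."""
    for ch in source:
        if ch in "+-":
            yield ch
        elif not ch.isspace():
            raise SyntaxError(f"unexpected character {ch!r}")

def parse(tokens):
    """Parser: turn the token sequence into a (flat) parse tree.
    Trivial here because the grammar is just a sequence."""
    return [("incr" if tok == "+" else "decr") for tok in tokens]

tree = parse(lex("+ + - +"))  # ['incr', 'incr', 'decr', 'incr']
```

A richer grammar (say, loops or grouping) would complicate only `parse`; longer multi-character tokens would complicate only `lex`.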
9:55:18
aeth
I wonder where optimizations fit. These are especially key when you have a minimal command language where you're supposed to compose things. One case would be +++++, which in a hypothetical simple language, taken literally, would mean (progn (incf x 1) (incf x 1) (incf x 1) (incf x 1) (incf x 1)), but could be optimized to (incf x 5)
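One way to realize the +++++ example is a run-length pass over the op stream; whether it runs in the lexer, the parser, or a later compiler pass is exactly the design question being raised. A hypothetical Python sketch of such a pass:

```python
from itertools import groupby

def coalesce(ops):
    """Collapse runs of identical operations into (op, count) pairs,
    so five 'incr' ops become one ('incr', 5) -- the (incf x 5) form."""
    return [(op, len(list(run))) for op, run in groupby(ops)]

coalesce(["incr"] * 5)              # [('incr', 5)]
coalesce(["incr", "decr", "decr"])  # [('incr', 1), ('decr', 2)]
```

Written this way, the pass only needs a flat sequence of ops, so it could sit at any stage that has one; pushing it past parsing keeps both the lexer and parser simple.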
9:55:34
aeth
It seems like this kind of thing could be done at either the lexer or the parser, and would complicate even a seemingly simple language.
9:57:57
beach
The role of the lexer and that of the parser are often blurred. In most real languages, a lexer is in fact not possible as a first step. It has to know the context as defined by the grammar. Also, there are parsing techniques that do not need a lexer at all.
10:29:58
White_Flame
aeth: generally the parser does not optimize. Once you have the parse tree or AST or whatever, then you perform that sort of analysis in the compiler
11:32:18
White_Flame
I guess that depends on whether that concatenation is a lexical construct, a parser construct, or neither
11:33:10
White_Flame
obviously your language definition could demand that at any level, or leave it optional for the compiler to optimize. It would be fairly transparent to the actual source code, but not to implementers
12:29:48
ck_
it reminded me to see whether the source is available, or somebody ported it to today's CL