freenode/#sicl - IRC Chatlog
9:00:27
heisig
Now that I think about it, there are several problems with this idea. But it might be possible to overcome those.
9:05:49
beach
I don't get it. If I define a macro M then the macro function of M is going to be called during macroexpansion. How do you propose writing a custom version of that macro function automatically?
9:07:53
heisig
The custom version would be identical to the regular one, except that it would internally use CSTs instead of objects.
9:09:26
heisig
One of the problems is when such a CSTified object is placed in a closure or a global data structure.
9:09:37
beach
So if my macro definition of M calls, say, REVERSE, a CST version of REVERSE would automatically be generated as well?
9:10:37
heisig
Right. I suggest this translation be made lazily, as soon as a function is called for the first time in a macroexpansion context.
9:11:35
heisig
Now that I think about it some more, there definitely needs to be a way to 'give up' this special processing and to fall back to the usual one.
9:12:14
beach
Because otherwise we would have to keep all ASTs around, including those of proprietary application code.
9:14:21
beach
But I smell some decidability issues here. Like which objects are CSTs and which aren't.
9:15:10
beach
And what about the code for CST itself? Are the objects it manipulates CSTs or not in this context?
9:18:32
heisig
I would say every object is CSTified unconditionally. And the fact that all objects are actually CSTs is only visible to the special macroexpansion code.
9:19:03
heisig
As soon as there is some confusion (like when storing a CSTified object in a global, specialized array), we give up.
9:20:10
splittist
Regexes can be documented. Not a great example, but one I have to hand: https://github.com/splittist/docxdjula/blob/master/regex.lisp
9:20:59
heisig
You may be right, but I haven't found them yet. If an actual CST is handled by the macroexpansion version of a function, it would be stored as a CST whose value is that CST.
9:23:43
heisig
Anyway, introducing such a feature is not urgent. I just wanted to write it down for the logs.
9:23:44
beach
Suppose someone who knows nothing about this special expansion idea writes a macro that contains a CST as a constant. If the macro expander treats that constant as a CST it would change the meaning compared to what the macro author intended.
9:26:43
heisig
Those two CSTs would be on different planes of existence (I hope) and couldn't be confused. Think of the CSTs used when executing such a macro as a special object representation, where objects are represented as (actual-object origin) pairs.
9:28:43
heisig
So the only places where confusion can happen are when such a special macroexpansion function is invoked, and when such a magic object is stored in a shared place.
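The (actual-object origin) pair representation heisig describes can be sketched as follows. This is an illustrative Python model, not SICL code: the names `CST`, `raw`, and `cst_reverse` are invented for the sketch. It shows how macroexpansion-level code could carry origins alongside values while ordinary accessors see only the raw object.

```python
# Illustrative sketch of the (actual-object, origin) pair idea:
# during macroexpansion every value is wrapped together with its source
# origin, and CST-aware operations propagate origins; ordinary code
# unwraps to the underlying object and never sees the wrapper.

class CST:
    """A value paired with the source origin it came from."""
    def __init__(self, value, origin):
        self.value = value
        self.origin = origin

def raw(x):
    """What non-macroexpansion code sees: just the underlying object."""
    return x.value if isinstance(x, CST) else x

def cst_reverse(lst_cst):
    """CST-aware counterpart of REVERSE: reverses the wrapped list while
    keeping each element's origin, and reuses the input's origin for the
    result (one possible convention among several)."""
    return CST(list(reversed(lst_cst.value)), lst_cst.origin)

form = CST([CST('a', 'line 1'), CST('b', 'line 2')], 'line 1')
rev = cst_reverse(form)
print([raw(e) for e in raw(rev)])   # ['b', 'a']
print(raw(rev)[0].origin)           # 'line 2'
```

The point of the sketch is that the two "planes of existence" are kept apart by construction: a CST constant appearing inside a macro body would simply be wrapped one level deeper, as a CST whose value is that CST.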
9:46:27
beach
Meanwhile, I need to figure out a better way of explaining the register-allocation algorithm.
9:53:09
heisig
One thing I haven't understood yet about register allocation in SICL - why don't we start by defining a register allocation interface and providing a simple default implementation?
9:55:16
beach
That's pretty much what I am doing, but for x86 only. I have abstracted out EDU, and "allocate a new register". The rest is imposed by the x86 constraints.
9:58:13
heisig
Could these constraints be formulated in a way that is independent of the architecture?
10:00:08
beach
That is also what I am doing. I am currently translating MIR instructions such as c <- a op b into r <- r op s where a and b are lexical locations or immediate inputs, c is a lexical location, and r and s are registers.
10:00:47
beach
So the architecture-independent description is that the destination must be the same as the first operand.
10:03:01
beach
Er, s can also be an immediate, of course. But I am currently simplifying so that s cannot be a stack location.
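The constraint beach describes (destination equal to the first operand) is the classic x86 two-address form, and the translation of three-address MIR into it can be sketched in a few lines. The instruction representation and register names below are hypothetical, for illustration only:

```python
# Sketch: rewriting three-address MIR 'c <- a op b' into two-address
# form 'r <- r op s', where the destination must be the same as the
# first operand (the x86 constraint mentioned above). The tuple-based
# instruction format is an assumption of this sketch.

def to_two_address(dest, op, a, b):
    """Return two-address instructions computing dest = a op b."""
    insns = []
    if dest != a:
        insns.append(('move', dest, a))   # r <- a, so the op can clobber r
    insns.append((op, dest, b))           # r <- r op s
    return insns

print(to_two_address('r1', 'add', 'r2', 'r3'))
# [('move', 'r1', 'r2'), ('add', 'r1', 'r3')]
print(to_two_address('r1', 'add', 'r1', '#5'))   # immediate second operand
# [('add', 'r1', '#5')]
```

The sketch ignores the case where dest and b name the same register (which needs a scratch register or operand commuting); a real allocator has to handle that.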
10:09:48
heisig
I was thinking about using a domain-specific logic programming language. (But I admit I don't have much experience with logic programming)
10:15:59
beach
Logic programming is usually a great way to turn a polynomial algorithm into an exponential one.
20:36:53
no-defun-allowed
I had an idea for handling a metadata table with a concurrent hash table. When adding a new mapping, if we have to use the next group because the first is full, we set a bit somewhere indicating the first group was full, instead of trying to observe whether a group is empty when removing, and CASing entire groups.
20:41:25
no-defun-allowed
This trick is an application of the "if a tree falls in the forest..." thought experiment, as we only track when a group is observed to be full. So a group that fills up and then has some mappings removed, without another insert operation observing that it was full, will not be marked as filled.
20:46:50
no-defun-allowed
Around 47:26 in the CppCon talk, Kulukundis discusses a group that fits into a cache line, using 7 hash bytes, 7 presence bits, and 1 "ever full?" bit.
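The "ever full?" bit can be sketched as a toy, non-concurrent Python table. The group size, the linear group probing, and all names here are assumptions for illustration; the real structure would pack hash bytes, presence bits, and the flag into one cache line and use atomic operations:

```python
# Toy sketch of the "ever full?" bit per group: when an insert overflows
# into the next group, the full group is marked, and lookups keep probing
# past a group only if it was ever marked full. Removal never clears the
# mark, so nothing needs to observe whether a group has become empty.

GROUP_SIZE = 7  # mirrors the 7-slot group from the talk

class Group:
    def __init__(self):
        self.slots = {}          # key -> value, at most GROUP_SIZE entries
        self.ever_full = False

class GroupedTable:
    """Assumes the table never fills completely (no resize in this toy)."""
    def __init__(self, ngroups=8):
        self.groups = [Group() for _ in range(ngroups)]

    def _home(self, key):
        return hash(key) % len(self.groups)

    def insert(self, key, value):
        i = self._home(key)
        while True:
            g = self.groups[i]
            if key in g.slots or len(g.slots) < GROUP_SIZE:
                g.slots[key] = value
                return
            g.ever_full = True               # overflow: mark, then move on
            i = (i + 1) % len(self.groups)

    def lookup(self, key):
        i = self._home(key)
        while True:
            g = self.groups[i]
            if key in g.slots:
                return g.slots[key]
            if not g.ever_full:              # never overflowed: key absent
                return None
            i = (i + 1) % len(self.groups)

    def remove(self, key):
        i = self._home(key)
        while True:
            g = self.groups[i]
            if key in g.slots:
                del g.slots[key]             # note: ever_full stays set
                return
            if not g.ever_full:
                return
            i = (i + 1) % len(self.groups)
```

The trade-off is exactly the one described above: a group that overflowed once is probed past forever, even after deletions empty it, in exchange for never having to CAS or scan whole groups on removal.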
20:51:56
no-defun-allowed
Also, in the comments, he says he didn't see a benefit from using 32-byte groups (think 256-bit AVX2 instructions) over 16-byte ones. Implementing the latter seems hard though, both with general-purpose instructions fooled into byte-parallelism and with SIMD instructions.
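The "general-purpose instructions fooled into byte-parallelism" approach is usually called SWAR, and can be sketched on an 8-byte control word (half a 16-byte group) in Python. The constants are the standard zero-byte-detection masks; everything else is an illustrative assumption. Note that the borrow in the subtraction can over-report a lane adjacent to a real match, so candidates must still be verified against the stored keys, just as a real table verifies after a SIMD match:

```python
# SWAR sketch: find lanes of an 8-byte control word that may equal a
# given hash byte, using only 64-bit arithmetic. Returned positions are
# candidates and must be confirmed by comparing the actual keys.

LOW_BITS  = 0x0101010101010101   # 0x01 in every byte lane
HIGH_BITS = 0x8080808080808080   # 0x80 in every byte lane
MASK64    = 0xFFFFFFFFFFFFFFFF

def broadcast(byte):
    """Copy one byte into all 8 lanes of a 64-bit word."""
    return byte * LOW_BITS

def match_byte(word, byte):
    """High bit of each lane is set where the lane may equal 'byte'."""
    x = word ^ broadcast(byte)                   # matching lanes become 0
    return ((x - LOW_BITS) & ~x & HIGH_BITS) & MASK64

ctrl = int.from_bytes(
    bytes([0x12, 0x7F, 0x12, 0x00, 0x45, 0x12, 0x33, 0x01]), 'little')
m = match_byte(ctrl, 0x12)
candidates = [i for i in range(8) if m >> (8 * i + 7) & 1]
print(candidates)   # [0, 2, 5]
```

A SIMD version replaces the whole dance with one broadcast, one byte-compare, and one movemask over the full 16-byte group, which is part of why the SWAR variant is the awkward one to get right.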