libera/#sicl - IRC Chatlog
5:39:21
moon-child
out of curiosity, does closos stand for anything in particular? 'common lisp object system operating system'? 'common lisp operating system (the operating system)'?
5:45:02
moon-child
my presumption was that it would have been 'common lisp operating system', but 'clos' was taken
5:46:32
mfiano
beach is a strong proponent of CLOS, and will yell it loudly if he gets the chance, and he did.
5:51:17
jcowan
SICL (and a fortiori CLOSOS) is CLOS-first: everything is a standard object except conses and immediates.
5:51:28
Bike
huh, the pdf doesn't actually mention what it stands for that i can see from a quick skim
5:55:25
jcowan
I'm not even sure that making conses not-standard objects is the Right Thing. Yes, it saves space, but it's not clear that conses really chew up so much space in toto. I don't know of any Scheme (except perhaps MIT Scheme) that uses a 2-word cell for conses.
6:11:43
hayley
If recompilation is acceptable, an indirection could be saved by only taking a Brooks read barrier while there are obsolete instances. Else it can be assumed that the read barrier is idempotent anyway. (Can instances be updated in the background? In another thread?)
6:12:49
beach
I think CONSes are used a lot for temporary things like lists, so there is a lot of allocation and garbage collection devoted to those. But this opinion is not based on any concrete evidence.
6:14:22
hayley
If the class CONS can never be redefined, then there is never a need for a forwarding pointer, as no code would ever try to follow a forwarding pointer. So a cons cell can be two words and like a standard object, by making other standard objects more like cons cells.
6:21:19
hayley
Like the omnipresent debugger, the barrier would be optimised (which I borrowed from Stopless, for anyone playing at home) by having versions of code with and without barriers. I'm also left to wonder if the barrier suffices to implement concurrent compaction too, since that's where the idea comes from.
6:22:21
moon-child
by the by: can we have 32-bit class + 32-bit hash, and then fix all the objects once somebody makes a hash table with >4g elements?
6:31:13
hayley
moon-child: Would it be possible to jump around read barrier code, and then patch jumps to not jump around read barrier code? Each thread would have to invalidate code caches, but threads needn't all be stopped at once. Might involve too much code bloat though.
6:42:04
hayley
Sure. And I don't have an idea for how to end up healing, so the read barrier couldn't be disabled either.
6:43:46
moon-child
I wonder about making the barrier a function you call, and can patch out. But idk if that's better than an inline branch
6:52:14
hayley
I've confused myself at the moment. Guessing that, without deoptimisation, there still needs to be some way to handle class redefinition anyway.
7:01:33
hayley
This would be much easier with deoptimisation, but we'd still have to deoptimise any code that uses standard objects. It would suffice to handle requests to deoptimise at safepoints; that's what Stopless does. Would type inference eliminate an interesting number of barriers here?
7:03:14
hayley
I'm not sure how to do self-healing either, with the barrier for standard objects being rather lazy. It's not easy to maintain that the mutator only ever sees up-to-date pointers, and we'd need a barrier before EQ too.
8:23:40
hayley
moon-child: We'd only need to de-optimise to make EQ work, right? There's otherwise no need for other threads to find non-obsolete versions of objects, as it's a race, and we only need to preserve object identity when racing.
9:39:57
jcowan
The fact that everything is a standard object doesn't mean that everything is an instance of a standard class. We can say that elements of class CONS can't have their class changed, for example.
10:30:37
hayley
I came up with a read barrier which would eliminate the double indirection entirely, but on the other hand, it's a read barrier, and I'm not sure if it's a great idea to fix all pointers in the heap, even if it's done concurrently and helped by background threads too. Also not sure how two concurrent redefinitions might interact with each other, as one might need to undo progress of the other.
10:32:55
hayley
It would also provide for "free" concurrent compacting; but I'm not sure how garbage collection might interact with redefinition too, as a concurrent collection using the same barrier and metadata could undo the progress of redefinition, and vice versa.
10:39:19
hayley
Suppose each pointer (n.b. pointer, not object) has another tag "visited" or "not visited". We redefine a class, and begin visiting every pointer. Before we finish, however, we redefine another class. As the last visit didn't consider the latter redefinition, we need to un-visit all pointers, before we can start visiting again.
10:40:32
hayley
We can't correctly do multiple concurrent visits, in other words. This seems especially nasty when garbage collection is involved.
10:42:21
moon-child
if I stack up an infinite stack of redefinitions, you'll never finish any of them
10:43:08
hayley
If not much of the heap was scanned, it might be faster to un-visit and start over, but one shouldn't do that too many times.
10:44:27
moon-child
oh, yeah, cus the max time you can stall the gc for is the time it takes to process a single redefinition. So probably no danger of its getting too far behind the mutator
10:46:49
hayley
It'd still be tricky to get each thread to de-optimise without the kind of de-optimisation associated with JIT compilers though. Safepoints, of course, but we still need to enter de-optimised code, or code optimised for new layouts.