libera/#sicl - IRC Chatlog
22:11:52
sm2n
moon-child: here's some very wip code that uses it: <https://git.sr.ht/~sm2n/rapids/tree/trunk/item/rapids.lisp#L144>
22:37:09
moon-child
beach: I understood you intend to represent structure objects the same way as standard objects, and to uniformly allow redefinition and change-class on them
22:41:32
moon-child
sm2n: interesting. This almost reminds me of the co-dfns approach to parsing: start with a very rough understanding of the thing parsed, and then refine that
22:58:37
moon-child
so it does multiple passes over the input, each time gleaning a bit of information
23:19:23
hayley
jcowan: Without type inference I need another tag test before running the read barrier, which is also unsettling.
23:37:53
moon-child
I mean, presumably parse output is a tree, so you don't have to worry about having lots of pointers to the object
23:51:08
hayley
Are there any good heuristics to decide when to compact? I recall Immix would compact when the allocator couldn't make any use of one page, and LXR compacts when both RC and marking didn't free up enough space.
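The two heuristics mentioned above could be combined into one predicate. This is only a hedged sketch: `usable-hole-bytes` and all the parameter names are hypothetical, not taken from Immix or LXR.

```lisp
;; Hypothetical sketch combining the two triggers mentioned above:
;; an Immix-style test (the allocator could not use any hole on the
;; page) and an LXR-style test (reference counting plus marking did
;; not free enough space).
(defun should-compact-p (page rc-freed-bytes mark-freed-bytes target-bytes)
  (or (zerop (usable-hole-bytes page))          ; Immix-style: page unusable
      (< (+ rc-freed-bytes mark-freed-bytes)    ; LXR-style: reclamation
         target-bytes)))                        ; fell short of the target
```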
23:55:04
hayley
And I am a bit worried about how to copy a page when each (128 byte) line has its own generation, without making fragmentation worse. It might be okay to just copy one generation at a time, since the allocation rate will be higher for younger generations, and thus fragmentation will get worse faster for them.
23:55:50
moon-child
the object is only pointed to from one place. So, assuming you know what that place is, you can, instead of changing the existing object, make a new one, and change the pointer to point to it instead
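The repoint-instead-of-mutate idea could look like the following. This is a sketch under loud assumptions: `shared-slot-names` and the reader/writer protocol are invented for illustration and are not from rapids.lisp.

```lisp
;; Hypothetical sketch: instead of CHANGE-CLASS on the child, build a
;; refined object and overwrite the single parent slot pointing at it.
;; SHARED-SLOT-NAMES is an assumed helper returning the slot names the
;; two classes have in common.
(defun refine-child (parent slot-reader slot-writer refined-class)
  (let* ((old (funcall slot-reader parent))
         (new (make-instance refined-class)))
    ;; Carry over whatever slots both classes share.
    (dolist (slot-name (shared-slot-names old new))
      (setf (slot-value new slot-name) (slot-value old slot-name)))
    (funcall slot-writer new parent)    ; repoint: PARENT now holds NEW
    new))
```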
0:05:15
sm2n
to do that I'd have to pull a bunch of data out of the old class to put into the new class
0:06:47
moon-child
and change-class doesn't do anything more than a slot-to-slot correspondence anyway (modulo update-instance-for-different-class), so
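The slot-to-slot correspondence is standard CLOS behaviour: CHANGE-CLASS retains slots with the same name, and UPDATE-INSTANCE-FOR-DIFFERENT-CLASS initializes the rest. A minimal example (class names invented for illustration):

```lisp
;; CHANGE-CLASS keeps TEXT because both classes have a slot of that
;; name; KIND is newly added and gets its :INITFORM.
(defclass rough-node ()
  ((text :initarg :text :accessor node-text)))

(defclass refined-node (rough-node)
  ((kind :initform :unknown :accessor node-kind)))

(let ((n (make-instance 'rough-node :text "foo")))
  (change-class n 'refined-node)
  (values (node-text n) (node-kind n)))  ; => "foo", :UNKNOWN
```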
0:19:35
hayley
The LXR paper makes no mention of popular objects, which have historically been a bane for the incrementality of copying in G1 and the Train collector. Guess I should ask if I need to worry about attempting to copy those.
0:21:26
hayley
moon-child: With regards to "tail recursion" when tracing, the MMTk chat mentions "When scanning this [large] list [in one benchmark], you normally have to recursively generate 60000 process edge work packets, and each packet contains only one edge. Currently for LXR, I addressed this by processing the list in a loop, without creating any work packets, or pushing to any work queues."
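The loop-over-the-spine idea could be sketched as follows; `marked-p`, `set-mark`, and the mark-stack representation are assumptions, not MMTk or LXR API.

```lisp
;; Hypothetical sketch: trace a cons chain by following CDRs in a
;; loop, rather than pushing one work packet per edge.  Only the CARs
;; go onto the mark stack; the spine is consumed iteratively.
(defun trace-list-spine (head mark-stack)
  (loop for cell = head then (cdr cell)
        while (and (consp cell)
                   (not (marked-p cell)))  ; stop at an already-traced tail
        do (set-mark cell)
           (vector-push-extend (car cell) mark-stack)))
```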
0:28:57
hayley
SBCL already has that sort of tail recursion when copying, though I think it's intended to get the list laid out contiguously in memory, more than to avoid checking the grey set more often.
2:19:15
hayley
Seems to work now, but I have to get home to test performance. And my ability to create GC bugs is still quite good.
3:10:57
beach
jcowan: You don't need to check that a rack is obsolete. This is done automatically by the generic-dispatch mechanism. Every generic function applied to an object with an obsolete rack will fail to find an applicable method. The technique is documented in my paper on generic dispatch from ILC 2014.
3:13:55
beach
moon-child: Yes, structure objects are standard objects. I don't know whether I said anything about allowing change-class on them. I think I haven't said anything because I don't care much about structure objects, but it seems reasonable to allow. However that's not what you said. You said that all classes that are not built-in classes are standard classes, and that's very different.
3:30:10
hayley
I asked the authors if the LXR collector had to handle popular objects specially when compacting. They said that they have to fix fewer pointers than if they only reclaimed memory by copying, so it's not an issue. Which is a good sign, but I'll have to check, and I'll have to add some sort of pause time prediction when I implement compaction.
3:41:18
hayley
At the moment the serial mark-region collector makes my parallel program "only" 13% slower; I'm not sure what I've missed to make it faster without using parallelism.