libera/#sbcl - IRC Chatlog
9:04:59
frodef
Is it possible for sbcl to go "heap exhausted" while there is lots of garbage that would have been freed by (sb-ext:gc :full t)? Or does "heap exhausted" mean there must in fact be too much live data?
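One way to probe this question at the REPL is to compare dynamic-space usage around a full collection. A minimal sketch; SB-KERNEL:DYNAMIC-USAGE is an SBCL internal and may vary between versions:

```lisp
;; Measure how many bytes a full GC actually reclaims.
;; SB-KERNEL:DYNAMIC-USAGE is internal; treat its use as an assumption.
(defun full-gc-effect ()
  (let ((before (sb-kernel:dynamic-usage)))
    (sb-ext:gc :full t)
    (let ((after (sb-kernel:dynamic-usage)))
      (format t "~&before: ~:D bytes, after: ~:D bytes, freed: ~:D bytes~%"
              before after (- before after))
      (values before after))))
```

If the heap exhausts at a point where this would have reported a large "freed" number, the garbage was collectible but the collector never got the chance.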
9:10:52
hayley
But I don't know if that memory is really reclaimed per se, i.e. whether it can be reused without another normal moving collection.
9:14:18
hayley
Right, but frodef refers to a full collection fixing things. Though maybe I forgot the semantics of :full t; I swear it got changed recently-ish.
9:16:41
hayley
Someone please correct me if I have the facts wrong; my memory is that SBCL will reserve adequate space to always be able to complete a nursery collection, more or less, but not necessarily enough for a more major collection.
9:20:04
hayley
Doug and I had spoken on sbcl-devel about triggering GC when a thread fails to allocate; he said he tried it, but the GC would then fail to copy. So it seems the proper approach to avoid heap exhaustion is to have enough (read: half the) space reserved in case of doing a major GC, which no one really wants.
9:21:39
hayley
My assessment might be biased by having a non-moving GC kinda working; though that will also fail to trigger GC in time, if the fragmentation it doesn't consider is too bad.
9:26:49
hayley
Which reminds me to ask - I'm still not sure how to go about making immobile space work with the new GC. Doug suggested using my collector to manage the immobile space, but I still have no way to defragment (even for S-L-A-D) and I have to wonder what goes wrong if I don't otherwise look like an immobile space (e.g. by having generation bits in the header).
9:43:40
frodef
I'm not doing anything more "exotic" than serving SSL/HTTPS (with Hunchentoot), I believe.
10:43:08
stassats
i'm skeptical of some magic gc schemes to avoid heap exhaustion, because instead of "why did my sbcl crash" we'll get "why is my sbcl slow randomly"
10:45:08
frodef
stassats: hm... that's kind of the whole point and problem with GC generally, isn't it :)
10:50:42
hayley
stassats: I heard Java throws an OutOfMemoryError ("GC overhead limit exceeded") if it deems you spend too long in GC.
11:03:02
hayley
I had also implemented a "panic mode" which would do more major collections when the heap was tight; experimentally it avoided heap exhaustion often, and was never needed more than once in a row.
11:47:40
hayley
Though, I don't know how to trigger panic mode with a copying GC, without requiring a painfully large reserve.
11:49:50
stassats
if you just want to avoid crashing, a lot of things can be done - a lot of slow things
11:50:28
hayley
But there was a paper on a GC which would copy, then fall back to in-place if it ran out of reserve.
11:51:32
stassats
instead of forwarding pointers a hash-table could be used and fully evacuated pages could be reused
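That scheme can be sketched in miniature: keep the old-to-new mapping out of band in a table instead of writing forwarding pointers into evacuated objects, so a fully evacuated page carries no GC state and can be reused immediately. A toy illustration, not SBCL's actual GC:

```lisp
;; Out-of-band forwarding: old object -> new copy, keyed by identity.
(defvar *forwarding* (make-hash-table :test #'eq))

(defun forward! (old new)
  "Record that OLD has been evacuated to NEW."
  (setf (gethash old *forwarding*) new))

(defun resolve (obj)
  "Follow the forwarding table; unforwarded objects map to themselves."
  (gethash obj *forwarding* obj))
```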
11:53:05
frodef
My (server) application has a background thread that every 10 minutes downloads a bunch of new objects. Is there some mechanism for me to say something akin to "in this dynamic scope, put all allocations in one nursery, then promote live objects to a semi-permanent generation upon exit of the scope"?
11:53:22
hayley
That sounds a bit like Pauseless, but I don't see how it helps in a STW context...ah, if we can fit the hash table for multiple pages on one page, yielding free space during GC?
11:54:24
hayley
I don't believe so, and in general it'd require some magic to be able to promote part of a generation in place, as the cards would have to be finessed.
11:57:25
hayley
Another thing is that Pauseless does a mark phase before copying, so it'd already know which objects are dead and could use smaller forwarding pointer tables.
12:01:40
hayley
I hear that can be done in parallel easily, or concurrently too (without a read barrier).
12:10:47
hayley
Speaking of moving large objects, I'm slowly contemplating a final compaction algorithm for my GC. Defrag small objects as usual, accumulating them all at the start of the heap, then use an external table to store forwarding pointers for large objects and slide them down the heap. Can this create holes in the heap somehow?
12:12:26
hayley
My current line of thought is that if sliding a large object exposes a hole in the heap, we did not correctly move all the small objects, as those objects should occupy the new location of the large object.
13:35:16
Krystof
frodef: the new, draft arena allocation thing feels like it might be useful in that use case. It doesn't do exactly what you said, but it does allow you to have essentially your own ephemeral heap for certain allocations, as long as you know what will need to be promoted at the end
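That description suggests usage roughly like the following. This is a hedged sketch only: the arena facility is a draft, the names SB-VM:NEW-ARENA and SB-VM:WITH-ARENA are assumptions that may differ or require a specially built SBCL, and DOWNLOAD-ONE and KEEP are hypothetical application functions:

```lisp
;; Hypothetical use of the draft arena API for the background-download
;; case: all scratch allocation goes to the arena, and only the objects
;; we decide to keep are copied back to the ordinary heap at the end.
(defvar *scratch-arena* (sb-vm:new-arena (* 64 1024 1024))) ; 64 MiB

(defun refresh-objects (urls)
  (sb-vm:with-arena (*scratch-arena*)
    (let ((parsed (mapcar #'download-one urls)))  ; hypothetical
      ;; Promotion is explicit: copy the survivors off the arena.
      (mapcar #'keep parsed))))                   ; hypothetical
```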
13:37:10
frodef
Krystof: hi! In what sense would I need to know what will need to be promoted? An explicit set of "roots"?
13:37:36
Krystof
an explicit set of roots, or maybe all instances of a particular class, or something
13:37:57
Krystof
I know less than what is in those notes files. I'll let dougk know that you might have a use for it
13:39:09
Krystof
we definitely need help in working out whether what we've got is close to other people's use cases. (We believe it's close to QPX's)
13:41:13
Shinmera
They potentially solve an issue I have in my vector/matrix library code, wherein some other operation creates a temporary vec that only has dynamic extent, but because that operation creates the vec it cannot be stack-allocated
13:42:10
phoe
yeah, to make it work you'd need to explicitly create the vec yourself, DX it, and pass it into that operation so it can be mutated - but that leads to ugly code
13:42:52
Shinmera
I have some ideas on how to macrofy that to make it less ugly (a sort of cheating with-vec let that turns the "pure" operation into an impure one with the stack-allocated vec) but it's still annoying.
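That macrofication can be sketched as a small macro (names hypothetical; relies on SBCL honoring the DYNAMIC-EXTENT declaration for the fresh vector, and V+! stands in for an assumed destructive variant of the pure operation):

```lisp
;; Hypothetical WITH-VEC: bind VAR to a stack-allocatable 3-vector for
;; the extent of BODY, so destructive operations can write into it
;; instead of consing a fresh result.
(defmacro with-vec ((var &optional (x 0f0) (y 0f0) (z 0f0)) &body body)
  `(let ((,var (make-array 3 :element-type 'single-float
                             :initial-contents (list ,x ,y ,z))))
     (declare (dynamic-extent ,var))
     ,@body))

;; Usage sketch: (with-vec (tmp) (v+! tmp a b) (vlength tmp))
;; where V+! is the assumed impure variant of a pure V+.
```

Returning the vector itself from BODY would be undefined behavior, since it may be stack-allocated; only results derived from it should escape.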
13:43:01
jackdaniel
the next extension will be called drones - erlang-like lightweight processes that have access only to their arena heap (not to the global heap)
13:52:56
drakonis
erlang-like lightweight processes would be unironically excellent to have, provided they don't have some of the limitations that erlang has, like the atom table having a fixed size that isn't ever garbage collected
14:01:50
drakonis
and they bring down the whole cluster if you try to write code that generates atoms dynamically
14:02:35
phoe
for that, use tuples with some constant atom parts and "movable" GCable other parts instead.