freenode/#sicl - IRC Chatlog
11:19:15
no-defun-allowed
Do you have an opinion on another of Berger's presentations named "Mesh: Automatically Compacting Your C++ Application's Memory" <https://www.youtube.com/watch?v=XRAP3lBivYM>?
11:23:30
no-defun-allowed
Near the end there are some graphs that suggest there were some memory savings by using an interesting sort of compacting, but I've heard you say that fragmentation isn't a big problem with a good allocator.
11:48:04
beach
I know they worked on compacting memory at the time, but it was not to avoid fragmentation then. It was in order to avoid paging to disk.
11:48:41
no-defun-allowed
I think it is interesting (using some randomness and virtual memory to merge physical pages without manipulating pointers), but I was wondering what you thought of the occurrence of fragmentation to begin with.
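The meshing idea mentioned here can be sketched in miniature: Mesh randomizes where allocations land within a page, so two pages whose live slots happen not to overlap can be merged into a single physical page. The sketch below is illustrative Python with invented names, not Mesh's actual mechanism (which remaps virtual pages onto one physical frame so pointers stay valid); it only shows the disjoint-occupancy check and the merge:

```python
PAGE_SLOTS = 8

def can_mesh(occ_a, occ_b):
    """Two pages can be meshed when no slot is live in both."""
    return not any(a and b for a, b in zip(occ_a, occ_b))

def mesh(page_a, occ_a, page_b, occ_b):
    """Copy page_b's live slots into page_a; page_b's frame can then be
    released. (Real Mesh instead remaps both virtual pages onto one
    physical page, so no pointers need to be updated.)"""
    for i in range(PAGE_SLOTS):
        if occ_b[i]:
            page_a[i] = page_b[i]
            occ_a[i] = True
    return page_a, occ_a

# Two half-empty pages whose live slots do not collide:
occ_a = [True, False, True, False, False, False, False, False]
occ_b = [False, True, False, False, True, False, False, False]
page_a = [10, None, 30, None, None, None, None, None]
page_b = [None, 20, None, None, 50, None, None, None]

assert can_mesh(occ_a, occ_b)
merged, merged_occ = mesh(page_a, occ_a, page_b, occ_b)
assert merged == [10, 20, 30, None, 50, None, None, None]
```

The randomized slot placement is what makes disjoint occupancy likely in practice; without it, two pages allocated by the same code path would tend to have their live slots in the same positions.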
11:50:41
beach
Well, Berger was part of the group that showed experimentally that fragmentation does not happen for programs that do useful things, though maybe not one of the authors of the paper they produced.
11:51:28
beach
I don't remember how much I have said about fragmentation, and the reason for my opinion about it, in the past.
11:53:05
no-defun-allowed
Firefox, Redis (a database), and a Ruby program were given as examples, which appear to do useful things (except that Firefox was tested with a benchmark program, and I don't know how the other two were tested.)
11:55:01
no-defun-allowed
No pressure though, I don't want to be distracting. Just wondered if there was something I might not have considered.
16:09:12
beach
no-defun-allowed: Oh, wow, this is bad. Berger is citing Robson, whose results Berger's mentor (Paul Wilson) squashed to pieces, because Robson (and all his pals at the time) used a random model for memory allocation and deallocation. Wilson essentially showed that this model is a good approximation only for programs that don't do anything useful.
16:13:38
mseddon
so the stochastic allocation made it look like a winner for a case that did not exist. ow.
16:14:36
mseddon
"random model of memory allocation/deallocation"- you remove everything worth optimizing.
16:16:29
beach
I see. Well, it is not about optimization this time. It is a simple matter of whether fragmentation is real or not. Robson's result is that it can be very bad indeed. But his results depend on a model that is a lousy approximation of the behavior of real programs. And that's the essence of the result by Wilson et al.
16:17:08
mseddon
right. whereas in real C programs, for example, allocation tends to follow quite a hierarchical pattern for the most part.
16:18:35
beach
The other problem with this presentation by Berger is that he mentions the memory overhead of tracing GC compared to reference counting, but he is silent about the disadvantages of reference counting in terms of performance (and the fact that it can't handle cycles).
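The cycle problem with reference counting can be seen directly in CPython, which combines reference counting with a tracing cycle collector; a minimal sketch:

```python
import gc
import weakref

class Node:
    def __init__(self):
        self.other = None

gc.disable()               # rely on pure reference counting only
a, b = Node(), Node()
a.other, b.other = b, a    # reference cycle: each node keeps the other alive
probe = weakref.ref(a)     # a weak reference does not affect the refcount
del a, b                   # refcounts never reach zero, so nothing is freed
assert probe() is not None # leaked under pure reference counting
gc.collect()               # the tracing cycle collector reclaims the cycle
assert probe() is None
gc.enable()
```

Tracing GC has no such blind spot: it reclaims the cycle as soon as it is unreachable from the roots, at the cost of the heap headroom Berger's comparison focuses on.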
16:19:03
mseddon
well, that's not true, MacOS 9 and below were an embarrassing pile of shit, but that's not heap allocation's fault.
16:19:38
beach
When people say "French people are getting fatter every year", they don't mean every single French person.
16:19:55
mseddon
pjb: right, the shortest lived generation heaps are TINY and get thrown away almost immediately.
16:21:02
beach
pjb: The basis for generational GC is the "generational hypothesis" which says that objects either die very young, or they tend to get very old.
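A toy sketch of how that hypothesis is exploited (illustrative Python, all names invented): new objects go into a small nursery, and a minor collection promotes the few survivors while discarding the many short-lived objects wholesale:

```python
class GenerationalHeap:
    """Toy two-generation heap. New objects land in the nursery; a minor
    collection keeps only objects still reachable from `roots` and
    promotes those survivors to the old generation."""

    def __init__(self):
        self.nursery = []
        self.old_gen = []

    def allocate(self, obj):
        self.nursery.append(obj)
        return obj

    def minor_collect(self, roots):
        live = [o for o in self.nursery if o in roots]
        self.old_gen.extend(live)  # the rare survivors are promoted
        self.nursery.clear()       # cheap: dead young objects just vanish

heap = GenerationalHeap()
kept = heap.allocate("long-lived")
for i in range(100):
    heap.allocate(f"temp-{i}")     # most objects die young
heap.minor_collect(roots={kept})

assert heap.old_gen == ["long-lived"]
assert heap.nursery == []
```

If the generational hypothesis holds, minor collections touch only the tiny nursery and almost everything in it is dead, which is exactly why they can be so fast.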
16:21:54
mseddon
hmm. I suppose it's not. It's really about the probability distribution of the lifetimes of objects in that case.
16:30:26
beach
By the way, Robson is my colleague. One time I gave a talk to my colleagues and cited Wilson's result, not realizing that Robson was the main author of the results based on the inaccurate model. He was nice about it, though. :)