freenode/#sicl - IRC Chatlog
11:55:15
heisig
Dammit that is the same justification that the arms industry uses. Maybe I should just shut up.
13:51:33
Bike
i figure most people would stick with locks, and then of the fewer people using atomics most would use sequential consistency (which is the default), and then a few Real Hackers like heisig can use the other stuff
13:52:03
Bike
and in the meantime the implementation makes as much as possible unordered even without the programmer saying so, so that if they make a mistake they just get weird results instead of crashing
13:55:40
Bike
the C++ thing i'm leaving out is memory_order_consume, which as far as i can tell has been in the C++ standard for ten years without anyone implementing it properly or even understanding how it works
14:03:32
Bike
but yeah, i don't know, they're definitely partly involved. one of these papers has torvalds as a contributor
15:24:56
Bike
ok, next question. SBCL defines an operator atomic-update that uses CAS to perform an arbitrary update on a place, i.e. it does an atomic read-modify-write with some function and arguments you pass in. it does that with a CAS loop. The question is: in sbcl's implementation, the update function form and the extra arguments can be evaluated multiple times. is that sensible?
15:25:12
Bike
like, you can do atomic-push (though that has its own definition) as (atomic-update place #'cons t)
15:25:33
Bike
and the #'cons and t forms might be evaluated multiple times. this is apparently intentional but i'm not sure i understand why
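A minimal Python sketch (not SBCL's actual code; `Cell` and its lock-based `cas` are stand-ins for a hardware compare-and-swap) of the CAS retry loop behind an atomic-update-style operator. It shows why multiple evaluation can be intentional: the whole read-modify-CAS sequence re-runs on contention, and in SBCL's macro the function and argument forms sit inside that loop.

```python
import threading

# Stand-in for a mutable place with a compare-and-swap primitive.
# The lock here simulates the atomicity a real CAS instruction gives.
class Cell:
    def __init__(self, value):
        self._value = value
        self._lock = threading.Lock()

    def read(self):
        return self._value

    def cas(self, old, new):
        """Atomically set to `new` iff the current value is `old`."""
        with self._lock:
            if self._value == old:
                self._value = new
                return True
            return False

def atomic_update(cell, fn, *args):
    """Retry until the CAS succeeds.  fn runs once per attempt --
    and if the function and argument *forms* are spliced into this
    loop, as in SBCL's macro, they get evaluated once per attempt too."""
    while True:
        old = cell.read()
        new = fn(old, *args)
        if cell.cas(old, new):
            return new

# Usage: an atomic push, roughly (atomic-update place #'cons t).
place = Cell(())
atomic_update(place, lambda old, item: (item,) + old, "t")
```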
15:27:03
beach
Sounds like a question for karlosz or scymtym. Or perhaps you are just thinking out loud?
15:28:06
Bike
well, kind of. it would be nice if i could see some uses of atomic-update to understand what makes more sense with how it's used, but there aren't a lot of uses around. but maybe somebody here has ideas
17:31:31
Bike
a paper drmeister linked in #clasp a few days ago might be interesting. http://users.cecs.anu.edu.au/~steveb/pubs/papers/yieldpoint-ismm-2015.pdf it's a fairly detailed performance analysis of "yieldpoints", meaning the compiler inserting a go-to-GC (or go-to-profile, or etc) check on every loop backedge and function call. i'm not sure if SICL uses something like that for GC, but it does remind me of the debug
17:33:08
Bike
among other things they find that the overhead is pretty low, and some specific details of the distribution, e.g. in all their benchmarks they found that a small subset of safepoints accounted for almost all safepoint checks dynamically.
17:33:34
Bike
er, that's kind of confusingly phrased. I mean, only a small subset of the generated safepoints.
17:35:11
beach
So are they saying that most of them can be eliminated? I don't see how that could be possible.
17:36:01
Bike
i don't think they make any recommendation, but it makes sense to me. i mean, only a small subset of code is "hot", right? probably a safepoint in something like CONS is executed very frequently
17:36:45
Bike
"For example, in places where the compiler makesan assumption regarding type specialization or an inliningdecision, it inserts a group-based yieldpoint before the spe-cialized or inlined code. Whenever the run-time system no-tices that the assumption breaks, it enables that group ofyieldpoints to prohibit further execution of the code un-der false assumption. Code that reaches the enabled
17:36:51
Bike
yield-points will take a slow-path, where the run-time compiler can make amends and generate new valid code" seems relevant to call site optimization
17:38:13
Bike
ah, here we go. they do suggest that the safepoints that are executed extremely commonly could use a more performant mechanism, like patching the code (so if the safepoint isn't active there's just a nop, and if it is there's a branch)
17:38:34
Bike
which would be impracticably slow if you did it for every safepoint, since patching code ruins caches and stuff
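The patching idea can be mimicked at the level of an indirect call slot (a hypothetical sketch; in machine code the "patch" rewrites a nop into a branch in place, so the inactive case costs nothing at all):

```python
# Sketch of a patched safepoint: instead of testing a flag on every
# execution, the inactive check is conceptually a nop, and enabling
# it rewrites the code.  A mutable function slot stands in for the
# nop-to-branch patch; all names here are illustrative.
samples = []

def _inactive():
    pass  # the "nop": no flag test at all

def _active():
    samples.append("sample")  # slow path: GC handshake, profile tick, ...

hot_safepoint = _inactive

def patch_safepoint(active):
    """The runtime's 'code patch': swap what the check site runs."""
    global hot_safepoint
    hot_safepoint = _active if active else _inactive

def hot_loop(n):
    total = 0
    for i in range(n):
        hot_safepoint()   # extremely hot site: no test when inactive
        total += i
    return total
```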
17:51:12
beach
My (admittedly small) family just announced that dinner is served. I'll be back tomorrow as usual.