freenode/#clasp - IRC Chatlog
2:55:42
drmeister
::notify Bike So - it turns out that our SharedMutex had a flaw. I don't know how any of the multithreaded code was working.
2:56:36
drmeister
::notify Bike The thread sanitizer showed the problem immediately. I added a new implementation of SharedMutex and I'm testing it now.
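For context, ThreadSanitizer is a compiler feature rather than a separate tool; a minimal invocation looks roughly like the following (the file and binary names are placeholders, not Clasp's actual build commands):

```shell
# Build with ThreadSanitizer instrumentation (works with Clang or GCC).
# -O1/-O2 plus -g keeps the instrumented binary fast enough to be usable
# while still giving readable stack traces in reports.
clang++ -fsanitize=thread -O1 -g mutex_test.cpp -o mutex_test

# Data races and lock-order problems are reported at runtime, as the
# offending interleavings actually happen.
./mutex_test
```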
3:02:53
no-defun-allowed
Looking at the source code, is SharedMutex a readers-writer lock or something?
3:09:34
drmeister
This https://github.com/clasp-developers/clasp/blob/master/include/clasp/core/mpPackage.fwd.h#L169
3:10:28
no-defun-allowed
I think so too. Shouldn't it be, say, 1 to lock g with at least one reader?
3:11:06
drmeister
But even with 1 - thread sanitizer doesn't like it for a different reason. Something about cyclic locks.
3:11:13
drmeister
So I've switched to this: https://www.codeproject.com/Articles/1183423/We-Make-a-std-shared-mutex-10-Times-Faster
3:11:39
no-defun-allowed
I wanted to implement an r-w lock for my networking code, but there weren't enough read-only parts for it to actually help; I still vaguely remember how it works, though.
3:11:44
drmeister
It's a long article, but it promises something I didn't know was possible: a shared mutex that doesn't suffer from false sharing, up to a point.
3:14:38
Colleen
Bike: karlosz said at 2020.11.21 04:35:34: i changed CST-to-AST so that there should be only the lexical variable associated with the rest argument, I'm pretty sure
3:14:38
Colleen
Bike: drmeister said 20 hours, 54 minutes ago: I hit something in the concrete-syntax-tree code. When compiling babel gbk-map.lisp in the add-atoms function - the local function traverse calls itself in a deeply recursive way. 16000 times at least.
3:14:38
Colleen
Bike: drmeister said 20 hours, 50 minutes ago: It's easy enough to fix with ulimit -s 32768 but should we do anything about it?
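The `ulimit` workaround mentioned in that message only affects the current shell and its children, so a build script would need to set it explicitly (32768 is the value from the message, in KiB):

```shell
# Raise the soft stack limit to 32 MiB (the -s value is in KiB) for this
# shell and anything it launches; the hard limit caps how high it can go.
ulimit -s 32768

# Verify the new soft limit.
ulimit -s
```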
3:14:38
Colleen
Bike: drmeister said 18 minutes, 56 seconds ago: So - it turns out that our SharedMutex had a flaw. I don't know how any of the multithreaded code was working.
3:14:38
Colleen
Bike: drmeister said 18 minutes, 2 seconds ago: The thread sanitizer showed the problem immediately. I added a new implementation of SharedMutex and I'm testing it now.
3:18:14
Bike
https://irclog.tymoon.eu/freenode/%23clasp?around=1575988711#1575988711 something like this
3:25:30
no-defun-allowed
https://github.com/google/sanitizers/issues/814 is still open - the cyclic locks /may/ be a false positive.
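If the cyclic-lock report really is a false positive, ThreadSanitizer can be told to ignore that class of report with a suppressions file; `deadlock` is TSan's suppression type for lock-order-inversion reports, but the symbol and binary names below are placeholders, not actual Clasp names:

```shell
# Create a suppressions file; each line is <report-type>:<symbol-pattern>.
cat > tsan.supp <<'EOF'
deadlock:SharedMutex_PlaceholderFunction
EOF

# Point TSan at it when running the instrumented binary.
TSAN_OPTIONS="suppressions=tsan.supp" ./instrumented_binary
```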
3:30:27
no-defun-allowed
It looks just like the implementation on Wikipedia, so if that has a cycle... but a faster lock would be nice.
3:36:49
drmeister
no-defun-allowed: Thanks for that - that's a good find. So the cyclic locks are probably a false positive. I like the idea of avoiding false sharing with the shared mutex.
3:38:14
no-defun-allowed
What I can make out is interesting - caches and non-CAS atomics are still above my pay grade.
3:42:17
drmeister
After changing the SharedMutex implementation I don't get any more thread sanitizer alerts from the C++ code.
5:32:20
drmeister
I'm going to turn compile-file-parallel on again. The broken SharedMutex implementation put us in the "How the hell did anything work?" territory.
5:33:00
drmeister
Now I've got a better implementation and I don't have a good way to test for problems other than to turn the thing on again.
8:32:15
no-defun-allowed
I'm trying to build cmps for some reason; currently there's an error in clasp/main/clasp_gc.cc, as TheNextBignum_O isn't defined but the code tries to take sizeof and offsetof of it.
8:53:51
no-defun-allowed
"../../src/gctools/hardErrors.cc:46 Bad client pointer 0x7feca8f65990" Maybe not then.
9:47:08
no-defun-allowed
I figure the stamp numbers are addresses into the big table; so I tried to build cboehm to run the generation code, but that also appears to go wonky.