freenode/#sicl - IRC Chatlog
1:55:24
no-defun-allowed
Click's choice to keep keys "pinned" in entries makes reasoning about updates much easier.
1:58:40
no-defun-allowed
For example, one thread could be updating an entry while another thread removed that entry, then replaced it with an entry with a different key but the same value. The former thread's CAS on the value would still succeed, and the new key would end up with a value intended for the old key.
2:04:27
no-defun-allowed
The "insert tombstones only when group is full" concept is also hard to apply with multiple threads, as two threads inserting could fill a group, without either having observed that it was full.
2:40:28
no-defun-allowed
In general, Click states that there's no consistent state of the table, so it is unlikely that we can determine if a group is full.
4:06:36
beach
no-defun-allowed: Some day I will understand what you just wrote. But I haven't read up yet.
4:08:26
no-defun-allowed
In short, it may not be possible to use the linear probing technique concurrently, and Click's table *depends* on a property I was trying to get rid of (that keys stick in entries without copying).
4:10:07
no-defun-allowed
So my sketch for a mostly-concurrent fast hash table is probably broken in many ways. Oh well.
4:20:47
no-defun-allowed
I thought I had an idea to avoid weird state transitions, so that (for example) an update operation and a delete operation would occur sequentially, then I realised I had re-invented the spinlock.
4:21:13
beach
Designing a good data structure can take years. It took me 30 years to design what is now Cluffer.
4:38:24
no-defun-allowed
Although, locking whenever deletion is involved is better than eventually forcing a resize, so re-inventing spinlocking isn't the worst thing one can do.
4:38:57
no-defun-allowed
(At least to me, because the throughput of my program appears to plummet while a hash table is being resized.)
4:40:46
beach
I see a multitude of hash table implementations, each one adapted to a different use case.
4:40:50
no-defun-allowed
Or, at least, that my code stores information on where to retrieve objects in concurrent hash tables, and eventually deletes an entry after the latest version was found.
4:43:43
no-defun-allowed
As mentioned yesterday, for every other use case, Cliff's table is great. But not for mine.
4:45:38
no-defun-allowed
(And any attempts to "break up" that table haven't worked, because an efficient search, in my opinion, would avoid sending requests for all but the newest versions, and the "newest version" is a global property.)
4:53:17
beach
My main mission today is to go buy food. Like last week, I want to be there when they open (in a bit more than an hour) so as to avoid as many customers as possible.
6:02:01
no-defun-allowed
The "tombstone only when group is full" part of Kulukundis's hash table would only work concurrently if we can CAS out entire groups of metadata, as that's the only way we can tell whether any other thread updated the group metadata. If we want to run with 8-element groups, then the implementation has to be able to CAS 64-bit words. If we want to run with 16 elements or more, using SIMD, then that whole length has to be CASable.
6:04:03
no-defun-allowed
AMD64 provides a 16-byte CAS (CMPXCHG16B) but nothing larger; I don't know if a larger group is a good idea anyway, as the Abseil libraries only optimise for 16-byte groups.
6:47:34
no-defun-allowed
Shinmera: I got resizing to go considerably faster by increasing the minimum number of elements copied at a time while resizing. Now Luckless is even a bit faster with my worst-case program which removes everything it puts.
7:13:44
no-defun-allowed
The only noticeable pauses with testing decentralise2 are garbage collection pauses, which are partly my own fault, for running SBCL with a large heap, and not adjusting the collection settings.
7:39:18
no-defun-allowed
Though I should go through my changes, see which were only attempting to avoid copying, and see what actually makes sense now.
7:46:42
no-defun-allowed
There's some documentation in <https://github.com/no-defun-allowed/luckless/blob/master/documentation.lisp>, but it might be too short.
8:28:37
no-defun-allowed
Oh, and I don't know what I did, but now I blew the stack in a loop with %help-copy and %put-if-match. Isn't it supposed to not do that?
9:13:14
no-defun-allowed
Maybe the hash table is having a very bad day, where some key is still too far from the first location probed, so it decides to resize the new data table that it's copying into. Copying does call back into %put-if-match...
9:54:03
no-defun-allowed
But then that should be impossible, as %put-if-match always starts resizing if we exceed the probe limit. And it doesn't crash on my laptop, so I can assume something has gone terribly wrong with compiling everything somehow.
11:08:13
beach
This is unbelievable. I reached the limit of nested "enumerate"s in LaTeX, so I looked for a solution, and found that the "enumitem" package allows an arbitrary nesting depth. But I still get an error when I go beyond 7 or so, despite having declared that I want 10 or more. I mean, who wrote this shit? We haven't programmed like that for half a century.
11:09:09
no-defun-allowed
Doesn't TeX have...a few limits on things, like counters? But if you can declare another one, then it does seem pretty stupid.
11:10:31
beach
It has several limits on things. My theoretician colleagues think Donald Knuth is a great programmer, but that's just because they have no clue about what great programming is about.
11:11:50
beach
Different cases of where lexical variables can be found and when they are needed further along, for the purpose of register allocation.
11:12:16
heisig
To assume any particular interaction with a computer doesn't require a full-featured programming language is a very popular mistake.
11:12:30
beach
So, here I am trying to come up with some serious work on register allocation, and I run into arbitrary limits of my documentation tool. Totally unbelievable.
11:12:54
heisig
It is also a horrible mistake, because not having access to a full-featured programming language ruins productivity.
11:14:09
no-defun-allowed
One irrelevant thought is that on LibreOffice, seven levels of nesting gives you just half the page width for text, so it vaguely makes sense why you would stop there. But it's still absurd.
11:14:46
heisig
I'd be in for rewriting TeX. Given the amount of TeX I produce, such a rewrite would probably amortize well before I retire.
11:15:18
Shinmera
it has a lot of other good things like proper unicode support out of the box, ttf fonts without weird kludges, etc.
11:17:09
heisig
No, even such a rewrite can't fix all the existing libraries that are totally broken because they had to be implemented in TeX.
11:19:30
heisig
On the other hand, the consistent input format is one of the main reasons why TeX got so popular in the first place.
11:20:24
Shinmera
In any case, switching engine should be as simple as https://github.com/Shinmera/talks/blob/master/els2019-shader-pipeline/paper.tex#L519-L524 supposing you're using AucTeX.
11:20:29
no-defun-allowed
One version of the indexing package would try to emit an entry for a function named %foo, and then confuse itself when reading the index file.
11:22:08
no-defun-allowed
I could almost like the Scribble notation, where one writes @foo[bar baz]{quux} which is sugar for (foo bar baz "quux"), but it doesn't handle multiple {} parts like @foo{bar}{baz}, so I can't write @defun{cons}{car cdr}.
11:24:06
beach
Shinmera: I see the TeX-engine comment, but I don't understand what program is reading it?
11:26:26
no-defun-allowed
Otherwise, passing around objects of distinct types, as in a "normal" programming language, is great. But Scribble generates either HTML or TeX, so it's at the whim of those engines, and it is very opinionated about how to emit them, like doing some stuff to flush all newlines with TeX, which messes with macro writing.
11:28:20
beach
There is a possibility that I am not using the right command to set the depth, but the documentation for the enumitem package is not terribly explicit about it.
11:35:25
jackdaniel
there is plenty of literature about the time travel topic, so there *must be* something to it! ;)