freenode/#sicl - IRC Chatlog
5:29:00
beach
If that attitude is shared by (say) France, it means we are confined not so that we won't catch it, but so that we won't clog up the hospitals too soon.
5:29:43
beach
And then, the risk of me and my (admittedly small) family not surviving it is significant.
5:31:46
alandipert
my understanding is that the rationale for confinement is to slow the spread, so as to not overwhelm hospitals, but that over the next year, most people will have gotten it
5:34:04
alandipert
i'm ok, doing what i can to not get the virus in the next couple months. i'm in the asthma risk category myself
5:41:44
alandipert
in more pleasant news i've returned to JACL and resolved an irritating bug in my list reader. it manifested as an error when a comment preceded a closing paren
5:42:21
alandipert
...and was happening because i was stupidly parsing the stream in the list reader, without accounting for reader macros
5:43:26
alandipert
although i've also started on the DECLARE things i need in order to write the reader in lisp while preserving the async business, so that's neat
5:47:15
beach
I try very hard to cut down on the collective maintenance burden of Common Lisp code, so seeing yet another implementation-specific reader would be an indication of yet another failure of my plan.
5:48:58
alandipert
i'll look at eclector. if it doesn't work i'd be willing to bet it's not your fault, but rather related to the weirdness of JS and the fact that i don't have input streams
5:54:32
no-defun-allowed
I think implementing an input stream, or at least equivalent functions to read-char and peek-char on a string and an index number, would make reading much simpler.
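A minimal sketch (in Python, purely for illustration; the class and method names are made up) of what no-defun-allowed suggests: equivalents of READ-CHAR and PEEK-CHAR over a string and an index, so a reader can be written without real input streams.

```python
class StringStream:
    """Hypothetical stand-in for an input stream: a string plus an index."""

    def __init__(self, text):
        self.text = text
        self.index = 0

    def peek_char(self):
        # Like PEEK-CHAR: return the next character without consuming it,
        # or None at end of input.
        if self.index < len(self.text):
            return self.text[self.index]
        return None

    def read_char(self):
        # Like READ-CHAR: return the next character and advance the index.
        ch = self.peek_char()
        if ch is not None:
            self.index += 1
        return ch
```

With such a wrapper, a reader written against read-char/peek-char need not care whether the underlying source is a stream or a string.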
7:54:46
splittist
I spent a good deal of my hacking time yesterday trying to remember enough elisp for a command to turn a let into a let* . This should pay off in about 4 years...
8:14:23
heisig
splittist: There was a paper on "Pattern-Based S-Expression Rewriting in Emacs" at ELS 2019.
9:57:01
no-defun-allowed
Silly idea with regards to method dispatch: If one has access to the call counts for each method, could dispatch be made faster by placing methods that are called more frequently after fewer conditionals?
9:57:54
heisig
The CPU will do that automatically. We shouldn't reinvent branch prediction in software.
9:58:11
no-defun-allowed
From the point of view of the compiler, it would be emitting comparisons that split the total call count in half, not the class-number space itself.
10:00:39
no-defun-allowed
Strange then. I made my bytecode dispatch notably but not significantly faster by trying to prioritise defined bytecodes over undefined codes; though I did also add some other optimisations at the same time, which could also have been the cause.
10:01:34
beach
In SICL, the classes are numbered, and the dispatch code does a binary search on the methods.
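A hedged sketch (in Python, with invented names) of the scheme beach describes: each class has a number, and the discriminating function does a binary search over the sorted class numbers to find the applicable method.

```python
def make_discriminator(entries):
    """entries: list of (class_number, method) pairs, sorted by class number.
    Returns a dispatch function that binary-searches on a class number."""
    def dispatch(class_number):
        lo, hi = 0, len(entries) - 1
        while lo <= hi:
            mid = (lo + hi) // 2
            num, method = entries[mid]
            if class_number == num:
                return method
            elif class_number < num:
                hi = mid - 1
            else:
                lo = mid + 1
        # Stand-in for signalling NO-APPLICABLE-METHOD.
        raise LookupError("no applicable method")
    return dispatch
```

The search takes O(log n) comparisons in the number of distinct specializing classes, independent of call frequencies.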
10:03:47
no-defun-allowed
What would you call the "middle" values that the class numbers are being compared to?
10:05:08
jackdaniel
beach: re CLIM input streams -- I think they did a poor job of porting the specification from Silica. Multiple parts are underspecified, most notably the concept of an input buffer and the difference between basic-input-stream and extended-input-stream; that is, that they are separate classes, where the extended input stream must not inherit from the former.
10:06:04
jackdaniel
it is possible to infer missing information after careful reading, but it is not obvious -- the best proof of that is the fact that McCLIM and Franz CLIM got it wrong
10:06:56
no-defun-allowed
I would arrange for those to partition the classes in half by the number of times their respective methods are called, instead of the number of methods.
10:07:37
beach
no-defun-allowed: Sure, but what information about the argument do you use for the comparison.
10:07:50
no-defun-allowed
But I believe heisig that branch prediction does something like that, and what I observed could have been caused by other optimisations I introduced.
10:09:00
beach
no-defun-allowed: Neither the argument nor its class has information about frequency. Besides, it would be different for different generic functions.
10:11:35
no-defun-allowed
Indeed. I would either have to collect information on frequencies and recompile the discriminator function every so often to reflect those frequencies, or have the user estimate beforehand, neither of which is good.
10:13:25
beach
Now, you come in with an argument, let's say it is an instance of some class C, and let's ignore EQL specializers for the moment.
10:13:55
beach
What value do you derive from C that you can use to split your list of effective methods in 2?
10:15:00
no-defun-allowed
The list would be present at compile time (either by the user giving estimations on relative frequencies, or the discriminator function being recompiled periodically), and we would use the usual binary search technique.
10:15:58
jackdaniel
no-defun-allowed: something like an alist in the generic function? (4 . #<class foo>) ?
10:17:21
beach
no-defun-allowed: You have a list of effective method functions sorted by frequency, right?
10:18:47
no-defun-allowed
What I would change, and only what I would change, is how the midpoint numbers are chosen. It is late and I probably am not explaining myself well; could I describe an example?
10:19:04
beach
no-defun-allowed: So that means that the class numbers in that list are no longer sorted.
10:24:07
MichaelRaskin
heisig: are you saying that computed jumps work well or that depth of the conditionals tree is irrelevant? I think the idea was like a Huffman code, about the depth of conditionals.
10:24:56
no-defun-allowed
Say I had classes 1 through 4, and the method specialised on class 1 is called as many times as those for classes 2 through 4 are, combined. It would be preferable to generate (if (= class-number 1) <invoke class-1 method> <dispatch on the other methods>) instead of (if (< class-number 3) (if (= class-number 1) <invoke class-1 method> <invoke class-2 method>) ...)
10:26:17
no-defun-allowed
That is probably an unrealistic example, but I think that partitioning the methods by their calling frequency would lead to fewer comparisons on average.
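A sketch (in Python; class numbers, method names, and call counts are made up) of the idea no-defun-allowed describes, in the spirit of a Huffman code: pick each comparison's pivot so that it splits the total call count roughly in half rather than the number of methods, so hot methods sit behind fewer conditionals.

```python
def build_tree(entries):
    """entries: list of (class_number, method, call_count), sorted by
    class number. Returns either a method (leaf) or a tuple
    (pivot, left_subtree, right_subtree) meaning: if the argument's
    class number is < pivot, go left, else go right."""
    if len(entries) == 1:
        return entries[0][1]
    total = sum(count for _, _, count in entries)
    running = 0
    # Find the split point that first covers half the total call count.
    for i in range(1, len(entries)):
        running += entries[i - 1][2]
        if running * 2 >= total:
            break
    pivot = entries[i][0]  # first class number on the right-hand side
    return (pivot, build_tree(entries[:i]), build_tree(entries[i:]))

def dispatch(tree, class_number):
    # Walk the comparison tree down to a method.
    while isinstance(tree, tuple):
        pivot, left, right = tree
        tree = left if class_number < pivot else right
    return tree
```

With the example from the chat (class 1 called as often as classes 2 through 4 combined), the first comparison isolates class 1's method, so the hot path costs a single test.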
10:27:34
beach
So you win only if the most frequent (few) effective methods are called almost all the time.
10:29:24
heisig
MichaelRaskin: My suspicion (but I haven't run any benchmarks) is that optimizing the order of computed jumps is not necessary on a modern CPU.
10:30:17
heisig
And also that the depth of a tree of conditionals is irrelevant as long as certain branches are taken more frequently.
10:31:26
no-defun-allowed
Thank you for pondering it. That would be concordant with my test, because invalid bytecodes should never be executed, and I gave every valid bytecode the same weight.
10:42:27
no-defun-allowed
I have tried using CASE in SBCL, which does create jump tables, but it appears to be slower than "binary" search. Maybe I need to investigate whether the form I generate affects the table that gets produced.
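For contrast with the binary search, a rough Python illustration (invented names, not SBCL's actual mechanism) of what jump-table dispatch amounts to: index directly into a table of handlers instead of comparing, with a default for undefined codes.

```python
def make_table_dispatch(handlers, default):
    """handlers: list indexed by opcode, with None marking undefined codes.
    Returns a dispatch function with O(1) lookup, like a compiled CASE
    that became a jump table."""
    table = list(handlers)
    def dispatch(opcode):
        if 0 <= opcode < len(table) and table[opcode] is not None:
            return table[opcode]
        return default
    return dispatch
```

The lookup is constant-time, but on real hardware an indirect jump through a table can still lose to a short, well-predicted comparison tree, which may be what the SBCL measurement reflects.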
10:53:25
no-defun-allowed
I suspect there are a few ways to write the CASE form, and some won't produce optimal code.
10:59:31
MichaelRaskin
heisig: I thought it was 1 cycle per conditional even in the best case, so the depth might matter
11:12:21
beach
Maybe I am not understanding what is being said, but I think heisig is saying that the CPU has several entries for branch prediction, and that several tests in a single tree will be predicted.
11:28:50
heisig
MichaelRaskin: One cycle is ridiculously cheap, and only matters if instruction throughput is your bottleneck.
11:29:44
heisig
But typically the main problems are pipeline stalls or memory references with bad locality, not instruction throughput.