freenode/#lisp - IRC Chatlog
22:58:58
fiddlerwoaroof
I vaguely remember someone talking about loading every system distributed with Quicklisp into a single image as a sanity check of sorts
23:31:18
fiddlerwoaroof
Xach: makes sense, I'd be interested in a "stable" dist that only accepts pure lisp packages (no FFI) that can be loaded together
23:32:20
fiddlerwoaroof
I've occasionally tried to figure out how to host my own dist, for reasons, but never really seriously enough to have anything to show
23:33:26
Xach
fiddlerwoaroof: i had hoped that dists would be very common, with people hosting lispworks-only software, or other thematic dists, but a combination between a lack of interest and a lack of documentation and probably other factors has made it not happen yet
23:45:53
charles`
I would think if you were a company writing internal libraries you would want to host your own dist for those.
23:48:34
aeth
fiddlerwoaroof: there is a distinction... outside of systems with X Windows (where CLX exists and can use the protocol), you can't do anything graphical without some degree of FFI
23:49:01
aeth
But if someone made a graphical toolkit on top of just the OSes themselves, then it would be useful.
23:51:12
aeth
Xach: Sorry, I'm unclear, I mean zero distributed foreign dependencies. So if someone wants to just wrap the WinAPI, then that should be OK, to complement something like CLX, but OS-agnostic.
23:51:51
aeth
As opposed to something like cl-sdl2 where you have to have SDL2, a giant C dependency, at some point.
23:52:51
Xach
Oh. Well, I'm thinking of users with semi-exotic platforms, where binding to some "it's installed everywhere! (if you use linux/windows/macos)" is a failure
7:41:55
fiddlerwoaroof
Xach/aeth: yeah, my goal would be to include things like Alexandria that should reasonably be expected to be usable anywhere
7:42:59
fiddlerwoaroof
quicklisp or ultralisp would be used for things like clim that require functionality beyond the standard
7:45:15
aeth
well, more clim backends (that don't currently exist?) than clim, since clim itself would be fine
9:14:14
ludston
Is there a quick and dirty way in CLOS to wrap a struct with another struct that proxies through all of the accessors on the wrapped struct?
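[One quick-and-dirty answer, as a sketch: if the wrapped object is a CLOS class (whose accessors are generic functions), a macro can stamp out delegating methods. All names here are illustrative; note this does not work directly for `defstruct` accessors, which are ordinary functions rather than generic ones.]

```lisp
(defclass point ()
  ((x :initarg :x :accessor point-x)
   (y :initarg :y :accessor point-y)))

(defclass point-proxy ()
  ((wrapped :initarg :wrapped :reader wrapped)))

;; For each listed generic accessor, define a method on the proxy class
;; that forwards to the wrapped instance.
(defmacro define-delegates (proxy-class wrapped-reader &rest accessors)
  `(progn
     ,@(loop for acc in accessors
             collect `(defmethod ,acc ((obj ,proxy-class))
                        (,acc (,wrapped-reader obj))))))

(define-delegates point-proxy wrapped point-x point-y)

;; (point-x (make-instance 'point-proxy
;;                         :wrapped (make-instance 'point :x 1 :y 2))) => 1
```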
9:15:06
jeosol
beach: I am reading your paper. The template used is the suggested one for ELS. Do you have a link to the template handy? I think I have seen slightly different designs.
9:19:37
beach
jeosol: I just copy my old papers and modify them. It is entirely possible that my template is out of date.
9:19:39
jeosol
I am taken by the comment in the abstract that the presence of optional and/or keyword arguments impacts function call performance
9:20:08
jeosol
oh ok. I recall having seen a different format. No worries, I'd pick one from the site and create a base template folder
9:22:51
jeosol
by "taken by", I mean I understand how bad design may reduce performance, and that certain styles are better. I'd have to read everything to get the full gist
9:24:41
beach
I don't know how much you know about the design of a typical Common Lisp system, but keyword parameters must typically be parsed by the callee on each call. The rules are complicated. For example, the same keyword argument may occur more than once, and it's the first one that counts.
9:25:37
beach
And if :ALLOW-OTHER-KEYS <true> occurs somewhere in the argument list, then no error should be signaled for unrecognized keyword/argument pairs.
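[Both rules beach mentions are easy to demonstrate at the REPL:]

```lisp
;; Duplicate keywords: the leftmost occurrence wins.
(funcall (lambda (&key a) a) :a 1 :a 2)
;; => 1

;; A true :ALLOW-OTHER-KEYS anywhere in the argument list suppresses
;; the error for unrecognized keyword/argument pairs.
(funcall (lambda (&key a) a) :a 1 :b 2 :allow-other-keys t)
;; => 1  (without :allow-other-keys t, :b would signal an error)
```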
9:25:43
jeosol
No, I am not a compiler guy, at all, I am mostly application focused, i.e., using the language. I have only started getting deep in the internals as I try to improve performance and get better design
9:26:26
beach
Well, as the paper says, compiler macros are often used to avoid this parsing in many cases.
9:28:11
beach
Keyword arguments are very flexible, but if you do it naively, then they are very slow.
9:28:43
beach
But the technique in the paper basically creates an automatic compiler macro for each call site.
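[The hand-written compiler-macro version of this idea looks roughly like the following sketch (function names are made up): when the call site uses keywords in a known, literal shape, rewrite it into a positional call and skip runtime keyword parsing entirely. beach's paper automates this per call site.]

```lisp
(defun %area (width height)
  (* width height))

(defun area (&key (width 1) (height 1))
  (%area width height))

;; Rewrite (area :width W :height H) into (%area W H) at compile time;
;; any other call shape falls through to the normal keyword-parsing call.
(define-compiler-macro area (&whole form &rest args)
  (if (and (= (length args) 4)
           (eq (first args) :width)
           (eq (third args) :height))
      `(%area ,(second args) ,(fourth args))
      form))
```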
9:29:49
jeosol
That's the part I probably need to improve. I have an object with many slots, and in my first iteration I had a constructor with more keywords than slots, as I have to pass other arguments, process them, and create objects that are then passed into make-instance. I could probably make things better
9:31:47
ludston
jeosol: If you're gunning for efficient code, your constraint is more likely to be the garbage collector than function/method dispatch speed in my experience
9:31:55
beach
Also, in the naive case, the callee must check whether a certain argument was supplied at all, and if not, execute the initform for the corresponding parameter. That's another test, which is expensive these days. With my technique, that test can often be skipped, because it is often clear from the call site whether the argument is given or not. Barring APPLY, of course.
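[The supplied-p test and initform execution are observable directly (illustrative names):]

```lisp
(defun area-of (&key (size (progn (format t "running initform~%") 10)
                           size-supplied-p))
  (values size size-supplied-p))

(area-of)          ; prints "running initform", returns 10, NIL
(area-of :size 5)  ; initform never runs, returns 5, T
```

At a call site like `(area-of :size 5)` it is statically clear that `:size` was supplied, which is why the per-call-site technique can drop both the test and the initform.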
9:32:51
jeosol
I have a case for an optimization that is doing table lookups for the function calls, and I feel it could be faster.
9:34:15
jeosol
beach: I think the generic dispatch is okay. The most I have is dispatch on two classes (multiple dispatch cases).
9:34:45
ludston
jeosol: I recommend you don't worry about how fast it is until it's going to be (or is) a problem, and then use statistical profiling techniques to make sure you know exactly what the bottleneck is
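[On SBCL, the statistical profiler ludston is referring to is `sb-sprof`; a minimal session looks like this (the workload function is hypothetical):]

```lisp
(require :sb-sprof)

;; Sample the program counter while the workload runs, then print a
;; flat report of where the time actually went.
(sb-sprof:with-profiling (:max-samples 10000 :report :flat)
  (my-hot-function))  ; <- stand-in for your real workload
```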
9:35:31
beach
PCL generic dispatch is not that great, which is why some SBCL users avoid generic functions. I think that's a pity, because it's an implementation detail that may change.
9:35:56
jeosol
I am not too worried per se, because in the practical application, calls to another application are the main bottleneck and can't be optimized
9:36:48
frodef
beach: on the other hand, an abstraction that is avoided because of its implementation, has a problem one way or the other.
9:36:49
jeosol
beach: really? My application is CLOS heavy. It runs well, though, but I haven't compared it to anything else. It was just easier to build the class hierarchies that way
9:37:44
beach
frodef: In this case, the only problem seems to be that implementations were conceived at a time when memory was as fast as register operations.
9:38:15
jeosol
it will be nice (perhaps in the future) to get some documentation of these kinds of design considerations and why some are better than others, e.g., examples like adding to a list
9:39:36
beach
jeosol: I think you are going to need SICL, once we have implemented all those techniques we came up with. :)
9:39:44
frodef
beach: yes, if the abstraction can only be implemented in ways that are (prohibitively) expensive.
9:40:35
beach
frodef: In this case, the ONLY problem seems to be that IMPLEMENTATIONS were conceived at a time when memory was as fast as register operations.
9:41:09
jeosol
beach: that will be nice, and I'd have a clear benchmark - my application is heavy with number crunching and it seems these additions will/should show differences with base SBCL
9:41:33
ludston
frodef: I agree with beach, long-term it is not a problem that generic dispatch is inefficient in current implementations, and short-term, it is only a problem if you are CPU-constrained... Which you almost never are these days
9:44:06
beach
scymtym has an adaptation of it for SBCL, but it seems it will never make it to the SBCL code base.
9:44:55
beach
It's what the cool kids (Bike, drmeister, karlosz, etc.) call my technique from that paper.
9:46:24
beach
frodef: I don't know the details, and scymtym was a bit vague about it. But the technique depends somewhat on how SICL represents standard objects, so a fair amount of restructuring of SBCL would be needed to take full advantage of it.
9:47:55
frodef
in general, imho the "almost nothing is CPU-bound" attitude is fine for applications, not so much for the language/runtime.
9:48:46
beach
Here is another thing you may want to catch up with: http://metamodular.com/SICL/path-replication.pdf
9:49:22
ludston
If you inline dispatch, then add a new method that is more specific than something already inlined somewhere, don't you have to go back everywhere you inlined it and change it?
9:54:46
beach
ludston: There is a similar technique (but much less efficient) used by CMUCL and SBCL for MAKE-INSTANCE in particular. The call site is replaced by a call to a new funcallable instance that works only for the particular argument list of that call site. Then when the callee changes, the funcallable instance function is updated accordingly.
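[The funcallable-instance mechanism beach describes can be sketched with the MOP (SBCL-specific `sb-mop` symbols here; class and function names are illustrative): the call site funcalls an instance whose function can be swapped when the callee changes, without patching the call site itself.]

```lisp
(defclass call-site-cache ()
  ()
  (:metaclass sb-mop:funcallable-standard-class))

(defvar *site* (make-instance 'call-site-cache))

;; Install a version specialized for this call site's argument list.
(sb-mop:set-funcallable-instance-function
 *site* (lambda (x) (* x 2)))

(funcall *site* 21)  ; => 42

;; When the callee is redefined, just install a new function; every
;; call through *site* picks it up, at the cost of one indirection.
(sb-mop:set-funcallable-instance-function
 *site* (lambda (x) (+ x 1)))
```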
9:55:32
beach
But that technique involves another function call, so there will still be indirections.
9:58:32
beach
The call itself may not be a big problem, and saving on generic dispatch will help. But by respecting the call protocol, I mean putting arguments in the agreed-upon places, loading the static environment, creating the callee stack frame, etc. All that adds up.
10:02:39
frodef
btw what is the expected cost of a memory access that is a first-level cache hit (e.g. a second access to the same object) compared to a register access?
10:04:49
ludston
The ratio between a cache miss and a cache hit is like the ratio between RAM access and SSD access
10:06:37
moon-child
note newer cpus have memory renamers that will put very hot memory locations in registers
10:08:10
moon-child
stack will presumably generally be quite hot, but it's not limited to that; example there shows it indirecting rsi
10:08:27
ludston
New CPUs calculate a graph of the upcoming instructions in order to run them in parallel. A branch misprediction means you have to throw that whole graph out and start again
10:09:46
moon-child
cpus usually speculate return addresses, with high probability--also looking at stack
10:10:03
moon-child
(though shared call and data stack is somewhat harmful for security since it enables rop)
10:11:56
frodef
(btw again it seems to me there's a difference between an application developer saying "branch prediction will fix that" etc, while a run-time must be careful to limit the (undue) pressure on such CPU resources.)
10:16:30
beach
Some people disagree though. For example, stassats doesn't think that requiring two tests instead of one, for each loop over a list, is a problem. And he doesn't think registers are that much faster than the stack.
10:18:15
ludston
DAE notice that around Christmas every year, there is a surge of lisp related articles on all the programming news sites?
10:18:25
frodef
btw CPUs are becoming quite clever at optimizing... for C-type runtimes. It's fun/sad to think what the state of CPUs might have been if dynamic run-times weren't weeded out in the 80s.
10:20:43
White_Flame
there's still barely any hardware GC support (and afaik what is in there isn't even used in major languages like Java), and still no support for tagged words
10:20:48
ludston
frodef: In a direct, explicitly dynamic way: Javascript, Python and all the byte-code langs like C# and Java