libera/#commonlisp - IRC Chatlog
Search
10:16:37
akater[m]
White_Flame: Dispatch is useful but it doesn't require CLOS. When only a single method is ever going to be applicable per gf call, you may just as well roll your own interface to recompiling a defun. I think I'd do that without thinking twice.
10:17:59
beach
And with generic functions, the methods don't have to be physically in the same place.
10:20:26
beach
akater[m]: PRINT-OBJECT is an example of a generic function where typically a single method is applicable for every call, but it is totally essential for application code to be able to add a method to it, without editing a global function.
10:24:44
akater[m]
beach: With print-object, it's trivial to imagine a use case for :before or :after method
10:27:21
akater[m]
All right, we seem to disagree. CL is missing a simpler inheritance-less dispatch — for legit reasons, I think (there's likely no single good enough design for this) but it is this fact that gives rise to one “Let's make all functions generic!” attempt after another. Meanwhile, people will keep writing such dispatching mechanisms (and reasonable ones) because they want expressive user-space optimization, dependent types and so on.
10:27:46
beach
I think you understood that when I mentioned PRINT-OBJECT, I was giving an example of a generic function with many methods, where typically only one is applicable, but where each method belongs to a totally separate "module", so that it is not reasonable to modify a global definition.
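A minimal sketch of the pattern beach describes: a module defines its own class and hangs its own PRINT-OBJECT method on it, never editing any global definition. (TEMPERATURE and its slot are invented names for illustration.)

```lisp
;; Hypothetical module-local class; its PRINT-OBJECT method lives with
;; the module, so no global function needs editing.
(defclass temperature ()
  ((%celsius :initarg :celsius :reader celsius)))

(defmethod print-object ((object temperature) stream)
  (print-unreadable-object (object stream :type t)
    (format stream "~D degrees C" (celsius object))))

;; (prin1-to-string (make-instance 'temperature :celsius 21))
;; => something like "#<TEMPERATURE 21 degrees C>"
```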
10:29:10
hayley
I disagree that any "user-space optimization" is any good, even if we limit ourselves to dispatching without inheritance. Merely inlining methods based on type inference with no regard for handling redefinition is definitely a step down.
10:31:25
hayley
Hot take: most of the perceived performance benefits of limiting dispatch techniques are disproven by reading up on the Self compiler. 30 years old isn't old enough to be conventional "wisdom" I guess.
10:35:46
hayley
I mean, polymorphic inline caches nail down the set of actually used effective methods. Using dependent types...is another thing which wouldn't fit in the Common Lisp standard unless you are willing to specify a dependent type system too.
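The idea behind an inline cache can be sketched in a few lines. Below is a one-entry (monomorphic) cache, the degenerate case of the polymorphic inline caches mentioned above; MAKE-CACHED-DISPATCHER and its SLOW-LOOKUP argument are invented names, not any implementation's API.

```lisp
;; Remember the last receiver class and the function chosen for it;
;; fall back to the full (slow) lookup only on a cache miss.
(defun make-cached-dispatcher (slow-lookup)
  (let ((cached-class nil)
        (cached-function nil))
    (lambda (object &rest args)
      (let ((class (class-of object)))
        (unless (eq class cached-class)   ; miss: do the slow lookup once
          (setf cached-function (funcall slow-lookup class)
                cached-class class))
        (apply cached-function object args)))))
```

A real PIC sits at the call site and holds several class/function pairs, but the cache-hit test is the same cheap EQ comparison on the receiver's class.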
10:56:10
akater[m]
beach: E.g. ones that make Common Lisp programmers migrate to Julia, or motivate them to write libraries like static-dispatch, polymorphic-functions and so on.
11:03:16
hayley
I try not to think about people who think that static "dispatch" is related to ad-hoc polymorphism.
11:04:54
hayley
polymorphic-functions looks great actually, but it requires a few compiler extensions which don't look like "user-space optimization".
11:05:44
mfiano
After quite a bit of use, I am no longer a fan of these semantic-altering method dispatch extensions.
11:07:24
hayley
PCL mentions, in the chapter on generic functions, that there is no real dispatch done on "overloaded methods" as they exist in Java and C++. They could well be sugar for separate functions (and, in fact, the Java compiler desugars overloads internally).
11:08:04
hayley
And, as there is no standard type inference algorithm for Common Lisp, which methods are chosen by type inference is more or less implementation-defined.
11:09:36
hayley
(But again, polymorphic-functions propagates type information between functions, and it doesn't mention handling function redefinition, so I assume it would not work.)
11:12:36
akater[m]
Well, my point is, people do write their own dispatch mechanisms; this is not going anywhere. And I still think method combination according to inheritance is the core of CLOS, and if it's not actually utilized, and there's no vision as to why it might be, there's no need to get CLOS involved.
11:13:33
hayley
What the Julia language does, to my knowledge, is use a parametric type system, and the compiler generates method code for each instantiation of a function type. The compiler also notably tracks dependencies and recompiles code if types and/or method definitions change.
11:13:49
mfiano
hayley: Right. At least in Julia, new code will be jitted as types change at runtime. These CL libraries alter the semantics of generic functions in this respect.
11:14:57
akater[m]
This is going to be one single point where I disagree with common CL users' wisdom. But given how underrated define-method-combination is, I don't mind it.
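For what it's worth, the short form of DEFINE-METHOD-COMBINATION is a one-liner. The MAXIMUM combination and the VERBOSITY generic function below are invented examples; only the mechanism itself is standard.

```lisp
;; Short-form method combination: the results of all applicable primary
;; methods (qualified with the combination's name) are reduced with MAX,
;; most specific first.
(define-method-combination maximum :operator max)

(defclass component () ())
(defclass noisy-component (component) ())

(defgeneric verbosity (c)
  (:method-combination maximum))

(defmethod verbosity maximum ((c component)) 1)
(defmethod verbosity maximum ((c noisy-component)) 3)

;; (verbosity (make-instance 'noisy-component)) => 3  (i.e. (max 3 1))
```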
11:18:40
mfiano
But yeah, I think we are on the same page. I can no longer think of using such libraries, as the result may differ from compile time to run time, or even implementation to implementation. It has resulted in me chasing down some hard-to-find bugs in the past.
11:20:13
hayley
I wouldn't mind more JITing in Common Lisp compilers, but there is the issue that, with some things that a JIT might speculate on, performance "ramps up" or is otherwise inconsistent.
11:20:21
mfiano
When I think of performance, the first thing I consider is programmer productivity, not efficiency of computer resources. It is, after all, why I transitioned to CL from Python a couple decades ago.
11:21:40
hayley
Though polymorphic-functions-esque "JIT"ing, wherein we just generate specialised code for instances of a polymorphic function, is definitely safe to do, as performance only drops while specialised code is generated again.
11:21:52
mfiano
Yes, I think I favor consistent execution performance over the nightmares that profiling JIT code can incur.
11:22:25
hayley
(I believe that to be the case as redefinition of functions only occurs with someone poking at it, and in that case, your typing and/or thinking speed is the limiting factor.)
11:25:22
mfiano
I will take your word for it. Redefinition and performance conflict with each other in so many places in my experience.
11:25:58
hayley
On the other hand, things like generic function caches and call-site snippets generated for generic function dispatch (aka polymorphic inline caches, again) still ramp up. But I guess the variation is small enough that people don't care.
11:27:30
mfiano
Like, take #'standard-instance-access for example, where we can instruct CLOS on how to locate slot values. We could, and I did before, write a metaclass to generate inlinable accessor functions like structs, which would allow for redefinition, but redefinition of superclasses would pose a problem for live instances.
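The accessor trick described above can be sketched with the MOP. This sketch uses SBCL's SB-MOP package (portably, one would use the closer-mop library); POINT and POINT-X are invented names. Caching the slot location is only sound while the class definition is frozen, which is exactly why sealing comes up.

```lisp
;; Look the slot's storage location up once, then read it directly,
;; bypassing SLOT-VALUE's full protocol.  SBCL-specific (SB-MOP).
(defclass point ()
  ((%x :initarg :x)))

(sb-mop:finalize-inheritance (find-class 'point))

(let ((location (sb-mop:slot-definition-location
                 (first (sb-mop:class-slots (find-class 'point))))))
  (defun point-x (instance)
    (sb-mop:standard-instance-access instance location)))

;; Redefining POINT (or a superclass) can silently invalidate LOCATION;
;; a sealing metaclass is what makes caching it sound.
```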
11:33:28
mfiano
For that to work, you would have to severely constrain the user by enforcing the sealing of classes or some such, which is what I did, but it made sense since superclasses were implementation details in the package.
11:35:19
mfiano
and I had to present the class definition in a macro to prevent users from defining derivations of this optimized type.
11:35:32
hayley
Redefinition could be handled on a path which handles type errors, provided you check types.
11:36:14
hayley
The slow path would be used until the function is re-compiled, either by the system or by the programmer.
11:41:52
hayley
I'm only really aware of how obsolete instances are handled in SICL, so I can only comment on that. Each, say, "version" of a class has a unique stamp number.
11:42:34
hayley
So the fast path would be used if the object is a standard instance, and the stamp of the object is the same as the stamp of the class version we compiled against.
11:44:23
hayley
The slow path would be used to detect an obsolete instance, in which case we update the instance and retry, to detect obsolete code, in which case we use a slower lookup procedure (and possibly queue the function for recompilation), or signal a type error.
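The fast/slow path split described above can be sketched with a toy stamp. TOY-INSTANCE and its bookkeeping are invented here and stand in for SICL's actual instance representation.

```lisp
;; Each "class version" gets a stamp; code compiled against stamp N
;; takes the fast path only for instances still carrying stamp N.
(defstruct toy-instance
  stamp   ; stamp of the class version this instance was created under
  slots)  ; simple-vector of slot values

(defun toy-slot-read (instance expected-stamp location slow-path)
  (if (and (toy-instance-p instance)
           (= (toy-instance-stamp instance) expected-stamp))
      ;; Fast path: the layout is the one we compiled against.
      (svref (toy-instance-slots instance) location)
      ;; Slow path: obsolete instance, obsolete code, or a type error;
      ;; update and retry, do a slow lookup, or signal, as appropriate.
      (funcall slow-path instance)))
```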
11:45:30
mfiano
Like how a library might write something that utilizes S-I-A in a redefinition-safe way
11:46:15
hayley
I think this procedure would nearly work with user code, except that you would have to be able to detect obsolete code as well as obsolete instances.
11:47:16
hayley
A normal generic function for the slow path would never have obsolete code, come to think of it. So perhaps use that for the slow path?
11:47:35
hayley
The MOP dependency protocol, to my knowledge, does handle obsolete code for generic functions.
11:55:17
hayley
mfiano: Though I think programmer productivity can be improved by having a compiler which generates fast code, as you can get away with writing a simple but inefficient algorithm on some workloads. But of course the result isn't an efficient program.
11:57:13
hayley
Another thing: a few days ago I needed to modify an A* implementation in Python to track sets of visited locations. Copying all the sets took significant time according to the profiler. So I asked my colleagues if there was a HAMT implementation in Python.
11:58:27
mfiano
I haven't studied it too deeply yet, but have you seen this? https://www.gdcvault.com/play/1022094/JPS-Over-100x-Faster-than
11:58:35
hayley
But there is a problem with that idea, at least when you use CPython: there is no way a pure Python implementation would be fast, and an implementation using a C extension would increase the risk that we cannot build the program somehow.
12:01:25
hayley
In my situation, I don't think it would work too well though. The main problems are that substantial time is taken while incrementally loading the world from Minecraft, even with caching, and the cost function I use does not expose many "symmetries" to make use of.
12:05:46
hayley
Experimentally, runs in a straight line tend to only be a few blocks long. But, if I had more time, a proper investigation would be interesting. I had to replace all the data structures, as someone had only tested the implementation on little 2D mazes and not 3D open areas, and so everything blew up in size.
12:06:48
hayley
I nearly did implement something like jump points, but the points were computed while searching, rather than before.
12:08:44
hayley
Someone told me about how another group used multiple processes to retrieve world data, and my stupid chunk-at-a-time cache was apparently faster. So I guess there is not much you can do to optimise Python code.
12:09:12
mfiano
I haven't used Python in a couple decades and can't remember how slow it is. Not sure how close to acceptable better algorithmic complexity will get you over there. I just remember it being "slow", whatever I was comparing it to at the time.
12:10:36
hayley
Yeah, that was our conclusion. We wanted to use PyPy instead, but Numpy would refuse to load for whatever reason. (They were pretty surprised when I mentioned that Common Lisp had multi-dimensional arrays out of the box.)
12:11:57
pjb
My bad experience with Python was that it didn't support threads on the platform I needed it on (OpenWrt/MIPS), so in the end I wrote it in C, but it would have been faster to do it in CL and use ECL from the start.
12:12:43
hayley
On paper, it is true that performance is a property of implementations and not languages. In practice, it is still true, because you can write a fast compiler for even "slow" languages like Python and JavaScript. In practice (but for real this time), you have to be lucky for programs to not depend on behaviour of the slow implementation.
12:13:47
hayley
Common Lisp (hey, some wise people thought it was hard to compile in the 80s, and some less wise people think it is now) and JavaScript got lucky, as there have been multiple implementations of either for a long time.
12:14:39
mfiano
I vaguely recall Guido refusing to support TCO many years ago, for some crazy reasons, probably an excuse born out of his early design mistakes making it difficult.
12:16:03
hayley
In #lispcafe a video came up about some "software drag race" where the author said Lisp used bytecode interpretation. More amusingly, the rest of the video dragged on about the bytecodes employed by the Java and C# virtual machines, as if they weren't immediately discarded by the compiler.
12:16:21
hayley
Something like http://funcall.blogspot.com/2011/03/tail-recursion-and-debugging.html I guess?
12:20:23
hayley
I found it more amusing that the same misconception was nearly carried out on Java and C♯.
12:21:08
mfiano
I recently recommended to the Julia language team to make their disassembler more like cl:disassemble. They were very surprised to learn it showed code size etc (on SBCL), almost as much as they were that an "interpreted language" could have such a feature.
12:21:11
hayley
In the end, of course, the benchmark numbers reflected the reality wherein both implementations use optimising compilers.
13:25:22
jcowan
Femtolisp, which is not quite CL and not quite Scheme, is interpreted. So the idea of Lisp *compilers* probably wasn't in view.
13:27:15
jcowan
Oh. JVM bytecode is by no means discarded: all cold code is interpreted until the JIT kicks in. The CLR bytecodes are never interpreted, though.
13:30:28
hayley
Right, yes. I should elaborate: the subject of testing is a method run in a tight loop, which the JVM would run the optimising compiler on. So performance is not a function of bytecode design.
13:33:17
mfiano
jcowan: Speaking of, that seems to be you in the first comment of the above article :)
15:11:04
mfiano
Who is "they"? How can _you_ claim that, with its borrowing of many Common Lisp specific features?
15:35:27
jcowan
markasoftware_: csc compiles all code it enters in a just-in-time fashion, whereas javac does that only with hot code, using a bytecode interpreter for cold code.
15:35:46
mfiano
jmercouris: It's obviously not _a_ Lisp, but it has more in common with (Common) Lisp than most modern languages. Even its dynamic type system is a weird hybridization of structural/nominal typing in much of the same way CL is.
15:36:34
mfiano
and layers on top of that have many similarities borrowed from CL. Therefore your comment just sounded to me like an SLW rant with no basis
15:38:55
random-nick
I don't know if it still does that, but .NET used to have a windows service which compiles registered CLR .dll files to native images
15:54:42
jcowan
Ah, thanks. I am glad to see that Common Lispers see Schemers under the sign of peace.
16:00:21
mfiano
https://matrix-client.matrix.org/_matrix/media/r0/download/jews.rip/KKZVStrdeyhjHvfEApPkdCkA
16:06:28
mfiano
But, for the record, I am not a Julia user. It is a decent language if I ever needed a non-Lisp for data science. It surely is a better choice than Python, R, or MATLAB in this regard, but I am not a data scientist. I am pragmatic in that I use the language that gets the task done. It's just coincidental that that has always been CL :)
17:03:31
jcowan
Anything of the form "smug $TOPIC weenie" is inherently offensive, unless reclaimed by the people it describes.
17:21:19
jcowan
PyCall and PyJulia form a pretty good bidirectional bridge, it would seem. Perhaps "Reach for Julia when CPython is too slow" is better advice than "Reach for C when CPython is too slow". Also, Julia's interface to C/C++/Fortran doesn't involve writing fugly glue code in C.
17:23:02
jcowan
I especially like that you can transparently pass a Julia function as an argument to a Python function and vice versa, something that is usually treated as Just Too Hard.
19:01:43
moon-child
jcowan: 'just too hard'? It's certainly not _hard_ to implement, but you need to somehow manage the lifetime of the closure
21:56:45
jcowan
I came up with an idea for a non-portable extension of CL. It may be doable with the MOP, but I never learned the MOP. It looks like this: (define-type-class name type-specifier). The instances of such a class are all the objects which belong to the specified type. Otherwise they work like built-in types (or perhaps I should say they are built-in types: no slots and no constructors. However, parent types are possible, in
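The type side of that idea already exists in CL as DEFTYPE; what is missing is a class usable as a method specializer, which is where the MOP (or a nonstandard extension) would have to come in. SMALL-INTEGER and DESCRIBE-SIZE below are invented illustrations.

```lisp
;; DEFTYPE covers "the instances are all the objects of this type"...
(deftype small-integer ()
  '(integer 0 255))

;; ...but TYPECASE dispatch, not CLOS dispatch, is all you get:
(defun describe-size (x)
  (typecase x
    (small-integer :small)
    (integer       :big)
    (t             :not-an-integer)))
```

Making SMALL-INTEGER something DEFMETHOD can specialize on is the part that needs custom specializer metaobjects or an implementation-specific hook.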
22:00:07
hayley
(beach: For what it is worth, I would believe that the URL was an honest mistake on mfiano's behalf. Usually you don't see server URLs on Matrix, so it is hard to spot bad servers. And there are some of _that_ kind of person present in Lisp rooms; I know cause I had to get rid of some of them.)
22:01:10
Gnuxie
( there's some of those people lurking in this channel for sure for what it is worth )
22:03:23
hayley
What does rewriting the parent types achieve that inheritance doesn't achieve? Is it that classes defined with define-type-class don't inherit any slots or anything like that?
22:06:30
slyrus
any vellum and/or teddy users around? Other data-frame like libraries I should consider?