libera/#commonlisp - IRC Chatlog
Search
4:27:43
lukego
SLIME really needs a way to give you a hint when the long computation you're waiting for has stopped because SBCL ran out of heap space and discreetly printed a nasty warning in *inferior-lisp* before waiting with the socket open at the ldb> prompt
5:30:13
beach
lukego: SLIME is good, but it is not great, and some of the limitations are probably due to intrinsic problems with the technique being used.
6:17:38
lukego
beach: True. But that sounds a bit like a fortune-cookie comment that could apply to any project :)
6:19:29
lukego
and in this case I'm not sure that it would be advantageous to have the tooling running inside that Lisp image that has run out of heap space (or e.g. heap corrupted) and landed at the LDB prompt. The fact that it's partitioned into a separate subprocess is kind of handy because I'm restarting the Lisp process multiple times per day but never losing my editor state, since my Emacs is a lot more stable than my Common Lisp :)
6:20:47
lukego
is the long-term vision for the McCLIM ecosystem to run all of the Lisp code together in the same image? Or to partition it into multiple images, e.g. the way HEMLOCK had REMOTE stuff to manage an "inferior" lisp process way back when?
6:25:09
lukego
This is btw the main topic on my mind this morning: where should the inter-process/inter-language boundaries in my system be? which parts are best integrated, which are best separated, and for which doesn't it matter? Interesting in the context of e.g. SMT solvers, machine learning toolkits, data visualization, etc., where the benefits of integration need to outweigh a *lot* of extra work reinventing stuff that's available separately
6:26:33
beach
Right, our Common Lisp implementations are not stable enough, or safe enough, to handle an IDE in the same image.
6:27:08
beach
But it seems like the wrong approach to take for granted that our Common Lisp implementations are not great, and work around it, rather than fixing the implementations.
6:29:16
beach
I don't think that there is an agreed-upon vision for McCLIM. I know what I want, and I believe scymtym shares this vision. I also know that Shinmera (for example) does not share it. He once said something like "I will *never* use an editor that runs in the same image as my Common Lisp system".
6:30:01
lukego
My mental model of Lisp is also "I should restart the image at least once per day or week for the sake of hygiene" but I don't want to restart my editor so often
6:31:13
hayley
My mental model of Lisp is that, were someone to implement htop in CLOSOS for some reason, the machine should always have the exclamation mark that htop adds as a suffix to long uptimes.
6:31:16
beach
That is a weird model, but also probably due to restrictions of our Common Lisp implementations. Just as I don't restart my operating system even once a month, I think I shouldn't be required to restart my Common Lisp system. But the way things currently are, I pretty much have to.
6:33:16
beach
lukego: Here is how I see it. A Common Lisp implementation should have multiple first-class global environments, and instead of restarting your entire system, you might be required to trash the current first-class global environment if you (say) delete an essential function, and create a new one.
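A rough sketch of the workflow beach describes, with purely hypothetical operators; MAKE-GLOBAL-ENVIRONMENT and WITH-GLOBAL-ENVIRONMENT are illustrative names only, not part of standard Common Lisp or any shipping implementation:

;; Hypothetical API, for illustration only.
(defvar *env* (make-global-environment))      ; the environment we work in

(with-global-environment (*env*)
  (defun essential-function (x) (* x 2)))     ; definitions land in *ENV*

;; If *ENV* gets wrecked -- say ESSENTIAL-FUNCTION is accidentally deleted --
;; discard just that environment instead of restarting the whole image:
(setf *env* (make-global-environment))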
6:47:21
lukego
I know. I see lots of benefits to the fully integrated approach, and lots of benefits to the fully separated approach. I'm not sure if your model is a best-of-both-worlds or worst-of-both-worlds approach?
6:48:15
beach
Right, it's hard to tell. But I know of only one way to figure it out, namely to try it and see.
6:49:02
lukego
The expected reward isn't high enough for me, but that's a very uncertain quantity of course.
6:50:48
beach
This vision I have has resulted in 2 papers per year for 8 years, and I can't think of a better line of research than that.
6:57:59
lukego
but this topic is at the top of my mind right now. mapping out my own personal computing ecosystem in terms of Lisp, C, R, Julia, Python, etc. what to use for which tasks, and how to make them interact? I spend a lot of time bothering about this stuff -- and that's a very real cost that you don't have when you focus on a pure Lisp stack.
7:01:37
lukego
in the machine learning landscape it seems to me like some things benefit hugely from tight integration, e.g. quickly iterating gradient descent optimization over automatically-differentiated functions written directly in the application's language e.g. accessing internal data structures to guide the process. Julia has this via Flux.jl and Zygote.jl and people rave about it.
7:02:45
lukego
I was tempted to write my application in Julia to access that stuff but current feeling is that reinventing it in Lisp would be less work than (inevitably) reinventing lots of Lisp-isms that I would miss in Julia e.g. Emacs integrations.
7:04:31
lukego
On the other hand it seems like MCMC and neural network kind of stuff doesn't necessarily gain much from running inside the application. they are like extremely special-case virtual machines that you configure and run but don't really do any interesting application-specific things inside (from what I can see.) a bit like SMT solving. Maybe that stuff should just get shelled out to an off-the-shelf solver in C++/Python/etc.
7:06:26
lukego
beach: btw I respect the work that you are doing and I recognize the dream, having had the same one a long time ago, but I'm just in a different corner of this gigantic expanding computing universe these days. I regret having these nay-saying interactions with you, but it seems rooted in your value judgements of e.g. Emacs based on your own subjective criteria, so I'm not sure how to refrain from reacting :)
7:15:17
pjb
In any case, eventually this should lead to the same user experience: if you botch an environment in your CL image, you will still kill it and start a new environment…
7:15:26
lukego
(Maybe I should try harder to avoid naysaying. SICL does look like the foundation for future generations of Lispers from where I sit and that's hugely important. I'm just a bit more myopic and not often seeing much beyond a ~ 5 year time horizon.)
7:35:20
beach
lukego: You can be as much of a naysayer as you like. Like water off a duck's back to me.
7:36:05
beach
lukego: And I am not trying to convince you. What I write is for the benefit of all #commonlisp participants. Otherwise, I would do it in a private exchange with you.
7:40:21
lukego
that's the thing, I don't like being a naysayer. I suppose that I feel baited. I'll take a break on that note :)
10:21:07
greyrat
Lukego: I am migrating my Julia dev env from vscode to emacs, and I am quite pleased with eglot (whose non-REPL experience seems as good as vscode, or better for me since I can customize a lot of stuff in elisp) and Revise.jl (so that I can run a standalone REPL that automatically gets updated when I save my edits, i.e., an alternative to SLIME). (Not to mention, copy-pasting using evil text objects for 'cells' between '##' lines helps
10:21:08
greyrat
a lot with using a non-integrated REPL.) (jupyter-emacs and org-babel also used to work somewhat well, but recently have broken on my new emacs 28.)
10:21:58
greyrat
What I still haven't found is how to launch a graphical REPL without firing up the whole vscode Electron machinery, as the TUI REPL isn't as nice for working with plots and images.
10:29:27
greyrat
lukego: another thing I am working on is to create a general purpose HTTP API using Jupyter kernels that can eval code in any language. So for example, I'll load up Julia kernels with the needed dependencies, and then I can easily "shell out" to them in any language. This seems the easiest way to interop between different languages that sometimes have expensive startup costs, to me. What are your thoughts? (I currently have a
10:29:27
greyrat
self-implemented (not using Jupyter) HTTP API for zsh, and it has been perhaps the single most useful thing I have ever built. Using that, I can easily execute zsh code in, e.g., elisp, with a syntax as nice as `(z custom-zsh-function (elisp-functions-are-evaled-and-their-results-quoted) (identity some-elisp-var) automatically-quoted-string)`: `(z mkdir -p (identity my-path))` . I currently use some zsh helpers for running pipes (or
10:29:28
greyrat
directly use `(z eval 'produce | consume')`, but it's just a matter of doing the work to support `(z produce | consumer)` directly.
10:31:44
greyrat
(I have a lot of zsh code, so zsh takes ~6 seconds to boot up on my machine, which is why I don't just use `zsh -c '...'`. The same problem applies to Julia; Python starts up fast enough that I have not needed a server for it yet.)
10:59:10
greyrat
beach, pjb: After our previous discussion of the limits of scoping on eval, I have distilled my fundamental question to this: https://stackoverflow.com/questions/69334197/common-lisp-how-do-i-set-a-variable-in-my-parents-lexical-scope
11:00:49
hayley
This is intentional and done to preserve modularity (good luck debugging a function which diddles other stack frames) and the possibility of compilation (to an extent).
11:03:08
hayley
To be fair, I made up the former part, I don't know if it's true (though I suppose it is). But the latter part is a common argument for why you can't do such things.
11:05:14
hayley
Dunno about that; Gnuxie and I have written substantially about such things. To me, it's plainly unreasonable to expect everyone to implement such a feature.
11:09:23
hayley
greyrat: The way I see things is that modular design makes individuals _more_ able to manage complex projects, and not less able. So corporations which want to make motivated individuals incapable of replicating corporate efforts would rather promote anti-modular concepts.
11:14:07
greyrat
hayley: Both approaches are valid, I do not say that non-modularity is somehow good. I am saying some places benefit from non-modularity. The costs of non-modularity can be much higher for corporations, and this, coupled with their social elements of conservatism and conformity, produces the current world where big corporations are very badly positioned to use such non-modular, complex code. It's all about choosing the right tools for the
11:15:19
greyrat
seok-: No idea, if it's relevant to you, but Nyxt might be of interest. It doesn't even work well on macOS though, so you really want a Linux machine for this one.
11:16:02
hayley
ACTION uploaded an image: (11KiB) < https://libera.ems.host/_matrix/media/r0/download/matrix.org/JIAFDPcycpHnNgJEvfctzKkZ/bruhcha.jpg >
11:18:37
hayley
On the contrary (again), big and established corporations can basically bail out bad design decisions. Whereas for mortals, such as myself, one would be wasting precious time trying to make it work.
11:25:16
greyrat
hayley: "bad design" is by definiton bad. I usually use such scoping hacks for well-behaved abstractions. Kind of like how Rust uses a lot of unsafe code in its internals, but presents a safe API to the user.
11:35:25
hayley
Having written a smattering of unsafe performance hacks, I find they are still difficult to maintain.
11:42:57
greyrat
xach: the best case scenario is to have different "dialects", à la Racket, with different optimization needs and safety enforcement. "Glue" code that does not run in a hot loop can be very liberal about performance (or the lack of it).
11:45:09
Xach
Sure - if you don't like how the language is defined, you can make one more to your liking.
11:46:30
hayley
I don't think Racket even has the capabilities for such bogosity, unless your language is almost entirely interpreted.
11:59:01
beach
Wow, that is some very nasty code. I guess the fact that it is possible in Python explains the factor 50 performance penalty of Python over Common Lisp.
11:59:40
Xach
It reminds me of how the naive idea of "if I compile 'slow' language X to 'fast' language Y, my program will run faster" is false
11:59:54
beach
I mean, that such things are generally possible. Not that this particular possibility is responsible for it all.
12:01:44
beach
Right, there are language-design decisions that can make it extremely hard to write a compiler that generates fast code.
12:05:00
hayley
It probably could be done efficiently if you tried hard enough (you could use debugging information to diddle call frames, but then you would have to handle the possibility of every inferred type being violated), but it is still a bad idea.
12:05:58
_death
greyrat: one principled way to do that is to pass a closure that sets the variable, (lambda (new-value) (setf my-var new-value))... you can create a macro so that you write (setter my-var) instead
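A minimal sketch of the macro _death describes; SETTER is the name from the message above, and the CALLER example is an assumption about how it might be used:

(defmacro setter (place)
  "Return a closure of one argument that assigns that argument to PLACE
in the lexical scope where SETTER appears."
  (let ((new (gensym "NEW-VALUE")))
    `(lambda (,new) (setf ,place ,new))))

;; The callee can now update the caller's lexical variable, but only
;; through the closure it was explicitly handed:
(defun caller ()
  (let ((my-var 0))
    (funcall (setter my-var) 42)
    my-var))                                ; => 42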
12:06:13
beach
hayley: But you would also have to prevent the compiler from deleting dead variables.
12:08:15
hayley
beach: Also true. Would it affect performance substantially? You could spill dead variables and be done with them.
12:08:18
Xach
Hmm, can anyone reach https://www.lrde.epita.fr/~didier/ ? I can't get Didier's software.
12:09:48
beach
hayley: Not too bad in this case, as we decided the other day, but variables could also be replaced by strength reduction and such, so you would then have to keep them synchronized in loops.
12:11:53
beach
I think this question is another piece of evidence that it is a bad idea to let people without knowledge of compiler design make decisions about language design.
12:25:32
hayley
With regards to being careful with unsafe code: from experience, at best I find I need a piece of paper in order to convince myself that my code works as intended. At worst, I keep finding mistakes (as I have been today, while testing regular expression compilation), which is not something you want to be doing for long.
12:27:28
hayley
Somewhere in the middle is using external automation tools. For example, with concurrent programs, I implement a model in TLA+ and wait a few minutes for it to check that the program won't break certain invariants. From what I read, the miri interpreter for Rust can do vaguely similar things for unsafe code.
12:29:33
hayley
(The latter does not exhaustively check models, but it can detect races and mistakes in manual memory management. But, otherwise, you need tools outside the programming language to avoid problems that are usually not problems.)