libera/#commonlisp - IRC Chatlog
Search
13:27:16
Nilby
That's what I'm thinking, but I'm wondering if there's examples of how people have done that with brew or debian.
13:28:30
yitzi
I have used it in building maxima-jupyter and common-lisp-jupyter kernels, but distributing Lisp source code via brew and especially debian is a bad idea, IMHO.
13:30:18
Nilby
Right. My package is going to be just a binary, but those systems seem to prefer a source package which builds into a binary package.
13:31:22
yitzi
Are the users of your package going to load their own quicklisp packages? Or just run your binary?
13:35:29
yitzi
Then quicklisp bundles would work just fine. One problem you may have with Debian is that you are effectively distributing the source code of all of your dependencies, so you'll need to include all of the licenses in the copyright file. For brew you could always make your own tap, so you don't really need to worry about getting approval from the brew folks.
13:42:04
Nilby
Right. The big pile of licenses thing. I guess just getting a .deb or tap that people can install would be good. I see the common-lisp-jupyter and maxima-jupyter repos, but I don't see any packaging artifacts.
13:44:52
yitzi
They don't use quicklisp bundles to distribute themselves. They use it to install certain kinds of jupyter kernels. If you are looking for hints on how to do debian/brew packaging, that is not a good place to look. If you want to see how a quicklisp bundle is made, look here https://github.com/yitzchak/common-lisp-jupyter/blob/a29c0252a71d8f7147a6ecdbe9d656ffdb8f9fec/src/installer.lisp#L201
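That installer uses Quicklisp's bundle facility. For reference, a minimal sketch of making a bundle (assuming Quicklisp is loaded; the system names are placeholders):

  ;; Write a self-contained copy of the named systems plus all their
  ;; dependencies into bundle/ -- Quicklisp is not needed at load time.
  (ql:bundle-systems '("alexandria" "cl-ppcre") :to #p"bundle/")

  ;; Whoever receives the bundle just loads the generated bundle.lisp,
  ;; which registers everything with ASDF:
  (load "bundle/bundle.lisp")
  (asdf:load-system "cl-ppcre")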
13:46:06
yitzi
Clasp does Debian packaging and has a homebrew tap if you want to see examples of those.
13:55:51
green_
An experiment from 13 years ago: https://github.com/ktp-forked-repos/slowlisp . I don't recommend going down this route.
14:01:19
Nilby
The whole common-lisp-controller was a very admirable attempt by great people, but it seems to have languished.
14:02:32
green_
Nilby.. you could also look at just bundling an ocicl systems.csv file and users could pull the dependencies with ocicl! It uses the same distribution backend infrastructure as homebrew.
14:05:19
green_
287 packages built and available so far: https://paste.centos.org/view/dcd69aec still a long way to go though...
14:11:23
Nilby
I'd be glad to add to ocicl, which seems promising, but I was hoping to get to a point of a single command install with a familiar/native packager.
14:25:44
nij-
In the last example of this CLHS page, where does :first-throw go? http://clhs.lisp.se/Body/s_throw.htm
14:30:45
bike
the first throw never "reaches" the inner catch. before it can, it has to execute the unwind-protect cleanup. the cleanup initiates another throw, aborting the first.
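For reference, the example in question (quoted from memory of the linked CLHS page):

  (catch 'foo
    (format t "The inner catch returns ~s.~%"
            (catch 'foo
              (unwind-protect (throw 'foo :first-throw)
                (throw 'foo :second-throw))))
    :outer-catch)
  ;; prints: The inner catch returns :SECOND-THROW
  ;; =>      :OUTER-CATCH
  ;; :first-throw is never returned anywhere; the throw carrying it is
  ;; aborted by the throw from the cleanup form.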
14:33:11
beach
green_: Not sure, but this could just be ASDF converting warnings during compilation into an error when loading the system.
14:35:05
nij-
So maybe in that failing system it's explicitly said that warnings are converted into errors.
14:38:52
nij-
Hmm bike still weird - (unwind-protect (pprint 1) (pprint 2)) prints 1 first and then 2..
14:39:20
bike
and with the throws, it does the first throw, then the second throw. it's just that the first throw is, as i said, aborted by the second.
14:39:42
bike
with pprint, the pprint call doesn't transfer control. it's just a normal call and that completes without trying to escape the unwind-protect.
14:41:57
nij-
This kind of catch/throw mechanism only works when the throw form is nested in the catch form, right?
14:43:01
bike
you can do (defun foo (f) (catch 'fizzbuzz (funcall f))) (defun bar () (throw 'fizzbuzz 23)) (foo #'bar)
14:43:26
bike
i think this is actually why beach mentioned catch/throw to you yesterday, because you were trying to do it with block/return-from
14:43:49
beach
nij-: Like I said to you before: You can use RETURN-FROM and GO only when the BLOCK name or TAGBODY tag is in the lexical scope. So as edwlan[m] says, you need to use CATCH/THROW.
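A minimal sketch of the difference (hypothetical function names):

  ;; BLOCK/RETURN-FROM is lexical: BAR is not lexically inside FOO's
  ;; BLOCK, so DONE is not a visible block name and this is an error.
  ;; (defun foo () (block done (bar)))
  ;; (defun bar () (return-from done 23))   ; error: no block named DONE

  ;; CATCH/THROW is dynamic: the tag only has to be established somewhere
  ;; on the call stack at the moment the THROW happens.
  (defun foo () (catch 'done (bar)))
  (defun bar () (throw 'done 23))
  (foo) ; => 23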
14:46:28
Alfr
While we're at this ... something related: Is it possible to implement coroutines using CL only?
14:46:37
beach
nij-: The call stack is actually not a call stack, but a representation of what computations to do next. So if that stack is remembered and re-established later, then you can do the future computation again.
14:47:03
bike
Alfr: i don't think you could do it without a code walker/compiler extension of some kind
14:48:22
Alfr
Hm ... guess that's a no, as getting at the surrounding environment is a problem when writing a code walker.
14:50:24
nij-
Alfr how are they related? I thought coroutines are just routines that run in threads @@?!
14:52:58
Alfr
nij-, you essentially make a continuation and yield; invoking the continuation later resumes the computation where you left off, and there you essentially have coroutines.
14:53:06
beach
nij-: Sure, threads are more powerful than coroutines, but since threads are not part of Common Lisp, you can't implement them using just Common Lisp.
14:55:12
nytpu
nij-: coroutines are non-preemptive (i.e. the coroutine decides when to interrupt itself) while threads typically are preemptive (the OS or runtime decides to interrupt them); and threads are often parallel while coroutines are never parallel
14:55:58
Alfr
beach, sure. But I think the CL implementations out in the wild use OS threads, and for at least Windows and Linux both creation and switching to/from are expensive.
14:56:56
beach
Sure, for the disastrous Unix-likes (which includes Windows), it is very likely expensive.
14:57:52
nytpu
the synchronization to emulate being non-preemptive is trivial; the main thing is ensuring callers block while the coroutine is running, so you don't have the caller and the coroutine running simultaneously (if the system supports parallelism/true multithreading)
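A rough sketch of that approach, with two semaphores handing the baton back and forth so the caller and the coroutine never run at the same time (assumes bordeaux-threads with semaphore support; all names here are made up for illustration):

  (ql:quickload "bordeaux-threads")

  (defstruct coro thread in out slot done)

  (defun make-coroutine (fn)
    "FN receives a YIELD function of one argument."
    (let* ((in   (bt:make-semaphore))
           (out  (bt:make-semaphore))
           (coro (make-coro :in in :out out)))
      (setf (coro-thread coro)
            (bt:make-thread
             (lambda ()
               (bt:wait-on-semaphore in)        ; wait for the first RESUME
               (funcall fn
                        (lambda (value)
                          (setf (coro-slot coro) value)
                          (bt:signal-semaphore out)   ; hand the value back
                          (bt:wait-on-semaphore in))) ; block until resumed
               (setf (coro-done coro) t)
               (bt:signal-semaphore out))))
      coro))

  (defun resume (coro)
    (bt:signal-semaphore (coro-in coro))
    (bt:wait-on-semaphore (coro-out coro))
    (values (coro-slot coro) (coro-done coro)))

  ;; (defvar *c* (make-coroutine (lambda (yield)
  ;;                               (funcall yield 1)
  ;;                               (funcall yield 2))))
  ;; (resume *c*) ; => 1, NIL
  ;; (resume *c*) ; => 2, NIL
  ;; (resume *c*) ; => 2, T  (the coroutine body has finished)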
14:58:11
Alfr
beach, I've played with Erlang the last few days, so yeah, I do understand that it seems possible to make it rather cheap. :)
15:00:40
nytpu
Alfr: that's done by language threads not being 1:1 with os threads; IIRC you have a pool of (potentially short-lived) language threads that a set of long-lived OS threads handle, so you're not constantly creating and destroying OS threads but instead lightweight language-specific objects
15:03:56
hayley
OTP uses a "scheduler" per core, from memory, and some other OS threads for performing I/O, as it's difficult to do non-blocking file I/O in Unix.
15:06:45
hayley
I think it is reasonable that a coroutine should remember its dynamic environment (which code-walkers tend to fail at, too).
15:09:07
bike
i have not really used coroutines, but my understanding is that the yield operator essentially saves the current continuation and then returns, and then calling the coroutine again resumes from that saved continuation.
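A tiny sketch of that save-and-resume idea using the cl-cont library that comes up later in the discussion (assuming its call/cc follows the usual Scheme-style semantics; untested):

  (ql:quickload "cl-cont")

  (defvar *k* nil)

  (cl-cont:with-call/cc
    (+ 1 (cl-cont:call/cc
           (lambda (k)
             (setf *k* k)   ; "yield": save the rest of the computation
             1))))          ; value for this first run, so the form => 2

  ;; "resume": run the saved continuation with a new value
  (funcall *k* 41)          ; => 42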
15:12:37
nij-
Nilby 1M threads? How was that possible? I remember the limit on my local machine is around 9000.
15:12:38
Alfr
nytpu, or what's the difference, as BEAM threads have their own stack and heap etc.? I know that BEAM also imposes rather draconian restrictions.
15:21:14
Nilby
nij-: it's potentially possible, but in my case practically impossible, hence the crash
15:21:17
Alfr
bike, that's essentially why gilberth hates continuations, especially in combination w/ unwind-protect.
15:21:20
citizenajb
Alfr: don't people generally just use thread pools instead of creating / destroying threads (just addressing your initial comment that thread creation is expensive)? You can use coroutines in Common Lisp by combining something like the cl-cont library with a thread pool pretty straightforwardly, but it has a lot of downsides. Debugging is very
15:21:21
citizenajb
hard, coroutines are contagious (meaning you need to wrap everything with call/cc all the way down, as you have to do with standard async/await usage), exception handling is hard... I understand the appeal of the "seeming single threadedness" of async/await coding, but I've never felt it's quite the silver bullet that everyone nowadays seems to
15:22:39
hayley
If it seems single threaded, that's because you haven't ever wanted to hold any state consistent across an await. scnr
15:24:49
hayley
Coroutines also waste the other 11 cores (or 23 logical cores); thread pools are good for short-lived tasks, but I couldn't use a pool to manage a large number of long-lived connections, without breaking up the code to manage one connection.
15:25:02
bike
Alfr: i have seen a pretty good attempt at reconciling continuations with unwind-protect, in that it's well-defined, but also like an order of magnitude more complicated than CL's rules
15:25:51
Alfr
citizenajb, hayley said what I'm after nicely, in particular keeping related code in one place.
15:26:31
hayley
Each thread should (in my opinion) do something like (loop (handle-message (read-message))), but that loop would not map to a thread pool nicely.
15:28:14
hayley
And (again in my opinion) cooperative scheduling is very anti-modular, as one misbehaving module which never yields/awaits/etc can deadlock the system, and relying on yields makes for very all-or-nothing concurrency control.
15:28:43
hayley
Alfr: I already did some university work in Erlang. Seemingly I am the only person on the planet who is fine with Erlang syntax (maybe after being exposed to Prolog before).
15:29:53
jackdaniel
scheduler may also use interrupts that respect critical sections, then yield is optional
15:34:20
citizenajb93
nij- on your earlier question about non-local exits: Here's what I would do if I wanted callees to be able to cause a non-local exit in my function... https://bpa.st/M4BWC (it also has an example of using #'call-next-method, which isn't non-local but contains the same idea)
16:37:51
beach
I seem to recall there is a compatibility library for the things that are needed to inspect the call stack, like a backtrace and frame contents. Maybe by Shinmera?
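That is probably Shinmera's Dissect; a minimal sketch assuming its documented entry points:

  (ql:quickload "dissect")

  ;; Print a portable backtrace of the current stack:
  (dissect:present (dissect:stack))

  ;; Capture a condition together with its stack and restarts inside a handler:
  (handler-case (error "boom")
    (error (c)
      (dissect:present (dissect:capture-environment c))))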
16:39:38
beach
I was just looking into the CLIM debugger, and it uses SWANK which I am not very happy about.
16:40:39
beach
Maybe if I work on the CLIM debugger, or a replacement of it, that will prompt some updates.
16:41:45
beach
I am in fact working on a portable condition system that is much more complete (in terms of documentation, comments, etc.) than that of phoe, and I just couldn't see myself writing a line-oriented debugger.
18:10:23
beach
Brucio-61: I am about to leave for the day (so I'll read your answer tomorrow). Do you have strong opinions about whether the CLIM debugger uses SWANK or Dissect?
18:17:40
yitzi
If you intended to use SWANK, then a portable version such as conium would be better. If it weren't bit-rotted.
18:33:44
scymtym
beach: if you are asking whether the McCLIM debugger currently uses swank, the answer is yes. regarding the question whether it should use swank or dissect, i'm not sure. one thing to keep in mind is that swank provides things other than analyzing the stack: utilities for conditions, functions for working with restarts and eval in frame
18:34:08
scymtym
swank provides return from frame and restart frame as well but i don't think the McCLIM debugger uses those at the moment
19:20:26
jackdaniel
perhaps people work on their own personal dissects because they don't want to spend time on other people's projects ,)
23:02:07
copec
I've been setting up dnsdist load balancers at my work. It is mostly written in C++ but with LuaJIT bindings, and the config for it is a Lua source file that ends up creating the live running image, which you can alter in most ways in the live server: add/remove backend servers, frontend services, routing mechanisms, or blocks.