freenode/#lisp - IRC Chatlog
12:28:01
akater
TMA shka_: I'm aware of basics when it comes to SBCL and ECL, I just happened to encounter someone spreading claims that Lisp is hopelessly dependent on C.
12:28:06
akater
As far as I understand, once you have something like SB-SYS:*LINKAGE-INFO* and corresponding infrastructure, and have your system bootstrapped, there is no need to rely on C at all.
12:32:46
heisig
akater: The SICL project has also made a lot of progress towards bootstrapping Common Lisp from Common Lisp. It does not contain a single line of C code.
13:45:51
jcowan
There are very few language implementations that are completely independent of C, unless (e.g.) they compile I/O operations into system calls rather than libc function calls.
13:47:28
jcowan
Also, C with libgc is rather a nice language, although I've only used it once so far (I was modifying a buggy C program that had its own very limited gc).
13:49:42
jackdaniel
is that person making an argument, that CL is a crappy language because C is a crappy language? :-)
14:02:08
pjb
being dependent on libc is not being dependent on C. libc can be implemented in any programming language. Including CL.
14:37:30
Posterdati
pjb: I have the manual too, seems that it accepts f(x1, x2, ..., xn), but not f(X) with X = [ x1, x2, ..., xn ]
14:53:26
pjb
Posterdati: but in C, the functions to minimize take a vector and a parameter (closure).
15:07:20
jcowan
pfdietz_: By "fat pointer" I was talking about a data structure of some sort containing a reference to a string (or any 1d array), an offset, and a length. Unless this data structure were known to the GC, there's no way it could reclaim inaccessible parts of the string.
15:53:08
pfdietz_
Right. I was mistakenly thinking there was no way to get the displaced-to of a displaced array.
15:53:58
pfdietz_
If there were no such way, the GC could be implemented to prune off the dead parts of an array. But the presence of that function means that's not possible.
16:24:26
p0a
I'm curious about this, is common lisp faring well with concurrency? I recall that's the weak spot, is that right??
16:25:15
p0a
then again not sure why a pthread wrapper wouldn't just do the trick (on the other hand, I have next to zero experience with concurrency)
16:29:48
jackdaniel
the standard dictates what an implementation must offer; implementations may provide more
16:30:09
jackdaniel
moreover there is a portability layer which unifies the interface for basic functionality
16:31:02
jackdaniel
if you are interested in parallelism, you may be interested in checking out lparallel
16:31:55
jackdaniel
bordeaux-threads is a portability layer which maps onto implementation-specific threading (which, in turn, may use pthreads underneath; this is not important in usual case)
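[A minimal sketch of the portability layer jackdaniel describes, assuming bordeaux-threads has been loaded, e.g. via `(ql:quickload "bordeaux-threads")`; the `worker` function and counter are made up for illustration:]

```lisp
;; Assumes (ql:quickload "bordeaux-threads") has been run.
(defvar *counter* 0)
(defvar *lock* (bt:make-lock))

(defun worker ()
  ;; Increment the shared counter under the lock so the two
  ;; threads do not lose updates.
  (dotimes (i 1000)
    (bt:with-lock-held (*lock*)
      (incf *counter*))))

;; Spawn two threads and wait for both to finish.
(let ((threads (list (bt:make-thread #'worker)
                     (bt:make-thread #'worker))))
  (mapc #'bt:join-thread threads))
```

[The same code runs unchanged on SBCL, CCL, ECL, etc.; bordeaux-threads maps these calls onto whatever the implementation provides, which may or may not be pthreads underneath.]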
16:32:43
jackdaniel
maybe some scheme evangelist said something like: common lisp has a well-defined order of evaluating function arguments, that's why they can't be evaluated in parallel
16:33:09
jackdaniel
because you may evaluate the arguments before applying the function, and that's all there is to it. abstracting it with a macro is trivial
16:33:34
p0a
I see, so instead of (f g h) you want (f g* h*) with g* and h* the evaluated versions of g and h?
16:33:38
jackdaniel
and a defined order of evaluation of arguments to a function is an advantage from the intuitiveness point of view
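[The "trivial macro" jackdaniel alludes to can be sketched with lparallel futures; the name `pcall` is made up for illustration, and a kernel must be set up first:]

```lisp
;; Assumes (ql:quickload "lparallel") and a running kernel, e.g.
;; (setf lparallel:*kernel* (lparallel:make-kernel 4)).
(defmacro pcall (fn &rest args)
  "Evaluate each of ARGS in its own future (so they may run in
parallel), then apply FN to the forced results, in order."
  (let ((vars (loop repeat (length args) collect (gensym))))
    `(let ,(loop for v in vars
                 for a in args
                 collect `(,v (lparallel:future ,a)))
       (funcall ,fn ,@(loop for v in vars
                            collect `(lparallel:force ,v))))))

;; (pcall #'+ (expensive-1) (expensive-2)) evaluates both argument
;; forms concurrently, then adds the results.
```

[Note that this deliberately gives up the left-to-right evaluation order the standard guarantees for ordinary calls, which is exactly the trade being discussed.]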
16:35:39
jackdaniel
there is a lot of fear, uncertainty and despair on the internet, rarely founded in reality
16:50:17
p0a
np I helped you using my google skills, literally knowing nothing about the subject(s). :P
17:00:42
pjb
Using paper is saving trees, since the paper industry must replant trees to continue producing paper. If you stop using paper, then you kill trees!
17:29:30
jmercouris
dlowe: don't bring logic into this discussion, what do you think this is, some computer science channel?
17:30:18
jmercouris
just in case it is unclear, I am making a joke, and with that, signing off, goodbye everyone, and thanks for all the fish!
19:21:13
p_l
the better keywords for the search would be "CSP" aka "Communicating Sequential Processes"
19:32:51
jasom
I like lparallel, but I have learned that you should use separate kernels for separate parts of your processing pipeline. I actually independently discovered that, but recently saw a blog article to that effect as well.
19:37:14
jasom
this issue would be solvable with a solid green-threads implementation (where any blocking would free up a worker thread), but N to M green threads are non-trivial.
19:38:34
jasom
And green threads at all in lisp require implementation help (green threads can be implemented as a library on top of a language with continuations, but even then getting I/O correct is a lot of work)
19:42:13
pillton
jasom: I think a better solution is a data flow framework with its own scheduler. As mentioned in the article above, you tag each "data flow processor" with the kind of work it does and configure the scheduler to use a certain number of threads for that type of task.
19:43:24
jasom
pillton: that's roughly what separate lparallel kernels would accomplish, right? i.e. if I were to implement a data flow framework with lparallel, each "kind of work" would be a kernel with a certain number of threads?
19:43:55
jasom
alternatively one could just ditch lparallel's kernels and just use the queue and then manage threads manually with BT
19:46:45
pillton
jasom: I have only looked at lparallel briefly. I think the queues in lparallel assume one thread writes and another thread reads and blocks if data isn't available. You don't want this behaviour in data flow processing as it should be possible to execute a "flow"/graph with only one thread available.
19:47:46
jasom
bounded queues block when the queue is full (which is what you want for producer-consumer in a multithreaded environment)
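[The producer-consumer behaviour jasom describes, sketched with lparallel's queue package (the capacity and item counts are arbitrary):]

```lisp
;; Assumes (ql:quickload "lparallel") for lparallel.queue and
;; (ql:quickload "bordeaux-threads") for the producer thread.
(defvar *queue* (lparallel.queue:make-queue :fixed-capacity 16))

;; Producer: push-queue blocks once 16 items are waiting, which
;; throttles the producer to the consumer's pace.
(defvar *producer*
  (bt:make-thread
   (lambda ()
     (dotimes (i 100)
       (lparallel.queue:push-queue i *queue*)))))

;; Consumer: pop-queue blocks while the queue is empty.
(dotimes (i 100)
  (lparallel.queue:pop-queue *queue*))
(bt:join-thread *producer*)
```

[Both directions of blocking are exactly what pillton objects to for a data-flow graph that must make progress with a single thread.]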
19:48:44
jasom
pillton: the whole point of PVK's article is that having separate thread pools for steps in the flow graph guarantees the consumer will always have at least 1 thread available for forward progress, right?
19:51:53
pillton
Right. I wasn't sure from the article if progress would occur with only one thread.
19:53:45
pillton
Well, it seems to contradict the purpose of these libraries, i.e. you want to take advantage of parallelisation, not require it.
19:55:03
jasom
If you wanted to sugar in allowing a step to have a thread-count of 0, and replacing channel (or queue) calls with a straightforward function call, you can certainly do that in the framework.
19:55:28
jasom
but as long as blocking exists, you can't just ignore the number of threads involved.
19:57:08
jasom
which requires a lot of manual callback type things, or a real green-threads library that handles I/O.
19:58:19
jasom
pillton: well basic-binary-ipc essentially requires you to structure your code in a callback type manner. You get to write the event loop yourself though, which gives you some flexibility.
19:58:58
jasom
what do you do if you receive a partial message with b-b-i? Save it somewhere and wait for more...
20:01:11
jasom
also polling is notoriously inefficient, to the point that creating a single thread just for b-b-i might be easier.
20:02:06
pillton
Well, my view at the moment is that if there are no other items of work then you can block.
20:04:21
jasom
In the past I've done that by having a thread for all sockets that just sends to a queue when there is data. Not strictly single-threaded, but all non-socket code is single threaded then.
20:04:38
jasom
if that was unclear, it's 1 thread total for all sockets, not 1 thread for each socket.
20:05:57
jasom
right. The main-thread only waits on queues. The socket thread only waits on sockets and passes the data into queues.
20:10:05
Xach
and that's to have a pipe fd, and write a byte to it to wake up the io multiplexer when the queue (or other thread-related thing) is ready for action.
20:12:51
jasom
Xach: I've seen things like that done before (works with socketpair() as well on windows).
20:13:19
jasom
though now that I think about it, if it's completely single-threaded new data can *only* come from I/O not queues.
20:15:02
jasom
but if you're not explicitly yielding to the scheduler, you'll never check for new data from sockets while there is still work to be done, no?
20:15:35
jasom
and if you are explicitly yielding to the scheduler, no need to block if there is still work to be done.
20:15:59
pillton
You send a message to yourself and let the scheduler take care of when you are executed next.
20:16:31
jasom
pillton: and that's where callbacks need to come in, right? How else do you tell the scheduler what to run when you are next executed?
20:17:13
pillton
It isn't a callback. You send data to one of your input ports telling you to "process" that data.
20:17:30
pillton
I hate the word callback and all it encompasses. It is the one thing I want to avoid.
20:18:16
jasom
so the entire program is structured as a giant event loop, where input ports are mapped directly to single functions that will never block?
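[One way to picture the structure jasom is asking about; every name here is hypothetical, a sketch of "ports mapped to non-blocking handlers" rather than any real library:]

```lisp
;; Hypothetical sketch: a table mapping input ports to handler
;; functions that are expected never to block.
(defvar *handlers* (make-hash-table))

(defun register-port (port handler)
  (setf (gethash port *handlers*) handler))

(defun event-loop (next-event)
  "Drive the dispatch loop.  NEXT-EVENT returns (values port datum),
or NIL when the program should shut down; it is where the scheduler
would decide whether to block on I/O or deliver queued work."
  (loop
    (multiple-value-bind (port datum) (funcall next-event)
      (unless port (return))
      (funcall (gethash port *handlers*) datum))))
```

[A handler that cannot finish (e.g. a partial socket read) would re-enqueue an event for itself rather than blocking, which is pillton's "send a message to yourself" idea.]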
20:20:22
jasom
so one input port will map to "try to read from a socket" and if there is insufficient data, then you tell the scheduler to try again in the next loop?
20:21:33
jasom
and if there is no other work to be done the scheduler will block on any data from a socket.
20:22:47
jasom
so that unit needs to be intrusive into the scheduler, at least to the point of querying if we are currently idle.
20:27:04
pillton
Yes. It isn't clear if this is a good idea when you have N threads though. I need to check that.
20:27:34
jasom
It would seem to me that sockets would need at least 3 slots: the actual socket object, a buffer (for partial messages) and which input channel to message when new data arrives.
20:27:49
pillton
Anyway, I have to go. My apologies for joining the conversation and running off. The kids are making a lot of noise about breakfast.
20:27:55
jasom
Then you can have a socket pass through a state machine by changing the input channel on state transitions
21:04:46
no-defun-allowed
are there any portable unexec implementations around? i heard several lisp systems require some kind of unexec to write binaries
21:10:47
akater
In article “The Anatomy of a Loop” www.ccs.neu.edu/home/shivers/papers/loop.pdf author says “another issue with [SERIES] iterations is that they don't nest”. Does anyone understand what it means?
21:12:31
akater
I wish SERIES had something along the lines of #2Z((1 2 3 ...) (a b c ...)), maybe this was the point. But maybe I'm missing something else.
21:19:28
LdBeth
<no-defun-allowed "are there any portable unexec im"> #'no-defun-allowed: I don't think so, because it's GNU C lib specific
21:22:42
no-defun-allowed
are there any implementations of unexec that i can use for a new lisp system?
21:23:42
jasom
no-defun-allowed: IIRC the emacs unexec assumes malloc() will be used for allocating memory; most lisp implementations that dump images manage the heap themselves.
21:24:05
p_l
though arguably CMUCL/SBCL image format is ancestor of the format used by executables/libraries in Mach (including OSX)
21:24:15
no-defun-allowed
i'll probably use malloc to allocate a heap then carve that out cause i'm bad at c
21:24:21
jasom
I believe that ecl uses malloc, but it also doesn't dump images, though it's possible that unexec would allow it to (at least in single-threaded programs).
21:25:20
p_l
no-defun-allowed: SBCL-style use of mmap() is easier than grokking what actually happens with malloc()
21:25:54
jasom
a non-generational two-space collector is *far* easier than all but the dumbest malloc implementations.
21:27:31
jasom
which reminds me of a question; does the ARM64 sbcl use the non-conservative (split-stack) collector, seeing as ARM64 has 31 GPRs?
21:29:42
jasom
x86-64 uses the conservative GC and has 1 more GPR than ARM32 (since they both have nominally 16, but ARM includes the PC in that and AMD64 does not).
21:30:59
jasom
no-defun-allowed: 6 once you account for the extra stack and frame pointers. I suppose that's still more than x86.
21:32:14
no-defun-allowed
hmm, screw that idea of dumping an image then, is there a way to make libjit dump its code? (is there a #libjit?)
21:32:21
jasom
Benchmarking C code on POWER with reserving registers for globals, I found the performance drops rapidly until 18ish and then tapers off to the point where 24 is not measurably better than 32.
21:35:26
LdBeth
#'no-defun-allowed: if you are still curious about how Emacs works with unexec on mac, they just implemented the unexec function in unexmacos.c with Darwin's API. And this is obviously not portable.
21:44:10
jasom
yeah, libjit looks like a poor match; it has nested functions but they are not allowed to outlive their parents.
22:26:07
jasom
Xach: if F1 has nested function F2, only F1 and children of F1 may call F2. It looks like libjit uses the C stack for all locals.
0:12:30
jasom
Josh_2: adding :verbose t to either the asdf operation or quicklisp may show more information.
0:15:28
Josh_2
What's the best way to manage global variables that are depended upon by other files? put them all in one file?
0:16:57
jasom
Josh_2: all in one file is a not-great, not-terrible solution. It is also valid to have a defvar for the same global in all files. Lastly, if there is a clear place that implements the primary functionality that the variable is used by, put it there, and make all consumers depend upon that file.
0:18:30
jasom
ACTION assumes the value is not needed at compile time; that would change the equation.
0:19:24
Josh_2
uhm well I have a variable called *directory* which is the directory which both my javascript generator uses and hunchentoot uses to create and start the server
0:20:33
jasom
hmm; is it expected to be logically constant? If so, I would put it in a separate file that has all such site-configuration options and use defparameter.
0:21:28
jasom
oh, then it's fine. If it's being set at launch, you can just have defvars without bindings wherever it is used, and then set it at launch.
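[The pattern jasom suggests, in miniature; the `main` function and default path are hypothetical:]

```lisp
;; In any file that needs the variable: declare it special without
;; giving it a value.  Reading it before launch then signals an
;; UNBOUND-VARIABLE error instead of silently using a stale default.
(defvar *directory*)

;; At launch, set it once before starting the server:
(defun main (&optional dir)
  (setf *directory* (or dir #p"/srv/www/"))
  ;; ... generate javascript, start hunchentoot on *directory* ...
  *directory*)
```

[Because `defvar` never clobbers an existing binding, it is safe to repeat the bare `(defvar *directory*)` in every file that uses it, which sidesteps the load-order question entirely.]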