freenode/lisp - IRC Chatlog
19:22:36
dim
nowadays we have background workers so the main CL kernel could be started on the side, and then triggers could execute in a “private” CL thread, if that's meaningful
19:23:40
dim
even better would be to expose PostgreSQL C internals too so that you could prototype core code and extensions in CL ;-)
21:14:22
s3a
Hello, everyone. :) In Common Lisp, using the time macro ( http://clhs.lisp.se/Body/m_time.htm ), it seems that this higher-order function ( http://dpaste.com/2FJADD5 ) is faster than this non-higher-order (recursive) function ( http://dpaste.com/0XJ5820 ). Why is that? Doesn't the higher-order function also (implicitly) use recursion?
21:40:25
phoe
Common Lisp is iterative and not recursive by default, which might be surprising at first, especially compared to Scheme or Clojure.
21:41:03
phoe
CL is not required to optimize tail calls and some implementations explicitly won't do it, either always, or in some situations like compiling functions with high debug values.
21:41:57
phoe
you can try compiling the second function with (declare (optimize (speed 3) (safety 1) (debug 0))) on a decently optimizing implementation like SBCL and tell me if it becomes any faster.
22:07:39
s3a
phoe, why isn't the non-higher-order function ( http://dpaste.com/0XJ5820 ) tail recursive?
22:10:12
phoe
https://stackoverflow.com/questions/33923/what-is-tail-recursion seems like a good SO answer
22:11:17
phoe
basically - if function FOO is meant to be tail-recursive, then its return value must be either something that does not involve FOO, or a direct call to FOO with different arguments
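[A minimal illustration of phoe's rule, using list length rather than the functions from the pastes; the names are made up for the example.]

```lisp
;; Not tail-recursive: the result of the recursive call is fed into 1+,
;; so the caller's stack frame must be kept alive to do that addition.
(defun len (list)
  (if (null list)
      0
      (1+ (len (rest list)))))

;; Tail-recursive: the recursive call IS the return value, so an
;; implementation that optimizes tail calls can reuse the frame.
(defun len-acc (list &optional (acc 0))
  (if (null list)
      acc
      (len-acc (rest list) (1+ acc))))
```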
22:20:20
s3a
phoe, Okay, so the non-higher-order function will bind stack frames for each element of the list, and the higher-order function will instead deal with its computation in an iterative manner. That's the answer to my question? So, would the non-higher-order one have no optimizations whatsoever? How would the optimized version of the higher-order function look, approximately or exactly?
22:30:59
Bike
sbcl is just what i happen to have open, i would be surprised if clisp was any different
22:32:44
phoe
AFAIK, apply #'+ will pass control to #'+, which accepts a variable number of arguments anyway, so what Bike said, yes.
22:34:48
phoe
you can substitute (incf acc elt) with something like (setf acc (funcall fun acc elt)) where FUN is your higher order function
22:46:37
phoe
(loop with acc = 0 ...) <- here you have a variable ACC that is initialized to 0 in the beginning.
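[Putting phoe's two pieces together, a minimal sketch; REDUCE-LIKE and FUN are illustrative names, not from the log.]

```lisp
;; LOOP with an accumulator, generalized over the combining function.
(defun reduce-like (fun list)
  (loop with acc = 0                        ; ACC is initialized to 0
        for elt in list
        do (setf acc (funcall fun acc elt)) ; generalizes (incf acc elt)
        finally (return acc)))

;; (reduce-like #'+ '(1 2 3)) sums the list.
```

Note that with #'+ specifically, the DO clause could simply be (incf acc elt): INCF both updates ACC in place and returns the new value, so wrapping it in another SETF is redundant.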
22:47:24
s3a
yes. it's just that the functional/functional-like paradigm messes with my mind since i'm still new to this
22:47:36
phoe
also, you do not need to (setf acc (incf x acc)) - INCF is a destructive macro, it modifies the place passed to it as its first argument.
22:49:27
s3a
what's DSL, though? I'm assuming it's not dick-sucking lips as urban dictionary says, lol.
22:52:58
Bike
s3a: it's nothing to do with functional programming really, the loop increment version is the same idea as you'd do writing it in java
22:56:13
aeth
What seems to work well in CL is lots of tiny, pure functions feeding into larger functions or methods that work on mutable data structures.
22:56:58
s3a
What's wrong with (defun some_function (L) (setf acc 0) (loop for x in L (incf acc x))) ?
22:57:21
emaczen
Alright, I'm trying to create some persistent objects and I wish to create a persistent linked-list. I want to make this a subclass of an already existent linked-list I have defined. This new persistent linked-list would therefore have a metaclass that is "persistent" but would inherit from a class with metaclass "standard-class" -- the error I am getting is that my class is not yet finalized.
22:57:33
aeth
In terms of style, you should use - instead of _ and write using lower case alone (which is then internally upper cased due to historical compatibility)
22:58:20
aeth
and, in fact, loop is complicated enough that you can (I forget how) just put a temporary variable directly inside of it
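[A corrected sketch of s3a's function: the accumulator is bound inside LOOP with WITH instead of being SETF'd as an undeclared global, the missing DO keyword is added, and the underscore is replaced with a hyphen per the style advice above.]

```lisp
;; Explicit accumulator, bound locally by LOOP itself.
(defun some-function (list)
  (loop with acc = 0
        for x in list
        do (incf acc x)
        finally (return acc)))

;; Or, idiomatically, let LOOP's SUM clause do the accumulation.
(defun some-function* (list)
  (loop for x in list sum x))
```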
22:59:11
Bike
so you do (defclass persistent-linked-list (linked-list) ... (:metaclass persistent-class)) and get an error about persistent-linked-list not being finalized?
23:00:43
Bike
i just want to know what, precisely, you do to trigger the error, and what that error precisely is
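[One common shape of a fix for this kind of error, sketched with CLOSER-MOP; PERSISTENT-CLASS and LINKED-LIST stand in for emaczen's actual classes, and whether this matches his library is an assumption.]

```lisp
;; Finalize the plain superclass before subclassing it under a
;; different metaclass; CLOSER-MOP portably exposes the protocol.
(unless (closer-mop:class-finalized-p (find-class 'linked-list))
  (closer-mop:finalize-inheritance (find-class 'linked-list)))

;; Mixing metaclasses usually also requires telling the MOP that the
;; combination is allowed:
(defmethod closer-mop:validate-superclass
    ((class persistent-class) (super standard-class))
  t)

(defclass persistent-linked-list (linked-list)
  ()
  (:metaclass persistent-class))
```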
23:00:44
emaczen
Bike: I wanted to test it out on this class though... maybe I'll try a simpler class first
23:19:09
emaczen
Bike: Sorry I'm having more issues... I'm trying to simplify my examples down but I'm over-simplifying...
23:39:04
s3a
Bike, what exactly is being shown when I do something like (disassemble #'some_function)? I don't even pass parameters to the function that should have parameters, yet the disassemble function knows how to disassemble the passed function. I guess I don't fully understand what disassembly is (in this context, at least).
23:43:55
Bike
i'm not sure i understand why you expect disassembly should need arguments. could you explain what you think disassembly is?
23:44:11
Bike
and no, fewer instructions does not mean more efficiency, it just means the algorithm is shorter to write. same as with source code
23:45:54
s3a
Bike, I felt that disassembly would require that the function passed to it has its arguments passed to it. so, the disassembly function should have, say, some_function passed to it and some_function should have, say, arg passed to it, and I thought disassembly was going over every single step the algorithm would have taken.
23:47:28
Bike
when you compile a function it is translated into some kind of format that is easier to do quickly. on sbcl it's machine code, on clisp it's virtual machine bytecode.
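[In other words, DISASSEMBLE takes a function object, not a call: it prints the compiled code itself, which exists independently of any particular arguments.]

```lisp
;; Compile an anonymous function and print its compiled form.
;; On SBCL this is native machine code; on CLISP, bytecode.
(disassemble (compile nil '(lambda (x) (+ x 1))))
```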
23:50:56
s3a
but how do we know that that is more efficient? couldn't apply be "beautifully" abstracting away something even less efficient?
23:51:39
s3a
basically, what's a reliable way to compare two algorithms? whether theoretically or using some kind of tool
23:52:36
Bike
we don't know it's more or less efficient, i am thinking heuristically based on my experience as a programmer and compiler developer
23:53:21
Bike
you want to compare "non-higher-order" to "higher-order" but the problem is that you leave too many degrees of freedom
23:54:44
Bike
the iterative implementation will probably be faster on a computer based on sequential execution of instructions, but that's not the only kind of computer there is, you know?
23:55:58
Bike
there are other issues as well, for example i can imagine a reasonable compiler design in which sum-list1 could be compiled more efficiently sometimes
23:57:04
Bike
or, a more reasonable example, with apply #'+, that could hypothetically be compiled by splitting up the problem into a binary tree and adding in parallel (not with standard +, but in general), whereas sum-list1 is rather forced to be sequential and eliminates this possibility
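[A sequential sketch of the tree-shaped strategy Bike is describing; the name TREE-SUM is made up, and real parallelism would need a threading library on top of this shape.]

```lisp
;; Reduce pairwise, halving the list on each pass, so the additions
;; form a binary tree instead of a left-to-right chain.  Each pass's
;; pairs are independent and could in principle be added in parallel.
(defun tree-sum (list)
  (if (null (rest list))
      (or (first list) 0)
      (tree-sum (loop for (a b) on list by #'cddr
                      collect (if b (+ a b) a)))))
```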
23:57:30
Bike
usually when you compare algorithms you just compare number of steps, in some sense, but that sense is not defined here, yeah?
0:03:46
s3a
Bike, as a much-simpler approximation, though, the higher-order function uses the + function in an iterative manner and so that is faster than recursion?
0:06:27
s3a
i didn't mean it like that. basically, iterative is always better performance-wise, right?
0:09:06
s3a
Bike, ok so whether + uses iteration or a binary tree, in both cases, it'd be faster than the recursive program from the non-higher-order function, right?
0:09:56
Bike
it could be written iteratively but really badly. what i'm saying is that you have not defined this for your question, so i'm going to avoid general statements
0:26:40
s3a
Bike, Hey. I'm back just for a quick question before going again. The apply function converts the list to a series of non-list arguments for the + operator?
0:28:20
Bike
with that kind of question i think it's helpful to just write a basic lisp implementation and see how it works
0:30:39
s3a
so, basically, the compiler just sees the (apply #'+ (list 1 2 3)) and switches it to (+ 1 2 3) with very little overhead?
0:31:07
Bike
in the general case of (apply foo bar), where neither is known and the compiler can't do something like that, it still has to work
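[Concretely, APPLY spreads its final list argument into individual arguments at call time:]

```lisp
(apply #'+ (list 1 2 3))   ; same call as (+ 1 2 3)
(apply #'+ 1 2 (list 3 4)) ; leading non-list arguments are allowed too
```

When the function and the list are only known at runtime, no compile-time rewrite to a plain call is possible, so APPLY has to do the spreading dynamically, as Bike notes.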
1:17:38
knusbaum
Looks like read-sequence will block until the sequence is filled or EOF. Is there a way to just read a bounded chunk from a stream, returning whatever's available, like POSIX read(2)?
1:36:12
knusbaum
My problem is regarding usocket. I have to be careful not to get into a situation where I'm blocked on a client who's never going to send anything, but I don't see how to do that without checking for available data before each call to read-byte.
1:47:04
knusbaum
Hmm. Looking for something in usocket to do that. Or should I be using something else?
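[One way to do this with usocket's readiness check, sketched under the assumption that a single stream socket is in play; *SOCKET* is a placeholder.]

```lisp
;; WAIT-FOR-INPUT returns when data is available or the timeout
;; expires, so a silent client cannot block the thread forever.  With
;; :READY-ONLY T the first value is the list of ready sockets (NIL on
;; timeout).
(multiple-value-bind (ready time-remaining)
    (usocket:wait-for-input *socket* :timeout 5 :ready-only t)
  (declare (ignore time-remaining))
  (if ready
      (read-byte (usocket:socket-stream *socket*))
      (format t "client sent nothing within 5 seconds~%")))
```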
2:37:02
emaczen
What is a file format for writing persistent objects? I was thinking (type :id id :slot-name slot-value :slot-name2 slot-value2 ...)
2:38:03
emaczen
Also, what would a typical access entail w.r.t opening the file reading in the data etc...
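[A minimal sketch of the plist-style format emaczen proposes, leaning on the Lisp printer and reader for serialization; SAVE-OBJECT and LOAD-OBJECTS are illustrative names. This handles readably printable data only, not function objects or shared structure, which White_Flame flags below as the hard part.]

```lisp
;; Write one record, e.g. (point :id 1 :x 3 :y 4), in a re-readable way.
(defun save-object (object stream)
  (with-standard-io-syntax
    (print object stream)))

;; A typical access: open the file and READ records back until EOF.
(defun load-objects (pathname)
  (with-open-file (in pathname)
    (with-standard-io-syntax
      (loop for form = (read in nil nil)
            while form
            collect form))))
```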
3:01:21
diegs_
Anyone used the dbus library? I'm trying to replicate this: https://gist.github.com/therockmandolinist/f2d3fbcde760435fd1ca98c83c335d4d functionality based on https://github.com/death/dbus/blob/master/examples/notify.lisp that example, but rather unsuccessfully
4:09:39
emaczen
Can someone point me to or give me an overview of how object persistence works, maybe even beyond the MOP?
4:35:04
beach
emaczen: I am guessing it is pretty messy in a system (OS + Common Lisp implementation) that was not meant for it in the first place.
4:36:54
beach
emaczen: You may consider looking at this one: https://github.com/robert-strandh/Clobber
4:49:15
White_Flame
emaczen: the biggest issues to deal with are serializing function objects and dealing with multiple references to shared objects
4:49:49
White_Flame
different libraries deal with such things in different ways. The only "standard" way persistence happens is saving your lisp image, which isn't even in the spec
4:54:21
beach
emaczen: If you can describe your needs, that might make it easier to give you advice.