freenode/#lisp - IRC Chatlog
18:06:15
drunk_foxx[m]
Guys, is anyone here familiar with the University of Oslo course in Algorithms for AI and NLP, featuring Common Lisp? It's listed on CLiki, and I'm wondering if it's worth taking, so I'm asking for opinions
18:11:31
makomo
drunk_foxx[m]: i've skimmed the 2 videos where they introduce common lisp but that's all
18:11:55
makomo
it's the basic introduction to CL. i'm not sure how they use CL later on in the course itself
19:29:48
stacksmith
Is there a way to query current ASDF system name, suitable for asdf:system-relative-pathname?
19:31:37
Shinmera
If you mean the system of your own project, well, you better know the name yourself.
19:31:57
phoe
you got the dependencies wrong. your code *knows* what system it is in if it's ASDF-loaded.
19:32:05
Shinmera
If you mean the system that's being operated on during load, there's no way to get it outside of the individual perform calls, if I remember correctly.
19:32:33
phoe
because your ASD file explicitly declares to load that file. therefore there's a 1-to-N mapping between an ASD system and its files.
19:32:43
Shinmera
Eh, actually, I suppose you can traverse the component tree to the root to get the system from inner perform calls.
19:33:44
stacksmith
Agreed. However, I just spent 20 minutes tracking down a stupid bug after renaming a system - it loaded fonts from a wrong directory...
19:35:19
stacksmith
So I am just trying to find the path to the system that was loaded, until I figure out a more rigorous way to maintain resources...
19:37:17
Shinmera
For paths relative to your sources I prefer to figure out the directory without ASDF by using something like: (defvar *here* #.(make-pathname :name NIL :type NIL :defaults (or *compile-file-pathname* *load-pathname* (error "Compile or load this file."))))
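Shinmera's one-liner, broken out for readability (same code, just formatted and commented):

```lisp
;; Capture the directory of the current file at compile/load time,
;; without going through ASDF. The #. read-time evaluation bakes the
;; pathname into the compiled file.
(defvar *here*
  #.(make-pathname
     :name nil :type nil
     :defaults (or *compile-file-pathname*
                   *load-pathname*
                   (error "Compile or load this file.")))) 
```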
19:37:27
stacksmith
It seemed that declaring a global to hold the system name would be more wrong than querying ASDF - it obviously has it somewhere.
19:49:15
stacksmith
I think my issue is this: If I want to change the system name, say because I have a few versions I need to keep around (during development), I have to change the .asd file name, the contents of the .asd file, and the directory name. Going through source looking for symbols that match the name seems just wrong. Am I being foolish or grossly missing the point?
19:49:19
phoe
I remember that #lispgames had some resources for that, because they use external assets a lot and have to bundle them into an executable when they build
19:50:33
phoe
then you change the name of FOO to FRED, and you need to update foo.asd (change system name and file name), and bar.asd and baz.asd (change dependency name)
19:51:52
stacksmith
phoe: I don't even have dependent systems - I am just trying to keep foo1 and foo2 around. However I was loading data from a path relative to where 'foo lived.
20:06:59
phoe
and in other files use (asdf:system-relative-pathname :foo.resources "bar/baz/quux.jpg")
20:07:12
stacksmith
I just copied the directory with my old system, giving it a new name. Then, I had to change the asdf file name and contents, to avoid clashing with the old system. I thought that's enough for a quick and dirty way to try something new with the old system around. Then, I realized that there are hardwired references to system name, which I thought could be abstracted by querying ASDF.
20:07:43
stacksmith
I haven't thought about defining a separate system for resources... That would work.
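A sketch of that separate-resources-system idea, using the hypothetical FOO.RESOURCES name from phoe's example:

```lisp
;;; foo.resources.asd -- a system whose only job is to anchor the
;;; asset directory; code systems depend on it instead of hardcoding
;;; their own system name into asset paths.
(asdf:defsystem #:foo.resources
  :description "Assets for foo."
  :components ())  ; no Lisp files needed

;;; any code system can then resolve assets against it:
;;; (asdf:system-relative-pathname :foo.resources "bar/baz/quux.jpg")
```

Copying or renaming the code system then never touches the asset paths.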
21:23:32
jack_rabbit
Does anyone know what's up with the development of quicklisp? The client doesn't seem to have any commits on github in > 1 year, yet issues are being resolved. Are they keeping the working repo somewhere else?
21:29:10
aeth
jack_rabbit: If you're talking about quicklisp-client, it looks like the resolved issues aren't resolved through patches to quicklisp-client itself. https://github.com/quicklisp/quicklisp-client/issues?q=is%3Aissue+is%3Aclosed
21:32:34
jack_rabbit
I see. I should have looked more closely. I guess I was mostly surprised that there were no commits in the last year, but "if it ain't broke..."
22:44:36
sea
(time (loop for i in l)) and (time (loop for j being the elements of v)), where v and l are a vector and a list, both with the same number of elements (same elements, in fact)
22:46:08
sea
I don't think that's it, because then the results would be backwards. The vector has better locality and branch prediction
22:46:18
aeth
In general, vector operations will be *faster*, especially if the complete type (which includes the length!) is known.
22:49:22
sea
Okay so I changed it to 'across' instead of 'being the elements of'. It's much faster now, but the vector loop is still slower, by a factor of..4
22:50:18
aeth
(defun foo (v) (loop for j being the elements of v do (print j))) (defun bar (v) (loop for j being the elements of v do (print j))) (let ((l (list 1 2 3 4 5))) (time (foo l))) (let ((v (vector 1 2 3 4 5))) (time (bar v)))
22:50:57
aeth
When testing this sort of thing, (1) always define a function and (2) always initialize the outside data structure outside of the time
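aeth's two rules as a minimal sketch (SUM-LIST is an illustrative name):

```lisp
;; (1) Put the measured loop inside a compiled function so the
;;     compiler actually optimizes it; (2) build the data before
;;     calling TIME so construction cost and its garbage aren't
;;     included in the measurement.
(defun sum-list (l)
  (let ((sum 0))
    (loop for i in l do (incf sum i))
    sum))

(let ((l (loop for i below 100000 collect i)))  ; built outside TIME
  (time (sum-list l)))
```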
22:51:24
sea
(let ((v (coerce (iota k) 'vector)) (l (iota k))) (time (loop for i in l)) (time (loop for j across v)))
22:53:45
aeth
okay: (defun foo (l) (loop for i in l do (format nil "~D " i))) (defun bar (v) (loop for j being the elements of v do (format nil "~D " j))) (let ((l (iota 100000))) (time (foo l))) (let ((v (coerce (iota 100000) 'vector))) (time (bar v)))
22:54:18
sea
When I switched to your code, and took out the print, I had to use k = 1,000,000 before the difference showed up. The vector one is half as fast
22:58:14
sea
bar is slower in the profiler. 0.380999 sec/call, compared to 0.354999 sec/call for foo
22:59:57
aeth
The trick might be to find something that generates less garbage but that isn't optimized away
23:01:29
sea
Hrm, ran it in reverse order. The times are much closer, but bar still spends more time in the GC
23:03:59
aeth
This removes garbage... and the vector loses a lot. (I used do just to keep it consistent.)
23:04:02
aeth
(defun foo (l) (let ((sum 0)) (loop for i in l do (incf sum i)) sum)) (defun bar (v) (let ((sum 0)) (loop for j being the elements of v do (incf sum j)) sum)) (let ((l (iota 100000))) (time (foo l))) (let ((v (coerce (iota 100000) 'simple-vector))) (time (bar v)))
23:05:31
aeth
This gets the vector to win: (defun foo (l) (let ((sum 0)) (loop for i in l do (incf sum i)) sum)) (defun bar (v) (declare ((simple-array fixnum (100000)) v)) (let ((sum 0)) (loop for j across v do (incf sum j)) sum)) (let ((l (iota 100000))) (time (foo l))) (let ((v (coerce (iota 100000) '(simple-array fixnum (100000))))) (time (bar v)))
23:08:15
sea
I think so. I took that off. Now I'm running with: (declaim (optimize (debug 0) (speed 3) (space 0)))
23:08:32
aeth
(defun foo (l) (let ((sum 0)) (loop for i in l do (incf sum i)) sum)) (defun bar (v) (declare (optimize (speed 3) (debug 1)) ((simple-array fixnum (*)) v)) (let ((sum 0)) (loop for j across v do (incf sum j)) sum)) (let ((l (iota 100000))) (time (foo l))) (let ((v (coerce (iota 100000) '(simple-array fixnum (*))))) (time (bar v)))
23:08:53
sea
837,554 processor cycles vs 6,534,470 processor cycles and this time, it takes 8x as long!
23:16:25
jack_rabbit
For me, list took 4,695,880 processor cycles, vector took 722,763 processor cycles
23:17:23
jcowan
cdr-coded lists would help in this situation, but not enough overall for anyone to implement them any more
23:22:03
sea
I tried disassemble on both foo and bar but they're exactly the same as far as I can tell
23:22:41
aeth
Well, first make sure that they're not the sb-profile wrapper. You might have to (sb-profile:unprofile) before disassembling now
23:23:24
aeth
Same basic structure of generic-+, but the actual surroundings reflect iterating over their respective types
23:24:58
aeth
My latest bar has this: (declare (optimize (speed 3) (debug 1)) ((simple-array fixnum (*)) v))
23:26:50
aeth
Generic sequence and number code is almost always going to lose to specific sequence and number code in performance. They're basically the only two areas where type declarations are very useful for performance ime.
23:27:04
jack_rabbit
It doesn't matter the data type if the code iterating through it is for generic sequences.
23:27:22
sea
I need to alter the coerce as well. How do I coerce something to be a simple array of fixnums?
23:27:27
aeth
jack_rabbit: but my SBCL still optimizes bar once it knows that it is a simple-array fixnum (*)
23:28:15
aeth
sea: If it can only hold something of one non-T type, it's going to be a different thing than something that holds something of T
23:29:27
aeth
You win twice with an array type like I just gave (three times if a length is given): (1) it knows it's a certain kind of sequence and (2) it can infer what type the items are, which usually cannot be done
23:30:19
aeth
Unfortunately, this only applies to a small number of things. Portably just bit and character. Non-portably, a bunch of other numeric types like (almost always) single-float and (unsigned-byte 8) and fixnum
23:31:54
aeth
An array with an element-type should almost always be the most performant kind of sequence (or data structure in general) in Common Lisp. It will even beat lists at some things that lists are supposed to be better at.
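A direct answer to sea's coercion question, matching the COERCE forms aeth pasted earlier (whether FIXNUM upgrades to a specialized array is implementation-dependent; it does on SBCL):

```lisp
;; COERCE to a specialized array type so the compiler can infer both
;; the sequence kind and the element type.
(defvar *v*
  (coerce (loop for i below 10 collect i)
          '(simple-array fixnum (*))))

;; (array-element-type *v*) returns FIXNUM on SBCL when the
;; specialization succeeded; on other implementations it may be T.
```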
23:32:45
sea
That's how I discovered this in the first place. I was timing an 'optimized' program, and found it got slower
23:34:20
sea
and the thing is that along with the time: 445,976 processor cycles , 4,636,812 processor cycles I get a tonne of time results printed as well, and they all basically look like this. The vector one is much larger
23:36:13
sea
Okay, restarted and re-evaluated what I had in the paste before. 0.148 seconds for bar, and 0.014 seconds for foo
23:39:24
pierpa
arrays with an element-type are not necessarily more performant than arrays with generic element types. It depends on what/when/how much the elements need unboxing and reboxing.
23:48:02
sea
Why does it do that in one case and not the other? What's the behavior of 'being the elements of' supposed to be, and 'across'?
23:50:34
pierpa
nobody can tell you why "being the elements" is slow since "being the elements" is not CL. It must be an extension of the implementation you are using.
23:51:20
pillton
sea: It is defined here http://www.doc.gold.ac.uk/~mas01cr/papers/ilc2007/sequences-20070301.pdf.
23:52:06
aeth
So it's the sequence-generic version, but unlike most sequence-generic things it doesn't un-generic when the type is known
23:53:24
jack_rabbit
Is there another free CL implementation out there that works well aside from SBCL?
23:56:34
aeth
CCL has a better GC than SBCL and is fairly comparable to SBCL in performance. ECL is apparently better in some niche areas like bignum performance.
23:57:01
aeth
SBCL, though, in general is pretty nice. It's usually the fastest, the most helpful, and the most feature-rich.
23:57:31
aeth
You could definitely beat SBCL in performance, though, if you really tried. There's definitely lots of room for improvement all over the place.
23:59:10
aeth
SBCL is pretty fast, but its optimizations don't really compare to some of the ridiculous optimizations compilers with big budgets can do these days.
23:59:31
jack_rabbit
pierpa, ccl gave me an error compiling some quicklisp library. I assume that is the library's fault. clisp crashes trying to load swank, which I assume is clisp's fault.
0:00:08
aeth
Ime, libraries will usually work on CCL, often work on ECL, and give issues with just about any other implementation, especially 32-bit ones.
0:01:01
aeth
It's hard to not write for SBCL, though. There are so many ways to figure out what's going on in SBCL.
0:01:16
aeth
I'm pretty sure of how my code behaves in SBCL, at least at the default optimization levels.
0:02:28
jack_rabbit
The library is static-vectors, and the error is: "Foreign function not found: X86-LINUX64::|memset|"
0:02:48
aeth
Really? static-vectors works for me in CCL. It gives me issues in ECL, though, even though it's supposed to support it.
0:05:26
aeth
But that does seem to match my experience. Things that use CFFI are the most problematic.
0:10:27
aeth
It's unfortunate that unless CLX works for you there's no way to avoid at least some foreign code.
0:31:57
pillton
White_Flame: I'm not sure what problem static-vectors solves. Do some implementations invoke the GC during foreign function calls?
0:32:29
White_Flame
you can't pass a pointer to foreign code if it could be moved at any time in the future
0:35:11
White_Flame
and in a lot of I/O cases, including graphics, the call does not synchronously encapsulate all access to the buffer you give it
0:43:52
aeth
pillton: Without static-vectors, you're either going to be working with a foreign array through stuff like mem-aref (not a pleasant experience) or you're going to copy from a CL-native vector into a foreign array at some point (which can kill your performance).
0:44:49
aeth
With static vectors, there's no need to do either, as long as you're in control of the allocation and not the foreign library.
0:46:23
aeth
The downside is that you're going to either have to use with-static-vector/with-static-vectors or you'll have to explicitly call free-static-vector in your own unwind-protect at some point.
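The two usage patterns aeth describes, sketched with the static-vectors API:

```lisp
;; (ql:quickload :static-vectors)

;; Option 1: scoped allocation; BUF is freed automatically on exit.
(static-vectors:with-static-vector
    (buf 4096 :element-type '(unsigned-byte 8))
  ;; The raw pointer can be handed to foreign code; the GC will
  ;; never move BUF while it is live.
  (static-vectors:static-vector-pointer buf))

;; Option 2: manual management wrapped in your own UNWIND-PROTECT.
(let ((buf (static-vectors:make-static-vector
            4096 :element-type '(unsigned-byte 8))))
  (unwind-protect
       (static-vectors:static-vector-pointer buf)
    (static-vectors:free-static-vector buf)))
```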
0:47:15
aeth
I'm guessing you also can't use (declare (dynamic-extent foo)) on a static-vector to stack allocate, so that's another restriction.
0:48:12
aeth
Another downside is that it seems to fool SBCL's type inference, so I have to (declare (whatever-type foo)) after with-static-vector or a let initializing the static-vector in order to get efficient sequence code, which is unnecessary with a normal vector.
2:20:45
jack_rabbit
Can anyone with CCL execute (read-from-string "#_memset") and let me know what happens?
2:38:14
jack_rabbit
huh. I didn't even need to rebuild. Just used the download from the clozure.com site rather than my distro repo.
3:39:19
aeth
Everything on QL has to run on at least two implementations, so supporting #1 and #2 by popularity is pretty much the absolute minimum.