freenode/#lisp - IRC Chatlog
22:23:16
Shinmera
The great thing about standards is that the only good ones are the ones you choose
22:27:37
Shinmera
According to https://developer.twitter.com/en/docs/tweets/post-and-engage/api-reference/post-statuses-update "Whether or not to put a pin on the exact coordinates a Tweet has been sent from."
22:33:45
Mqrius
Let's make a true final universal lisp style though. I mean, lisp is great and all, but there's so many brackets! If we just move them away a little, everything will look much cleaner, like python! Take for example this Fibonacci function. Isn't it beautiful? https://pastebin.com/5YkNS2Da
22:37:17
Mqrius
Hmmm yes, and the style should probably be enforced rigorously. I'm thinking the compiler deletes your file if it's wrong, and tells you to do it over.
22:38:22
jasom
we should obviously abandon s-expressions in favor of the more flexible yaml notation for lists
23:54:40
stacksmith
Great. I am having trouble figuring out the standard about arguments that are not symbols.
23:55:24
Bike
arguments can be anything, of course. in (+ 2 3) the arguments are numbers. do you mean parameter names?
23:58:28
stacksmith
By parameter I mean lambda-list description in the definition. So if I have (blah &key q &allow-other-keys), there is what I refer to as a keyword parameter :q. However, when I later call this function, the argument q may be actual :q (blah :q 1) or something like (blah (some-function-that-returns :q) 1)
23:59:32
stacksmith
So I was getting SBCL warnings about non-symbol arguments weakening checking... But now I can't. So I tried to figure out what the standard says, but got even more confused.
0:00:03
White_Flame
because of APPLY, keyword values need to be able to be runtime scanned from a generic argument list
0:00:12
Bike
indeed you can have the keys be returned from calls or whatever. it's just very unusual to do so, so sbcl complains a little.
0:06:25
White_Flame
oh, I wasn't referring to a specific part of the spec, just what it would take to APPLY a function that took keyword args
0:07:25
White_Flame
if you use DISASSEMBLE in SBCL, you'll often see something like "non-keyword parsing entry"
0:07:56
White_Flame
so there's another path that walks through the arguments one by one at runtime, seeing where in the keyword list things match up
0:08:41
White_Flame
so that needs to be resilient to dynamically passed argument lists (like APPLY) as well as computed keywords, by necessity
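A minimal sketch of what White_Flame is describing, with hypothetical function names: the same keyword-taking function called with a constant keyword, with a computed keyword, and via APPLY, so the callee has to match keywords against the argument list at runtime.

```lisp
;; A function taking a keyword argument.
(defun greet (&key (name "world"))
  (format nil "hello, ~a" name))

;; Constant keyword: the compiler can check this call at compile time.
(greet :name "lisp")                    ; => "hello, lisp"

;; Computed keyword: only known at runtime, so SBCL may warn that
;; keyword argument checking is weakened.
(defun pick-key () :name)
(greet (pick-key) "runtime")            ; => "hello, runtime"

;; APPLY with a runtime-built argument list: the callee walks the
;; list one pair at a time, matching keys against its lambda list.
(apply #'greet (list :name "applied"))  ; => "hello, applied"
```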
0:10:13
stacksmith
That is entirely sensible. If I could only figure out how I got this warning before, I'd feel a whole lot better.
0:12:49
Bike
compile-filing "(defun foo (&key a) a) (defun bar (b) b) (defun baz (x) (foo (bar :a) x))" does it for me.
0:17:31
Bike
"The first argument (in keyword position) is not a constant, weakening keyword argument checking."
0:22:08
jmercouris
Shinmera: In case you are interested, changing it to define-command fixed the indentation somehow magically :D
0:22:30
jmercouris
Shinmera: Seems like indentation on something that doesn't meet those prefixes that you listed results in improper indentation
0:36:40
stacksmith
_death: by comment, do you mean a blog entry somewhere? I am morbidly curious about things that seem to be useful, but usually are a bad idea, like doubly-linked lists...
0:39:29
stacksmith
I've had at least three situations where I thought I needed a doubly-linked list, but after writing gobs of code I always trashed it and went back to conses...
0:42:28
pjb
In those languages, it's customary to use intrusive linked lists, because of typing. (to avoid casting).
0:43:48
stacksmith
Yeah, intrusive lists are another problematic thing. It often seems like a good idea - the objects then have direct access to the remaining list, but it inevitably leads to reinventing the wheel as a good chunk of CL becomes unusable...
0:51:43
jasom
pjb: there are other advantages of intrusive lists besides avoiding casting; C++ can do non-intrusive lists via templating but people still sometimes choose to use intrusive lists
1:00:28
stacksmith
In my case, I wound up with a shitton of code in every case. Linkage maintenance is problematic. Many tricks that work with lists don't work in both directions. And mostly, all the list operations no longer work.
1:04:19
jasom
rme: the generic term is "intrusive data structure" or "intrusive container" it's where your container and data are implemented in the same structure. So for e.g. a list of two dimensional coordinats, instead of having a list containing structures with X and Y fields, you'd have a structure with X, Y, and Next fields.
1:06:08
jasom
The primary advantage comes from the fact that a new element needs only a single allocation, and that the container data and payload data have spatial locality.
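A sketch of the two layouts jasom contrasts, using hypothetical structure names: a non-intrusive cons list of points versus an intrusive list where the link pointer lives inside the payload structure.

```lisp
;; Non-intrusive: a cons list of point structures. Each element
;; costs two allocations (the cons cell plus the point).
(defstruct point x y)
(defvar *points*
  (list (make-point :x 0 :y 0) (make-point :x 1 :y 2)))

;; Intrusive: the NEXT link is a slot of the structure itself, so
;; adding an element is a single allocation, and the link data sits
;; next to the payload in memory.
(defstruct ipoint x y next)
(defvar *ipoints*
  (let ((second (make-ipoint :x 1 :y 2)))
    (make-ipoint :x 0 :y 0 :next second)))

;; Walking the intrusive list; note that none of the standard list
;; functions apply here, which is stacksmith's complaint above.
(loop for p = *ipoints* then (ipoint-next p)
      while p
      collect (list (ipoint-x p) (ipoint-y p)))
;; => ((0 0) (1 2))
```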
1:06:57
pfdietz
This reminds of utility fields that get used in algorithms. For example, a mark bit used by graph traversal algorithms.
1:08:03
jasom
pfdietz: right it's intrusive in the sense that it breaks abstractions, so you lose the benefit of the abstraction but also lose the cost.
1:08:40
pfdietz
I've wanted cl-containers to support a WITH-MAPPING macro, that allows an algorithm to use a utility field in the container's objects. If someone else wants to use that field the WITH-MAPPING macro would just hand them the slower non-intrusive map instead, transparently.
1:09:02
jasom
_death: well if you wanted points that could belong to two containers, you can have two next pointers! (Yes I've seen this in kernel code).
1:12:16
jasom
modern architectures can encourage this a lot both because indirection can be expensive *and* the only thing that matters for memory performance is the number of cache lines you access, not the number of bytes.
1:15:50
jasom
conversely I've seen a structure that was the size of a cache line and held a single word value only (plus padding). This was so that an array of them could be efficiently accessed by multiple CPUs with unshared L1 caches.
1:36:16
rme
I think it was in the context of a (concurrent) ring buffer, where I wanted the read and write pointers to be in different cache lines.
1:43:22
rme
I was trying to see if it was possible to keep up with reading packets from a 10 gig ethernet interface, and I didn't want consumer processes to have to contend for the cache line containing the write pointer, which was being continually bumped by the producer.
1:45:57
rumbler3_
how were you able to specify that different processor's cache lines would have specific data
1:49:54
_death
you make sure each item is properly aligned and of a cache line's size, and have each processor access a different one?
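One way to sketch _death's suggestion in CL, assuming a 64-byte cache line and an implementation (like SBCL) that stores (unsigned-byte 64) elements unboxed: give each per-CPU counter a whole cache line's worth of words and have each CPU touch only its own stride. (Note that CL gives no guarantee about the alignment of the array's first element, so this only approximates what you'd do in C.)

```lisp
;; 64-byte cache line / 8-byte words = 8 words per line.
(defconstant +words-per-line+ 8)

;; One counter per CPU, each padded out to its own cache line
;; (modulo the array header and start-of-array alignment).
(defun make-padded-counters (n-cpus)
  (make-array (* n-cpus +words-per-line+)
              :element-type '(unsigned-byte 64)
              :initial-element 0))

;; CPU i only ever touches word i*8, so no two CPUs contend
;; for the same cache line.
(defun counter-ref (counters cpu)
  (aref counters (* cpu +words-per-line+)))

(defun counter-incf (counters cpu)
  (incf (aref counters (* cpu +words-per-line+))))
```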
1:55:09
emaczen
what would be faster writing to a vector and then writing to a file, or writing to a string-output-stream and then writing the string-output-stream to a file?
2:34:10
pjb
rme: I/O is when you access secondary storage, not when you write to core memory, whatever API you use to write to that memory.
2:57:15
stacksmith
Bike: White_Flame: re: non-symbol keyword arguments... What do you think of 3.5.1.5 Invalid Keyword Arguments: "It is not permitted to supply a keyword argument to a function using a name that is not a symbol."?
3:00:41
Bike
the function is a function. it doesn't know or care about how what it's been passed was evaluated
3:04:29
pjb
stacksmith: (flet ((foo (&rest r &key) r)) (let ((arguments '(1 2 3 4))) (apply (function foo) arguments))) #| ERROR: Incorrect keyword arguments in (1 2 3 4) . |#
3:05:33
pjb
stacksmith: (flet ((foo (&rest r &key &allow-other-keys) r)) (let ((arguments '(:a 2 foo 4 #:bar 3))) (apply (function foo) arguments))) #| --> (:a 2 foo 4 #:bar 3) |# this is conforming.
3:05:54
pjb
stacksmith: (flet ((foo (&rest r &key &allow-other-keys) r)) (let ((arguments '(1 2 3 4))) (apply (function foo) arguments))) #| --> (1 2 3 4) |# this is NOT conforming! it happens to work in ccl, but it could fail.
3:07:45
pjb
I still think it would be fun and worth the work to implement a strictly conforming CL implementation that implements things in the most unexpected way…
3:12:10
pjb
But remember, you can always just write &rest r, and then parse r in your function as you want.
3:15:58
pjb
for example, you could write: (flet ((foo (&rest r) (destructuring-bind (&rest rr &key a b &allow-other-keys) (loop while (and (cdr r) (symbolp (car r))) collect (pop r) into ks collect (pop r) into ks finally (return ks)) (list rr r)))) (let ((arguments '(:a 0 foo 33 1 2 3))) (apply (function foo) arguments))) #| --> ((:a 0 foo 33) (1 2 3)) |# and it's perfectly conforming.
3:16:07
stacksmith
pjb: Are you sure it's not conforming? I seem to remember somewhere in the spec saying that &rest gets all arguments, and &key picks from that... So a null &key list may not really violate the spec.
3:16:44
borei
continue to work on matrix multiplication optimization, trying to use built-in options - like type declarations and optimization
3:18:11
pjb
borei: you can instead write: (unless (= …) (error …)) (rest-of-the-body) so you don't accumulate the sexp levels.
3:18:20
stacksmith
pjb: but, are these keyword arguments? I could make an argument that they are &rest arguments, and &key and &allow-other-keys does nothing here.
3:19:11
borei
in https://pastebin.com/bYpSRBP4 i need some help with understanding starting from line 6
3:19:33
pjb
stacksmith: that's the point, when you have both &rest and &key then the structure of the rest must be that of key value key value…
3:20:36
aeth
It looks like in the disassembly of the allocation of a 2D array in SBCL, there are four allocations, instead of 2 for the equivalent 1D array. I think that might be because initial-contents is only optimized if flat.
3:20:40
pjb
borei: it would be better to ask in #sbcl, but basically it says that it doesn't know what type the slots in elements are.
3:21:18
jack_rabbit
aeth, That's surprising. From what I understood, the 2D arrays were backed by a 1D physical array.
3:21:30
pjb
borei: I would assume that the compiler is able to compile (aref 2darray i j) with way faster code than (aref 1darray (+ j (* i row))).
3:21:37
Bike
on sbcl, a multidimensional array is a "header" sort of structure with a boxed single dimensional array
3:21:50
aeth
There should be no problems with 2D arrays in SBCL after the arrays are allocated. They're just 1D, and the compiler should be able to handle aref in an optimized way if the type is known (not knowing the full type, especially the dimensions, is probably going to be very expensive)
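For the point aeth and Bike are making, this is roughly the usual shape: with the full array type declared (element type and dimensions), SBCL can open-code AREF and drop bounds checks, whether the index computation is its own or hand-rolled. Dimensions and names here are made up.

```lisp
;; Fully-typed 2D access: SBCL computes the row-major index itself.
(defun sum-2d (a)
  (declare (type (simple-array double-float (4 4)) a)
           (optimize speed))
  (loop for i below 4
        sum (loop for j below 4
                  sum (aref a i j) of-type double-float)
        of-type double-float))

;; Equivalent flat version with a hand-computed index, as in
;; (aref 1darray (+ j (* i row))) above.
(defun sum-1d (a)
  (declare (type (simple-array double-float (16)) a)
           (optimize speed))
  (loop for i below 4
        sum (loop for j below 4
                  sum (aref a (+ j (* i 4))) of-type double-float)
        of-type double-float))
```

Comparing (disassemble #'sum-2d) and (disassemble #'sum-1d) is the quickest way to see whether a given implementation treats them the same.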
3:22:17
jack_rabbit
iqubic, I know, That's how I knew the answer so quickly. I needed it once and it doesn't exist.
3:22:54
aeth
But if you're concerned about memory, you should not only use 1D arrays, but use ranges within that 1D array, and allocate a bunch of matrices at once.
3:24:11
stacksmith
pjb: It actually says that &rest must be followed by a lambda-list-keyword, which leaves &key, &env and &aux...
3:24:29
pjb
iqubic: all the computable functions already exist in CL. You only have to find their names. (lambda (lol) (apply (function mapcar) (function list) lol)) is the name of the function you want.
3:24:59
borei
pjb: i didn't get that - "but basically it says that it doesn't know what type the slots in elements are"
3:28:07
pjb
Also, you see, this is a problem of using vectors instead of 2d arrays: now you have to tell the compiler that your index computation returns an index…
3:29:15
aeth
pjb: SBCL doesn't trust THE (except maybe at safety 0); it has an internal TRULY-THE that you should never use.
3:29:33
pjb
(since you use it to index an array, it should be a fixnum, so it probably is one, so it should be safe enough to use THE: if the indices or matrix sizes were too big, you'd have problems allocating them).
3:32:45
pierpa
borei: a more basic question is why you believe that your handrolled matrix will be faster than system supplied 2D arrays.
3:33:31
aeth
pierpa: It's possible that some 2D arrays in some implementations aren't very efficient because they're not very common.
3:34:17
pierpa
aeth: in theory there can be implementations that purposefully slow down access to 2D arrays, yes. :)
3:36:02
pierpa
I doubt that in such an implementation a naive user implementation would be faster, anyway
3:36:09
aeth
You don't have to purposefully write slow code, usually it's the other way around and slow code happens when you're not paying enough attention.
3:37:02
borei
im solving one problem after another, i know that constant parts of the indexes can be moved out of the loop
3:37:10
aeth
If you're doing linear algebra in pure CL, you probably want to use SBCL, though. (At least, that's the case in 2018.)
3:38:05
stacksmith
borei: keep in mind that specifying types in SBCL generates much smaller code with optimization on, but generally bigger code with it off, as it treats type declarations as requests for extra checks. And of course, thinking about optimization before you need to is a sin.
3:39:04
aeth
stacksmith: But it's still generally a win for arrays and numbers because with that type check, it at least knows to use efficient, inline code instead of not-inline, type-generic code.
3:39:35
aeth
And with arrays there's the added benefit that if you have the full type information (including the dimensions) it can get rid of bounds checks
3:41:29
aeth
I think the only time you need to worry about types is with arrays, sequences, and numbers.
3:41:34
stacksmith
aeth: can't argue with that. If it matters, of course. I find more often than not that it doesn't matter because I wind up rewriting things that I care about a few times to get the algos right.
3:42:39
pierpa
for a matrix multiplication it matters, and there's not much room for algorithmic wiggles, unless he wants to go real fancy
3:43:07
aeth
It probably really depends on what you're doing. If you're working with arrays of floats, probably the most you'd have to change later (other than dimensions) is s/single-float/double-float/ or s/double-float/single-float/
3:43:52
aeth
(Although if you're really writing efficient double-float code you do sometimes have to write some hacky workarounds to avoid consing.)
3:45:57
aeth
(let ((foo (the fixnum (foobar 42)))) (+ foo 42)) is very similar (if not identical) to (let ((foo (foobar 42))) (declare (fixnum foo)) (+ foo 42)) and in either case, tells the compiler that it can assume foo is a fixnum (it can also completely ignore it, or type-check it)
3:46:25
aeth
The advantage of THE is that you don't need the variable, so you can just say (+ (the fixnum (foobar 42)) 42)
3:46:50
aeth
The disadvantage is that you probably want CHECK-TYPE instead of THE or DECLARE. It will more portably do what you probably want.
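The three forms aeth compares, side by side (foobar is a hypothetical helper):

```lisp
(defun foobar (n) (* n 2))   ; hypothetical helper

;; 1. THE: inline assertion, no variable needed. Undefined behavior
;; if the value isn't actually a fixnum.
(defun f1 () (+ (the fixnum (foobar 42)) 42))

;; 2. DECLARE: the same promise, attached to a binding.
(defun f2 ()
  (let ((foo (foobar 42)))
    (declare (fixnum foo))
    (+ foo 42)))

;; 3. CHECK-TYPE: portably signals a correctable TYPE-ERROR when the
;; value doesn't match, instead of undefined behavior.
(defun f3 ()
  (let ((foo (foobar 42)))
    (check-type foo fixnum)
    (+ foo 42)))
```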
3:47:03
stacksmith
borei: Are you talking about constraining the result? Just return (the fixnum whatever). And for indices, use (the (integer 0 99) ...) or whatever the array size is...
3:47:59
stacksmith
When adding or multiplying, use (unsigned-byte ...) or (integer ..) of a bitsize that fits into a fixnum after the operation.
3:49:09
aeth
Of course, "undefined" (with THE or type declarations) doesn't mean "ignore it, type check it, or assume it without checking". That just happens to be what you will probably encounter. It could also sell all of your Bitcoin or tweet it to the world if the type doesn't match. Or literally anything else that a computer can do.
3:49:31
stacksmith
Also declaring the array itself as a vector or simple-array generates much tighter code with sbcl.
3:50:45
borei
from what i read make-array will create a simple-array unless you specify :fill-pointer and/or one more parameter
3:51:21
stacksmith
borei: true enough, but I think you still need to declare it in functions that use the array.
3:53:18
aeth
You should probably use specialization-store for this. https://github.com/markcox80/specialization-store/
3:53:35
aeth
The alternative is making n versions of the exact same thing, just with different types.
3:54:34
stacksmith
Also, if you are really optimizing, you may be able to arrange the loop better using tagbody - sbcl loops sometimes jump around weirdly...
3:54:41
aeth
Or you could just inline something and hope the type information is available where it is being used. But some things produce really massive functions when disassembled, e.g. many matrix operations. So those might not be a good choice for inlining.
3:55:32
aeth
stacksmith: I'm not sure that there are many cases to use tagbody directly instead of do (do even has an implicit tagbody for when you really need it, but good luck getting that to auto-indent correctly because few people use that feature)
3:57:01
aeth
An alternative to using loops for arithmetic is to use macros to essentially unroll the loop. I'm not sure how many iterations it would take before this is a bad idea. It definitely is ideal for small numbers of iterations like 2, 3, 4.
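A sketch of that unrolling idea with a hypothetical macro: the loop runs at macroexpansion time, so the expansion is straight-line code with no runtime iteration at all.

```lisp
;; Unroll a dot product of a known, small, literal length N at
;; compile time. A and B should be simple variable names, since the
;; expansion references them repeatedly.
(defmacro dot-unrolled (a b n)
  `(+ ,@(loop for i below n
              collect `(* (aref ,a ,i) (aref ,b ,i)))))

;; (dot-unrolled u v 3) expands to:
;; (+ (* (aref u 0) (aref v 0))
;;    (* (aref u 1) (aref v 1))
;;    (* (aref u 2) (aref v 2)))
```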
3:59:09
aeth
Oh, btw, on matrix multiplication in general: https://en.wikipedia.org/wiki/Matrix_multiplication_algorithm
3:59:34
aeth
"Unsolved problem in computer science: What is the fastest algorithm for matrix multiplication?"
4:00:26
aeth
stacksmith: The majority of matrix libraries in CL are graphics libraries, so 4x4 matrices (and possibly smaller ones)
4:02:08
aeth
Are there many native math libraries that aren't CL graphics math libraries? Most, I think, are wrappers.
4:02:18
borei
in quantum mech - there are only matrices and vectors, and they have complex elements
4:02:37
aeth
The Maxima CAS is interesting. It has Fortran, and it has f2cl. So I think it might compile all of its Fortran into CL!
4:04:35
aeth
In SBCL: (upgraded-array-element-type '(complex single-float)) => (complex single-float)
4:05:22
aeth
Now I think the next question is if they can be worked with in a non-consing way, just like arrays of double-floats.
4:06:52
beach
rme: Bordeaux is a great place to live, but I don't think you should come here counting on the students.
4:07:49
aeth
And it looks like it's not non-consing in SBCL: (defun foo (v) (check-type v (simple-array (complex double-float) (3))) (incf (aref v 0)) v) (disassemble #'foo)
4:09:12
aeth
By comparison: non-consing in SBCL: (defun foobar (v) (check-type v (simple-array double-float (3))) (incf (aref v 0)) v) (disassemble #'foobar)
4:09:42
stacksmith
borei: I find that when using type declarations and THE, turning optimization off sometimes generates helpful messages about dubious type declarations.
4:10:23
pjb
borei: look how slower your solution is (even with the constant subexpressions out of the loops) than indexing 2D arrays, in ccl: https://codeshare.io/2KwP3z
4:11:30
iqubic
Is there a way to take a dynamically bound string, and set it to the value of my chosing?
4:13:33
pjb
iqubic: strings may be mutable or immutable. When mutable, they may be adjustable or non-adjustable. If they're not adjustable, then you can mutate them but not their length; if they're adjustable, you can mutate them and mutate their length. Whether the string is bound, and whether it's bound lexically or dynamically, are irrelevant.
4:14:31
pjb
iqubic: notice that in general, one avoids mutating strings, since all the difficulty is knowing whether it's an immutable or a mutable string!
4:16:20
pjb
(let ((foo (make-string 3 :initial-element #\a))) (print foo) (replace foo "bar") foo) #| "aaa" --> "bar" |#
4:17:18
pjb
borei: try it with small n and big n (eg. n=2, n=3, n=4 and n=20 n=30 n=40 and n=200, n=300, n=400).
4:17:24
aeth
borei: your matrix isn't going to be anywhere near as efficient as built-in 2D arrays because you're using CLOS
4:17:47
aeth
borei: I don't see how a hand-made matrix in CLOS is going to be more efficient than built-in matrices unless you're using a JITed CL
4:18:53
pjb
ebzzry: using a library that gives you access to x11 functions to manipulate the x11 clipboard. Hence #x11
4:22:15
pjb
stacksmith: by reading a literal indeed: '("abc" #.(vector 1 2 3) #.(make-array '(2 2) :initial-contents '((1 2) (3 4))))
4:28:15
stacksmith
pjb: re immutable arrays: What you describe seem to be just literals. SBCL will actually allow modification with a warning - not that it's a good idea. Is there something in the standard?
4:28:45
aeth
borei: (This is just my opinion.) With CL, no need to have any numbers at all. Write two equivalent ways of doing something. Then use disassemble. If the disassemblies are identical (as they often are) there is no difference. If the disassemblies are very similar, you might be able to notice the differences. Only if they're very different do you need to benchmark.
4:31:46
aeth
I would not be surprised at all if someone has already written a disassemble diff tool for this.
4:33:31
rme
I'll be in town starting on the 23rd. It'd be nice to see you if you're available sometime.
4:33:45
beach
You need to act quickly. Prices of real estate are rising fast because of the increased popularity of the city in recent years.
4:33:48
k-hos
I really do like SBCL's disassemble feature, aeth, I used it the other day to confirm that macros don't have any overhead (at least simple ones)
4:34:26
aeth
k-hos: My favorite part about SBCL's disassembly is that it comments the potentially interesting things, such as constants and allocations and function calls.
4:36:04
rme
beach: From Feb. 23 through March 6th. Not sure where I'll stay; some airbnb place (or places). I'll be looking around for a place to rent for a year to start with.
4:37:21
aeth
(And I use disassembly not to write low-level code but to see how high level the code can get away with being.)
4:37:47
beach
rme: I *might* be busy until end of February. A visitor might be here. After that, I'll be available.
4:40:45
rme
Nothing beats coming to look around in person, I am thinking. Hopefully, I won't be totally priced out.
4:51:59
aeth
Bike: Should I move my length 16 array to a (4 4) array for my 4x4 matrices? You seem to know more about this than anyone else in the previous conversation.
4:52:26
aeth
It does look like SBCL can declare them dynamic extent, so they do seem to be useful enough for my uses.
4:53:48
aeth
It looks like dynamic extent still works with (list (list ...) (list ...) (list ...) (list ...)) for the initial contents (which I assume an inline make-matrix-4x4 would produce)
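The dynamic-extent pattern aeth is describing looks roughly like this (a sketch; whether the array is actually stack-allocated is up to SBCL, and the object must not escape the extent of the LET):

```lisp
(defun use-temp-matrix ()
  (let ((m (make-array '(4 4)
                       :element-type 'single-float
                       :initial-contents
                       (list (list 1f0 0f0 0f0 0f0)
                             (list 0f0 1f0 0f0 0f0)
                             (list 0f0 0f0 1f0 0f0)
                             (list 0f0 0f0 0f0 1f0)))))
    ;; Promise that M (and, in SBCL, its initial-contents lists)
    ;; won't outlive this function, enabling stack allocation.
    (declare (dynamic-extent m))
    (aref m 0 0)))   ; => 1.0
```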
5:11:08
aeth
In the length 16 version it's all MOVSS, MULSS, and ADDSS. In the dimension (4 4) version, it has a lot of MOV instructions added
5:12:11
beach
borei: I suggest you introduce a variable using LET that you sum into. That way, you can declare the type of it.
5:12:47
beach
borei: It is complaining that when you use the SUM LOOP keyword, the type of the implicit variable it sums into is not known to be DOUBLE-FLOAT.
5:14:50
aeth
Bike: It's probably slightly slower, but it's really hard to tell because my computer is too fast and any loop will optimize it away
5:14:53
beach
borei: Well, you have the LOOP keyword DO at the end of the line instead of at the beginning.
5:19:24
beach
borei: You can "play with" the variables first, but you should work on the code layout before resubmitting.
5:39:26
borei
single-threaded application on a consumer-grade CPU with performance a bit less than 1 Gflop on double-precision numbers
5:41:28
aeth
Lisp isn't slow, but it doesn't have as many optimizations as a more popular AOT-compiled language. Mainly because it doesn't have as many humans optimizing it as a more popular AOT-compiled language.
5:41:46
aeth
But I suspect a lot of those optimizations are pointless when you're just doing linear algebra.
5:45:29
borei
i got additional and huge motivation for a big dive into the LISP ecosystem, huge thanks for the support!
5:46:45
fiddlerwoaroof_
borei: if you're doing matrixy-stuff and have an Nvidia GPU, you might look into Gabor Melis's libraries, like mgl-mat
5:47:21
fiddlerwoaroof_
I've never got them to work because I don't have the hardware, but they look really nice for machine-learning applications
5:47:25
aeth
k-hos: Well, I think Lisp games would benefit the most from a concurrent, parallel, real-time garbage collector.
5:49:16
borei
fiddlerwoaroof_: actually i have question about ML, can you recommend any materials about ML+Lisp to read (entry level)
5:49:16
aeth
fiddlerwoaroof_: The most important part in the GC buzzwords for games is real time, afaik.
5:49:39
fiddlerwoaroof_
aeth: The built-in java GCs are really advanced, because of the JVM's position in the industry
5:49:48
aeth
Games are usually on some strict schedule (or two, if the rendering framerate is separate from the logic one). e.g. 0.01 second ticks or something.
5:51:52
fiddlerwoaroof_
borei: CL isn't the best yet for modern ML, just because we don't have the libraries that python/scala have. However, Gabor Melis managed to do pretty well using his own libraries
5:57:59
aeth
k-hos: Most games wind up using a garbage collected scripting language in addition to a non-garbage collected engine language.
5:59:33
fiddlerwoaroof_
Hmm, it looks like redhat has recently open-sourced a low-latency GC: https://wiki.openjdk.java.net/display/shenandoah/Main
6:00:09
k-hos
personally I would prefer no gc and have some kind of ref counting mechanism built into the language
6:03:54
k-hos
it's pauses and inconsistency that make gcs generally bad for games, but there is little point to arguing about a theoretical language anyway
6:04:18
beach
k-hos: There are very good real-time, parallel, concurrent garbage collectors these days.
6:11:09
aeth
jackdaniel: Yes, because I think there are literally dozens of people on IRC who want one.
6:26:42
aeth
It does look like the (16) array is consistently around 2.8 kilocycles compared to the (4 4) being consistently around 3.8 kilocycles. So that is quite a drop in performance, even though I have to use SBCL's cycles to even notice it at all (0.000 seconds)
6:27:20
aeth
I've played with different ways of expressing it, e.g. pulling out all of the array accesses into variables first
6:29:00
aeth
Maybe the next thing I could try is using a loop instead of doing all of the adds/multiplies.
6:38:35
aeth
It looks like the 4x4 matrix is always slower than the fake 4x4 matrix because SBCL can't access items from it as well. It generates an extra MOV for every one or two MOVSS, which adds up a lot in matrix multiplication. Both on retrieving the values and on setting the destination matrix.