freenode/#lisp - IRC Chatlog
14:58:55
Xach
sigjuice: to install it all, you can use ql-dist::(map nil 'ensure-installed (provided-releases (dist "quicklisp")))
15:01:40
makomo
Xach: what kind of syntax is that? does it evaluate the form as if it was in the package ql-dist?
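Editor's note: the `ql-dist::(...)` form Xach used is SBCL's extended package-prefix syntax, where a package name prefixing a whole form reads every unqualified symbol in that form in the named package. A rough equivalent, spelled out by hand:

```lisp
;; SBCL reader extension: ql-dist::(map nil ...) reads the entire form
;; as if each unqualified symbol were prefixed with ql-dist::, i.e.:
(ql-dist::map nil 'ql-dist::ensure-installed
              (ql-dist::provided-releases (ql-dist::dist "quicklisp")))
```

This syntax is SBCL-specific, which is part of why Xach notes below that `foo::` tricks complicate porting to other implementations.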
15:10:07
Xach
allegroserve uses foo:: in a few places and it really puts a crimp in porting to clozure cl.
15:10:48
Xach
mfiano: fwiw http://report.quicklisp.org/2018-01-02/failure-report/gamebox-grids.html#gamebox-grids
15:13:02
mfiano
Xach: Please remove that library from Quicklisp. It has no users and the math is wrong on closer inspection.
15:43:23
phoe
Shinmera: no, COERCE'ing arrays from (unsigned-byte 8) to T and then from T to (unsigned-byte 8).
15:43:52
phoe
So working on 600kB worth of tiny archives consed up about half a gigabyte of memory in total.
15:44:16
phoe
It's easy to achieve 40000% speedups, you just need to fuck up *really* bad in the first place.
15:45:02
phoe
Or, to be more specific, coercing raw CFFI memory into simple-vector into (vector (unsigned-byte 8)).
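Editor's note: a minimal sketch of the double coercion phoe describes, assuming `ptr` and `len` come from the C side (function and variable names are illustrative):

```lisp
;; Each step below allocates and copies the full payload:
;; raw C memory -> (vector t) -> (vector (unsigned-byte 8)).
(defun slow-octets (ptr len)
  (let* ((boxed (cffi:foreign-array-to-lisp ptr `(:array :uint8 ,len))) ; conses a boxed (vector t)
         (octets (coerce boxed '(vector (unsigned-byte 8)))))           ; conses a second full copy
    octets))
```

Reading the bytes directly into a specialized vector (or sharing storage, as with static vectors below) avoids both copies.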
15:45:11
Shinmera
Reminds me of the stories of people intentionally making products slower so that they had insurance that future releases would be faster.
15:46:26
Shinmera
Other ways to provide job (and product) insurance: write very obscure code that only you can read, or add in details that nobody but you will notice
15:48:32
phoe
Unrelated: I just found a way to run threaded ECL on my Android phone, via termux. I'll blogpost about it later today.
16:19:16
whoman
as for "stacks" well it doesn't matter. CSS and HTML5 are non-programming ways to make a lot of things. there are so many teams and individuals and groups and standards and companies and gov'ts and so on that are behind "web tech".
16:19:52
whoman
i don't think one dude has enough capacity or perspective or time or energy for anything made by many people. crowd wins
16:24:27
beach
I recall a paper by Xof, explaining good and bad ways of using reader conditionals. Does anyone remember where to find it?
16:25:28
whoman
web tech is not an "invention" - its layers and layers of progressive changes and things from the whole world
16:25:51
whoman
like public transportation. we can blame cars or roads or whatever but yeah we still got to get around
16:34:15
Xach
the person is trying to disrupt the global productivity of lispers. let us not help them succeed by discussing it further!
18:25:12
phoe
"(I don't think it's easy to portably test with specialized arrays because not all Lisps support the same kind of specialized arrays.)"
18:26:11
phoe
Like, is it impossible to portably create a vector whose actual element type is an implementation-independent value that is not T?
18:26:35
phoe
Or rather, is it valid for an implementation to only ever support arrays of actual element type T?
18:27:28
specbot
Required Kinds of Specialized Arrays: http://www.lispworks.com/reference/HyperSpec/Body/15_abb.htm
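Editor's note: the section specbot links answers phoe's question: a conforming implementation cannot support only element type T, because specialized arrays of at least `bit` and `character` are required. `UPGRADED-ARRAY-ELEMENT-TYPE` reports what a given implementation actually provides:

```lisp
;; Required by the standard on every conforming implementation:
(upgraded-array-element-type 'bit)                 ; => BIT
(upgraded-array-element-type 'character)           ; => CHARACTER (or a subtype relationship per 15.1.2.1)

;; Implementation-dependent: many, but not all, implementations specialize this:
(upgraded-array-element-type '(unsigned-byte 8))

;; The actual element type of a freshly made array:
(array-element-type (make-array 8 :element-type '(unsigned-byte 8)))
```

So `bit` is the one numeric-ish element type that is portably guaranteed to be distinct from T, which is likely why phoe's test below uses `:element-type 'bit`.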
18:32:13
Bike
honestly i would expect foreign-array-to-lisp to just take an element-type argument that defaults to t
18:32:38
Bike
you might want a char[] to become an (array (unsigned-byte 8)) sometimes, but an (array t) other times, you know?
18:33:51
phoe
yes, that's why he suggested (foreign-array-to-lisp pointer array-type &rest make-array-arguments)
18:37:28
phoe
Xach: thanks, http://lisptips.com/post/44261316742/how-do-i-convert-an-integer-to-a-list-of-bits
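Editor's note: the linked tip shows converting an integer to its bits; one way to write it (function name illustrative, LSB-first order assumed):

```lisp
;; 11 = #b1011, so the bits from least to most significant are 1 1 0 1.
(defun integer->bits (n)
  (loop for i below (integer-length n)
        collect (ldb (byte 1 i) n)))

(integer->bits 11) ; => (1 1 0 1)
```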
18:42:26
phoe
(with-foreign-array (ptr #(1 0 1 0 1 0 1 0) '(:array :int32 8)) (foreign-array-to-lisp ptr '(:array :int32 8) :element-type 'bit))
18:42:42
rpg
I feel like this should be a FAQ, but I'm not finding it -- is there an easy way to print an s-expression without package qualifiers, no matter what the value of *package* is and what the home packages of the symbols are ?
18:45:48
rpg
actually, looking at this, the s-expression is entirely made up of symbols, but probably that'
18:46:22
phoe
I copied the whole expression via COPY-TREE, then destructively traversed the copy to replace all symbols with gensyms of the same name.
18:49:58
rpg
that makes sense. Question: why do (make-symbol (symbol-name x)) instead of just collecting SYMBOL-NAME (and printing strings w/o quotes)?
18:51:07
phoe
PRINT will print them with quotes, and if you could print items with ~A selectively to print strings without quotes, then you would be able to print symbols without package prefixes.
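Editor's note: a sketch of the trick phoe describes, assuming the function name; NIL is handled specially so list structure survives, and with `*PRINT-GENSYM*` off the uninterned symbols print without any `#:` or package prefix:

```lisp
(defun print-unqualified (form &optional (stream *standard-output*))
  (labels ((strip (x)
             (cond ((null x) nil)                     ; keep NIL as list terminator
                   ((consp x) (cons (strip (car x)) (strip (cdr x))))
                   ((symbolp x) (make-symbol (symbol-name x)))
                   (t x))))
    (let ((*print-gensym* nil))
      (prin1 (strip form) stream))))
```

Caveat: this also strips keywords and T of any package identity, which is usually fine for display but makes the output unreadable back into the original symbols.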
19:04:45
phoe
Could I ask for a code review? I'm submitting a PR to CFFI that makes it possible to return specialized arrays from FOREIGN-ARRAY-TO-LISP. https://github.com/cffi/cffi/pull/128
19:06:56
Bike
and i guess you could put in a compiler macro on foreign-array-to-lisp to avoid apply, but it's probably not super important
19:11:31
phoe
Bike: dropped the thing, and hm. The compiler would probably be smart enough to notice whenever MAKE-ARRAY-ARGS is NIL, since (apply #'foo bar baz '()) seems like a very simple optimization.
19:20:41
asarch
Or just define a function with a big list of all available arguments it could handle: (defun foo (list(a 0 a-p) list(b 1/2 b-p) list(c "Hello, world!" c-p)) ... )
19:23:27
Bike
you can define generic functions, which do different things based on the classes of their arguments. you can't write definitions with different numbers of arguments of which only one is picked by the compiler, like in C++, though you can have a function that takes a variable number of arguments and does different things depending on the number.
19:23:50
dlowe
asarch: not sensibly. You could just define all your functions with a &rest parameter and do your argument parsing in the function body. It would be annoying and slow, though.
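Editor's note: a sketch of the two alternatives Bike and dlowe describe, with illustrative names:

```lisp
;; Class-based dispatch is built into CL:
(defgeneric describe-thing (x))
(defmethod describe-thing ((x integer)) (format nil "int ~D" x))
(defmethod describe-thing ((x string))  (format nil "string ~S" x))

;; Arity-based "overloading" as in C++ must be simulated by hand:
(defun foo (&rest args)
  (ecase (length args)
    (1 (first args))
    (3 (apply #'+ args))))
```

The `&rest` version works but, as dlowe notes, pushes argument parsing to runtime inside the function body.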
19:31:51
__rumbler31
phoe: do you have a commit published where you made your changes that resulted in the speedup? How did you figure out that the coercing was the cause, and what did you change to alleviate that?
19:36:27
phoe
I used SB-PROFILE:PROFILE to profile a few functions - the decompression function itself, the LZMA-DECODE function (CFFI callback), and then I extracted the COERCE into a separate function that was profiled as well.
19:36:55
phoe
The coercion consed up to half a gigabyte of memory and took almost all execution time.
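Editor's note: a sketch of the SBCL-only profiling workflow phoe describes (the function names being profiled are hypothetical):

```lisp
;; Instrument the suspects, run the workload, then read the report,
;; which lists per-function time and bytes consed:
(sb-profile:profile decompress lzma-decode coerce-payload)
;; ... exercise the code ...
(sb-profile:report)
(sb-profile:unprofile)
```

Extracting the suspect `COERCE` call into its own named function, as phoe did, is what lets the profiler attribute the consing to it.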
19:37:42
phoe
__rumbler31: https://github.com/phoe/cl-lzma/commit/708e5b55fb527f481ef0607b04a5065da68ee955 this is a pretty big commit, but:
19:38:13
pjb
asarch: CLOS has defgeneric and defmethod, but this is only one way to do it. You can do it your own way with asarch:define-overloaded-function
19:38:25
phoe
https://github.com/phoe/cl-lzma/commit/708e5b55fb527f481ef0607b04a5065da68ee955#diff-8a9f675836b18f93ed95c87feba2a29bL128 - this is the offending call. Note that first FOREIGN-ARRAY-TO-LISP coerces from C memory into a Lisp array of element type T, and then COERCE coerces from type T into an octet-vector.
19:39:10
phoe
What I did was remove both conversions, from C memory into simple-vector and from simple-vector into octet-vector, by using a static vector.
19:40:00
phoe
I gave the C callbacks pointers into raw memory that is the storage of the static vector. This way, the C function can operate straight on the memory that is used by the static vector, and I can then use this inside Lisp code without any conversions or copying.
19:57:01
phoe
since you are creating an object that is a valid Lisp array and also contains a valid C array
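Editor's note: a minimal sketch of the static-vector approach phoe describes, using the static-vectors library (the foreign decoder call is hypothetical):

```lisp
;; BUF is an ordinary Lisp octet vector whose storage is pinned,
;; so C code can write through PTR and Lisp reads the same memory,
;; with no conversion or copying in either direction.
(static-vectors:with-static-vector (buf 4096 :element-type '(unsigned-byte 8))
  (let ((ptr (static-vectors:static-vector-pointer buf)))
    (some-c-decoder ptr 4096)   ; hypothetical CFFI call into the C library
    (subseq buf 0 10)))         ; use the decoded bytes from Lisp
```

`with-static-vector` also guarantees the storage is freed on exit, which matters because static vectors are not managed by the GC.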
20:55:10
jmercouris
I only have a VM, so I'm not sure how quick things are, and I'm getting some extremely strange GTK errors, not sure how much is my code or VM
20:56:52
jmercouris
jasom: If you're curious about the source code, it's all available for you to look at
20:59:17
jmercouris
jasom: Within utility.lisp comment out (defun start-swank ...) it should let you run it anyway with that in though
21:02:52
jmercouris
You also have to add cl-webkit to your local projects: https://github.com/nEXT-Browser/cl-webkit
21:06:37
phoe
https://www.reddit.com/r/Common_Lisp/comments/7npeab/ <- I made a HOWTO on installing emacs + ECL on Android via termux.
21:07:57
jmercouris
"Run apt install emacs for installing vim. Install spacemacs or download your favorite emacs configuration."
21:13:17
jasom
jmercouris: I'll get back to you tomorrow on this; I'm running gentoo, and currently not at my faster machine, so it will take a couple of hours to build webkit
21:14:29
jasom
I can build webkit in 65 minutes on my faster machine, but I'm on a 4 year old xeon right now, so it will take maybe 3 hours to build
21:15:54
jasom
jmercouris: make a deb that works on ubuntu and force everyone else to build from source; that's the Linux way :)
21:17:19
Shinmera
jmercouris: You might have luck with AppImage. Alternatively you can wait for me to get around to turning Portacle into a platform
21:17:26
jasom
jmercouris: that should work. If you want it to get upstreamed you'll have to do more (the author of pgloader did this)
21:17:56
jasom
jmercouris: easiest way to check your deps are right is to use an lxc or vm image of a fresh install and see if it works there.
21:18:24
jasom
for actually getting it upstreamed, the author of pgloader has a blog article about it
21:37:28
aeth
AppImage is probably the least popular of the three options (the other two being Flatpak and Snappy, Snappy being Canonical's NIH)
21:39:19
aeth
GNOME is backing Flatpak and Ubuntu is backing Snappy, which is why they're probably more popular.
21:39:57
jmercouris
I'm sure that's an unpopular opinion, but I don't feel like packaging my app in 20 different ways
21:40:27
aeth
I'd personally go with Flatpak, then. It seems to be extremely available, e.g. Fedora installs it by default.
21:41:15
jmercouris
I'm mentally settled on just making a deb file and letting everybody else figure it out
21:41:31
jmercouris
debian + ubuntu should cover most users I think, and anyone who doesn't use those distros should be used to installing from source
21:44:01
aeth
Personally, if I released packages, I'd just do a Flatpak and if someone wants to make distro-specific packages the community can do it themselves.
21:45:12
aeth
The old portable method was a shell script, usually installing things to ~ or ~/foo/ with an option to install them to /usr/local/
21:46:35
aeth
In checking, I just found out that I have a /usr/@DATADIRNAME@/ so that shows you how reliable such scripts are.
22:07:39
pillton
asarch: Specialization-store can do the kind of thing you are after: https://github.com/markcox80/specialization-store/wiki/Tutorial-2:-Optional,-Keyword-and-Rest-Arguments#rest-arguments
22:22:52
aeth
I guess it's just missing ABCL and CLISP support (fairly common due to how those implementations work) and Clasp support (also fairly common, due to Clasp not being complete yet)
22:23:37
aeth
Also, Mezzano (is it its own implementation?) and MKCL (probably the least supported implementation for libraries)
22:24:59
pillton
Well, I am only one person. It took nearly four years for SS to get where it is now.
22:25:55
aeth
Afaik, SBCL+CCL+ECL+CMUCL covers most people outside of the commercial Lisp world, and any large application will already have some dependency that doesn't support the rest.
22:28:01
pillton
ECL doesn't implement the environment API of CLtL2. The compile time functionality of SS requires it.
22:28:26
aeth
pillton: So how do you handle support on implementations that don't support it at compile time?
22:28:28
pillton
Implementing that and updating introspect-environment is a good project for someone.
22:31:48
aeth
I don't think I can use specialization-store until ECL supports introspect-environment, for portability.
22:32:30
Bike
and i thought specialization store still worked with no type information, it just did runtime checks?
22:32:35
aeth
The problem is the general advice right now is to try (1) SBCL, (2) CCL, (3) ECL in that order.
22:35:33
pillton
The function used in SS to perform discrimination is a binary tree. It isn't as bad as you might think. See the results in https://github.com/markcox80/template-function/wiki/Tutorial-1:-An-Introduction.
22:37:02
aeth
pillton: Would specialization store work for an inline function that takes either three arguments or one argument, where the one argument variant is actually based on the three argument version? I have two vector representations (multiple value and actual vector) and I implement all of my vector math as multiple value math, currently using %foo as the multiple-value name
22:39:33
aeth
pillton: Currently, I have (vec+ v1 v2) and (vec+-into! result-v v1 v2) that are both implemented based on an inline (%vec+ v1-x v1-y v1-z v2-x v2-y v2-z) and I do this for everything (vectors and quaternions) except matrices, where the multiple value approach is too inelegant
22:40:09
aeth
So e.g. there's also a %quaternion+ that takes in 8 values used as the basis of quaternion+ and quaternion+-into!
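Editor's note: a minimal sketch of the `%VEC+` pattern aeth describes, with illustrative names; the inline multiple-value core is pure and non-consing, and the two public entry points wrap it:

```lisp
(declaim (inline %vec+))
(defun %vec+ (x1 y1 z1 x2 y2 z2)
  "Pure, non-consing core: componentwise add as multiple values."
  (values (+ x1 x2) (+ y1 y2) (+ z1 z2)))

(defun vec+ (v1 v2)
  "Consing entry point: returns a fresh vector."
  (multiple-value-call #'vector
    (%vec+ (aref v1 0) (aref v1 1) (aref v1 2)
           (aref v2 0) (aref v2 1) (aref v2 2))))

(defun vec+-into! (result v1 v2)
  "Destructive entry point: writes into RESULT."
  (multiple-value-bind (x y z)
      (%vec+ (aref v1 0) (aref v1 1) (aref v1 2)
             (aref v2 0) (aref v2 1) (aref v2 2))
    (setf (aref result 0) x (aref result 1) y (aref result 2) z)
    result))
```

This is the "three versions per operation" tradeoff aeth calls messy: one core plus a consing and a non-consing wrapper for every operation.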
22:40:33
pillton
I have no general advice for mathematical problems. It is probably the hardest domain to design for.
22:40:44
aeth
The "%" is a bit of a misnomer because I actually mostly use them directly because I put a lot of my vectors in 2D arrays, where the row represents the actual value and I retrieve multiple values
22:41:40
aeth
pillton: Well, there are definitely higher-performance ways to do what I'm doing. I do the multiple-values approach because it gives me pure functions that do not cons. It makes the API really messy, though.
22:42:17
aeth
A naive-consing vector API will have 1/3 the functions that I have (naive consing, -into, and multiple-values)
22:43:02
aeth
Using the -into API winds up making everything look not very mathematical and more like a machine
22:46:13
aeth
Unless I move to SBCL-only SIMD, there isn't really a disadvantage to using the multiple value approach directly. It actually beats writing for the vectors themselves because it essentially is a loop unrolling via macro.
22:47:09
aeth
I'd like to make the whole thing cleaner, though. Having three versions that do the same thing with different tradeoffs is messy.
22:47:56
pillton
You always have that trade off. It is even worse with matrices since they can be row-major, column major or some general stride setup.
22:56:40
Shinmera
Things improve if you tackle domains that you have a good shot at improving. Linear algebra is highly likely not one of those.
23:23:41
aeth
At least for game linear algebra, it looks like the optimal data structure (e.g. if I were implementing it in x86-64 asm directly) would be a buffer of what SBCL calls simd-packs.
23:24:30
aeth
i.e. instead of an array of 32000 (sb-ext:%make-simd-pack-single)s, you do SIMD on offsets into a buffer that can hold the equivalent amount of data.
23:25:07
aeth
It could be kept compact by e.g. swapping the last valid element in the buffer with the one you're removing and then removing the last valid element in the buffer
23:26:08
aeth
If this scheme could work and the GC could skip over this chunk of memory, too, that would be amazing.
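Editor's note: a sketch of the compaction aeth describes, an O(1) unordered removal that keeps the live elements contiguous (using a fill pointer to track the live count):

```lisp
;; Overwrite the victim with the last live element, then shrink.
;; Order is not preserved, but the buffer stays dense.
(defun swap-remove (buffer index)
  (let ((last (1- (fill-pointer buffer))))
    (setf (aref buffer index) (aref buffer last))
    (decf (fill-pointer buffer))
    buffer))
```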
23:28:43
aeth
To be really worth it in a library one would have to (1) make it portable to at least CCL and ECL even if the portable part is a slower fallback in native CL and (2) add no foreign library dependencies
23:30:57
pillton
You could also consider adopting an API similar to LDB and DPB but for chunks of an array.
23:38:13
JonSmith
probably a set of VOPs and some sort of logic to trigger them when you're doing *special vector stuff*, it's been a long time since i looked at the sbcl internals so i can't remember how the optimizer/emitter works
23:40:56
JonSmith
that would be a couple levels of abstraction up from what i'm suggesting, so presumably lisp code that does a dot product of two vectors that have type declarations
23:51:39
stylewarning
pillton: aeth: within a couple weeks we will be releasing our linear algebra library. It could use lots of work for the high level API, but the BLAS/LAPACK bindings are solid.
0:20:30
aeth
I definitely think the approach for game linear algebra (which is mostly math vectors of size 2, 3, or 4 (where size 4 can be quaternions that encode rotations) and square matrices no larger than 4x4) shouldn't be to rely on anything automated on the compiler's side, and should work with buffers rather than with individual objects.
0:22:16
aeth
At the moment, I use 2D arrays with row sizes of 3 or 4, but 1D arrays with a special aref could also work. This approach is treating each float individually instead of together like SIMD, though.
0:23:03
aeth
pfdietz: I think creating a special type in an extension that works portably over SBCL, CCL, ECL, and maybe CMUCL would probably be the direction to go, rather than doing it automatically.
0:23:49
aeth
Alternatively, just supporting SBCL with the fast path and using 1D arrays of single-floats for everything else.
0:24:42
pillton
Something like (simd op dest src1 ...) where sources could be a simd-pack or a (chunk <array> <index>).
0:24:57
aeth
For games, the heavy lifting of the rendering is done on the GPUs, but there's plenty of things that are done on the CPUs. The most intensive is probably game physics, some of which can be done on the GPU, but then that limits how pretty you can make the game look. GPU physics has kind of fallen out of favor, actually, because of this.
0:26:19
aeth
pillton: I don't quite like simd-pack because an array of simd-packs isn't going to be as ideal as working with offsets into a buffer, at least for data structures used in games.
0:28:21
pfdietz
How would this fit with machine learning, which is another big user of those coprocessors these days?
0:29:03
aeth
Well, I was thinking about this from the perspective of games. Games don't need to have extremely optimized linear algebra. The very heavy lifting for rendering is done on the GPU and for the rest, it's more about hitting seconds per frame targets like 0.0167 or 0.01 or 0.005
0:29:20
aeth
So it should be easier to reach acceptable performance with game libraries, before more general purpose linear algebra.
0:31:04
aeth
And that data, after the game logic processing, is going to end up in (ideally preallocated) static-vectors sent via OpenGL to the GPU
0:39:33
aeth
For GPU computation, I'm optimistic about SPIR-V eventually being the one-size-fits-all solution for compiling shaders expressed as a mini-Lisp language that can work in OpenGL, OpenCL, and Vulkan. Eventually. It will take a while for it to be supported widely enough.
2:11:05
cryptomarauder
ugh, instead of going away like I prayed for in early 2000 it's only gotten bigger. Why is java a thing still?
2:13:12
krwq
hello, is it possible to force #'read to not read reader macros? i.e.: (with-input-from-string (inp "#.(+ 2 3)") (read inp)) => 5 - I'd like it to report the error instead
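Editor's note: for the `#.` case specifically, the standard answer is `*READ-EVAL*`: when bound to NIL, `#.` signals a READER-ERROR instead of evaluating. It does not disable other reader macros, so this complements rather than replaces the readtable-stripping approach pjb shows below:

```lisp
(with-input-from-string (inp "#.(+ 2 3)")
  (let ((*read-eval* nil))
    (read inp)))   ; signals READER-ERROR rather than returning 5
```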
2:18:04
cryptomarauder
and make sure your safety and debug optimization qualities are set accordingly to what you desire as well
2:18:44
aeth
Anyone who makes a Lisp implementation for Intel CPUs (or otherwise deals with syscalls on Intel CPUs) might be interested in this rumor of a major slowdown in an upcoming kernel patch: https://www.reddit.com/r/hardware/comments/7nngqd/intel_bug_incoming/
2:27:03
pjb
krwq: it is possible: (let ((*readtable* (com.informatimago.tools.reader-macro:remove-all-macro-characters (copy-readtable nil)))) (read-from-string "Hello")) #| --> hello ; 5 |#
2:28:13
pjb
aeth: again, as I always say: write it in fucking lisp, don't do FFI! (syscalls are FFI on non-lisp-machine OSes).
2:29:46
krwq
pjb: but with remove-all-macro-characters I assume also comments etc. wouldn't work, right?
2:30:12
pjb
Well, it won't signal an error: (let ((*readtable* (com.informatimago.tools.reader-macro:remove-all-macro-characters (copy-readtable nil)))) (read-from-string "#.(+ 2 3)")) #| --> |#.(+| ; 5 |#
2:31:23
krwq
pjb: I think I'm fine with parsing everything as long as it won't execute anything by default...
2:31:39
aeth
pjb: Except Mezzano is usually run in a VM and not bare metal, so even most Mezzano instances are affected
2:35:08
pjb
krwq: if you have untrusted data to parse, then do that: parse it! Don't use the lisp reader for that; as you can see, the bare lisp reader only gives you symbols, integers and floating-point numbers (and what makes an integer or a symbol is dependent on *read-base*). You can play tricks with reader macros, but it will always be easier to write and use a normal parser function.
2:36:16
aeth
My favorite reader macro is #4f(+++++++++[>++++++++<-]>.<+++[>++++++++<-]>+++++.+++++++..+++.>++++[>++++++++<-]>.<<<+++[>--------<-]>.<+++[>++++++++<-]>.+++.------.--------.>>+.<++++++++++.)
2:36:48
pjb
(#4f(+++++++++[>++++++++<-]>.<+++[>++++++++<-]>+++++.+++++++..+++.>++++[>++++++++<-]>.<<<+++[>--------<-]>.<+++[>++++++++<-]>.+++.------.--------.>>+.<++++++++++.))
2:38:27
aeth
#4f means it creates a brainfuck machine of size 4. Although technically they should be infinite, it wastes space
2:39:55
pjb
Libraries should provide the reader macro functions, not bound reader macro characters. (they may provide a macro to bind them, but let the user do that).
2:44:32
aeth
The problem with BF in general is that... it's hard to build data structures that aren't 0-terminated without copying everything and moving the world around with you. So I suspect adding a calling and return convention probably wouldn't be useful.
2:45:56
aeth
I was playing around with a potential string representation, but real strings (as opposed to C-strings) need a length prefix and all that metadata essentially needs to move with the world because Brainfuck only has relative offsets, so you can't leave it behind.