freenode/#lisp - IRC Chatlog
20:20:05
Mqrius
If only everyone had stuck with 8 bit punchcards then we wouldn't have had this problem!
20:26:37
aeth
(1+ (integer-length most-positive-fixnum)) should be the first thing everyone does on a fresh Lisp install. If it's < 32, it's probably 32-bit. Else if it's < 64, it's probably 64-bit. If it's >= 64, I'm jealous of your computer and/or your Lisp.
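aeth's heuristic can be packaged up as a runnable sketch (the function names are invented for illustration; the 32/64 cut-offs are aeth's, and as the next message shows, the guess can be wrong on hosted implementations like ABCL):

```lisp
;; Guess the platform word size from the fixnum width, per aeth's
;; heuristic.  On 64-bit SBCL this reports 63 fixnum bits -> "64-bit";
;; ABCL reports far fewer, which is why the guess is only probable.
(defun fixnum-bits ()
  (1+ (integer-length most-positive-fixnum)))

(defun guess-word-size ()
  (let ((bits (fixnum-bits)))
    (cond ((< bits 32) 32)
          ((< bits 64) 64)
          (t bits))))
```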
20:29:28
aeth
Hmm, that doesn't actually work as universally as I thought it would. The ABCL I have installed gives "32" when I do that.
20:29:55
aeth
_death: There is a most-positive-bignum, and you'll find out when you try to get past it.
20:39:33
phoe
though that is for a simpler case, "https://github.com/phoe/cl-furcadia/blob/master/date-parser/date-parser.lisp"
20:42:31
Xach
https://github.com/xach/usenet-legend/blob/master/date-parser.lisp is what I just copied.
20:45:14
dlowe
rebol has a parser generator built into the language, and I'm starting to think that's admirable. So many problems are parsing problems.
20:49:26
didi
dlowe: So much of what we do is dealing with ourselves. I think of it like an expanding sphere: the volume grows faster than the surface.
20:49:58
Petit_Dejeuner
lol, even the Chinese come in here to ask about the future/popularity of lisp
20:51:43
random-nick
also, how did that chinese person get here in the first place without knowledge of english?
20:55:10
aeth
I'm on Fedora and Fedora basically updates it with the distro and never again (so currently 1.4.2 because Fedora is roughly every 6 months and the upgrade was recent).
20:55:21
aeth
Fedora does that sort of thing with some languages (perl, python, ruby), but not with Lisp (yet)
20:55:52
aeth
The most annoying one is with LaTeX, whose package manager apparently is set up to prefer hundreds/thousands of tiny packages
20:56:01
Petit_Dejeuner
cl-split-sequence, cl-sql, cl-sqlite, cl-trivial-backtrace, cl-trivial-garbage, cl-uffi, cl-uffi-tests, cl-unicode, etc
20:56:26
cgay
I'm not an expert but those do look like traditional Chinese characters, so maybe Taiwan.
20:57:22
phoe
didi: I don't doubt they're nice, I don't doubt that some people who do Linux more than they do Lisp find them useful
20:57:30
Petit_Dejeuner
There's some old airport passenger routing algorithm running on an old debian server that uses apt-get for cl-split-sequence or something.
20:59:38
aeth
I think any properly-run distro is probably going to essentially never update the Lisp installations except during new distro releases (well, unless it's a rolling release distro). Especially SBCL. They tend to have a lot of "minor incompatible change"s, but I guess a good packager would update until the first one. http://www.sbcl.org/all-news.html
21:01:17
aeth
I'm not sure that's very feasible when you start pulling in Lisp, Python, Ruby, Perl, etc., packages from the language package managers, though.
21:07:02
rme
I have the impression that it is usual in python to use a virtualenv (and pip) to isolate yourself from the system python libraries installed by the system package manager.
21:08:16
aeth
Distribution-packaged libraries should basically just be there to handle dependencies for distribution-packaged applications that are written in that language.
21:27:19
rme
where "stuff" is a substitute for another word that starts with "sh" and ends with "t" and has an "i" in the middle.
21:30:28
pjb
rme: the problem with python is different; there is not a single python, but 3 or 6 different pythons. Different languages. They use the same name and same library names, mostly.
21:32:37
aeth
pjb: The big problem with Python is 2 vs 3. It's finally now being resolved 10 years later, but I still run into issues with it. e.g. A program I used to compile with scons now no longer compiles with scons because the scons in my distro is now using Python 3, but the program's stable branch hasn't been (and afaik won't be) updated for that scons.
21:33:23
aeth
Python would probably have been world-dominatingly popular (its strategy of being the newbie language is a good one) if it wasn't for 2 vs 3 issues for literally 10 years now.
21:36:11
Rawriful
I still think, in terms of popularity, Java has the right approach with backcompat, purely because business prefers it. A business has nothing to gain from having to port its application to a new version of a lang.
21:36:53
aeth
pjb: Right, but Python 2.7 code probably won't run in Python 3. Python 3.4 code will run in 3.6, it's just that they want to use 3.6 features. Afaik.
21:38:00
Rawriful
phoe: imo yes, if there was an initial significant commercial hold with cl it would be a lot stronger. But the other thing about java is that it's quite difficult to make bad code *really* bad due to its lack of expressiveness.
21:38:16
aeth
Although, actually, it's not that big of a deal in CL. CL *could* have breaking changes because of how it's structured. e.g. Just have new programs do (:use #:cl2019) or something instead of (:use #:cl) and there you go. Problem solved.
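aeth's idea can be sketched concretely; CL2019 and NEW-PROGRAM are invented names, and a real revised dialect would shadow the symbols it changes rather than re-export everything verbatim:

```lisp
;; Build a package that re-exports all of CL.  A hypothetical new spec
;; would shadow its breaking changes here; old code doing (:use #:cl)
;; is untouched.
(defpackage #:cl2019 (:use))

(let ((pkg (find-package '#:cl2019)))
  (do-external-symbols (s (find-package '#:cl))
    ;; Wrap S in a list so the symbol NIL is handled too.
    (import (list s) pkg)
    (export (list s) pkg)))

;; New programs opt in explicitly:
(defpackage #:new-program (:use #:cl2019))
```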
21:39:54
phoe
companies prefer dumb languages because it seems that it's harder to write unmaintainable code in them
21:41:03
aeth
phoe: Well, I think what they want is replaceable, cheap employees even if they need more. The Common Lisp approach is aiming more for smaller teams that are paid more per person.
21:41:31
Rawriful
phoe: I think yes but I think how bad that gets is more limited in java. and sort of what aeth said as well.
21:41:45
White_Flame
I've been reading some 1960s LISP code (yes, all-caps to be time-appropriate), and man it's a tangle of GO and COND
21:42:21
aeth
(Of course, you can write Java in any language, and you can definitely do that in CL if you ban a few things like defmacro.)
21:42:30
phoe
Rawriful: except if you write unmaintainable code in Java, people might not notice and might even get used to it, and it may even pass reviews because it's so huge and so God damn tangled with all the files, classes and classes of classes.
21:42:51
phoe
it's harder with Lisp because you have much less code to begin with and it's harder to hide shit in there.
21:43:29
Rawriful
phoe: Oh I'm totally not saying it's impossible, but I've seen some shit, and one of my reactions has basically been: "I am so glad that this person did not know about macros"
21:43:35
phoe
and if you do really weird shit like readtable modification, it immediately catches a reviewer's eye.
21:44:02
phoe
Rawriful: I've seen some Lisp shit too. (Un)luckily it was just a person trying to write C in Lisp.
21:44:27
phoe
But it is also nice to ask the person if it works, and then, if it works, show them how this can be reduced to idiomatic Lisp.
21:44:29
Rawriful
phoe: I work in scala and maintain a codebase that is basically written like it's js.
21:44:43
aeth
Petit_Dejeuner: Universities are like giant enterprises in their tech choices and move about as slowly as them.
21:45:08
aeth
(I wonder what percent of university tuition money goes to overpaying for Oracle stuff.)
21:45:59
Petit_Dejeuner
aeth: That's my point. For every open source code base ported to 3, there's several university courses, internal software at a corporation, and research numpy/scipy scripts in 2 that will never be ported.
21:46:42
aeth
Petit_Dejeuner: Yes, but it has finally, after 10 years, gotten to the point where 3 is inevitable. Perl 5 will probably never be replaced by Perl 6. They're even more different.
21:47:39
aeth
Petit_Dejeuner: Meanwhile in CL, a new breaking specification *would* just require (:use #:cl2019) instead of (:use #:cl) unless they changed how packages or the reader worked.
21:48:04
phoe
Why JSON schemas? This could have been done more enterprisefully by using XSLT transforms and SOAP.
21:49:01
phoe
what are people who write code like this doing, uh, are they trying to ensure a job for life or something?
21:49:01
Rawriful
it is, but in a static language, you don't need json schemas to validate your types.
21:49:05
White_Flame
aeth: in theory, it could swap out reader macros or whatever on toplevel (in-package ...)s as well
21:50:28
White_Flame
XML just has that stupid child vs attribute problem, though, which isn't very sexpr-like
21:50:35
phoe
aeth: I know, but XML is way too verbose. We don't need each closing paren to state which opening paren it closes.
21:50:47
Rawriful
I actually quite like xml; I think some of the biggest problems with it were that the parsers people generated for schemas had bad error messages, where that really shouldn't be the case.
21:51:25
White_Flame
and XML has no native datatypes besides text, either. And supports mixing of "body text" with tags
21:51:26
Rawriful
that and people started to try and program in xml which is obviously not going to end well
21:52:00
Rawriful
White_Flame: I thought you could specify numbers in schemas and date patterns etc?
21:58:05
aeth
White_Flame: I think the most difficult but potentially desirable change in a hypothetical future CL would be case sensitivity instead of upcasing, but that would break a lot of code that assumes 'foo is the symbol represented by the string "FOO", and that "-BAR" would generate 'foo-bar and not 'foo-BAR
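The folding behavior aeth describes is controlled by READTABLE-CASE, which also shows why a package swap alone can't fix it (READ-PRESERVING-CASE is an invented name):

```lisp
;; With the default :UPCASE, reading "foo" yields the symbol FOO.
;; With :PRESERVE, it yields a distinct symbol named "foo" -- and every
;; reference to a standard operator would then have to be written in
;; upper case, since the standard symbols are named "CAR", "DEFUN", etc.
(defun read-preserving-case (string)
  (let ((*readtable* (copy-readtable)))
    (setf (readtable-case *readtable*) :preserve)
    (read-from-string string)))
```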
21:59:07
White_Flame
in our communications platform, we even explicitly made symbols case-insensitive
21:59:31
aeth
White_Flame: I'm just using this as an example of something that can't be easily fixed just by using a package other than CL
21:59:31
White_Flame
they should be simple keywords, not carry meaning in their character-by-character syntax
21:59:37
phoe
the above would mean that people would come in and start coding things in caseLikeThis or CaseLikeThis instead of traditional-kebab-case
22:00:58
phoe
"don't warn me about code that I broke purposefully to ensure myself a maintenance job for life"
22:03:47
aeth
You can program Java in any language. I've seen a Java program written in Python before.
22:08:01
aeth
phoe: If Common Lisp ever became popular, there would probably be an Eternal September moment where the community could no longer keep up with the influx of new users, and we'd see things like camelCase and )s on their own line.
22:09:04
Mqrius
Ah, but then we'll let them have their Commoner Lisp, and we'll write proper ANSI Common Lisp
22:10:11
aeth
Shinmera: Sorry, I mean, we'd see things like that as the *majority* rather than an outlier
22:11:13
aeth
The only large community I know of that has a very uniform style is Python's, with their PEP 8 that they constantly point to and that even Python linters can check for.
22:14:18
aeth
Code with tabs is imo archaic. These days, most code is read online without ever being downloaded, via websites that have terrible tab settings (8 spaces and other ridiculous things like that)
22:15:25
aeth
So people don't get the indentation level they want with tabs; they get pretty much the worst case (too much indentation).
22:16:51
yggdrasil_core
I always used spaces because most of what I write is C and I like to align things, and a proper editor makes navigating groups of spaces work like tabs
22:17:24
aeth
Emacs's default for languages like C iirc is to mix tabs and spaces. Tabs for indentation level, and then spaces for the final alignment.
22:21:48
aeth
One of the few style disputes I've seen is ";;;;" vs ";;;" for in-file comment headings. The top of the file heading is ";;;;" and top level comments for things like defun are ";;;" and internal line comments are ";;" and end of line comments are ";". But ";;;; Foo Section" vs. ";;; Foo Section" in the middle of a file is an area where you can have a style argument.
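The convention aeth describes, in file form (a sketch; the depth of the mid-file section heading is exactly the disputed part, and FROB is an invented name):

```lisp
;;;; example.lisp --- top-of-file heading: four semicolons

;;;; Foo Section  (or ";;; Foo Section" -- the disputed case)

;;; FROB doubles its argument: three semicolons for a top-level
;;; comment attached to a definition.
(defun frob (x)
  ;; two semicolons for an internal line comment
  (* x 2))                              ; one semicolon at end of line
```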
22:23:16
Shinmera
The great thing about standards is that the only good ones are the ones you choose
22:27:37
Shinmera
According to https://developer.twitter.com/en/docs/tweets/post-and-engage/api-reference/post-statuses-update "Whether or not to put a pin on the exact coordinates a Tweet has been sent from."
22:33:45
Mqrius
Let's make a true final universal lisp style though. I mean, lisp is great and all, but there's so many brackets! If we just move them away a little, everything will look much cleaner, like python! Take for example this Fibonacci function. Isn't it beautiful? https://pastebin.com/5YkNS2Da
22:37:17
Mqrius
Hmmm yes, and the style should probably be enforced rigorously. I'm thinking the compiler deletes your file if it's wrong, and tells you to do it over.
22:38:22
jasom
we should obviously abandon s-expressions in favor of the more flexible YAML notation for lists
23:54:40
stacksmith
Great. I am having trouble figuring out what the standard says about arguments that are not symbols.
23:55:24
Bike
arguments can be anything, of course. in (+ 2 3) the arguments are numbers. do you mean parameter names?
23:58:28
stacksmith
By parameter I mean lambda-list description in the definition. So if I have (blah &key q &allow-other-keys), there is what I refer to as a keyword parameter :q. However, when I later call this function, the argument q may be actual :q (blah :q 1) or something like (blah (some-function-that-returns :q) 1)
23:59:32
stacksmith
So I was getting SBCL warnings about non-symbol arguments weakening checking... but now I can't reproduce them. So I tried to figure out what the standard says, but got even more confused.
0:00:03
White_Flame
because of APPLY, keyword values need to be able to be runtime scanned from a generic argument list
0:00:12
Bike
indeed you can have the keys be returned from calls or whatever. it's just very unusual to do so, so sbcl complains a little.
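A minimal illustration of the computed-keyword case Bike describes (GREET and KEY-FOR are invented names):

```lisp
(defun greet (&key (name "world"))
  (format nil "hello, ~a" name))

(defun key-for ()
  ;; A keyword computed at runtime -- legal per the spec, but the
  ;; compiler can no longer check the call site at compile time,
  ;; hence SBCL's "weakening keyword argument checking" note.
  :name)

(greet (key-for) "lisp")   ; same as (greet :name "lisp")
```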
0:06:25
White_Flame
oh, I wasn't referring to a specific part of the spec, just what it would take to APPLY a function that took keyword args
0:07:25
White_Flame
if you use DISASSEMBLE in SBCL, you'll often see something like "non-keyword parsing entry"
0:07:56
White_Flame
so there's another path that walks through the arguments one by one at runtime, seeing where in the keyword list things match up
0:08:41
White_Flame
so that needs to be resilient to dynamically passed argument lists (like APPLY) as well as computed keywords, by necessity
0:10:13
stacksmith
That is entirely sensible. If I could only figure out how I got this warning before, I'd feel a whole lot better.
0:12:49
Bike
compile-filing "(defun foo (&key a) a) (defun bar (b) b) (defun baz (x) (foo (bar :a) x))" does it for me.
0:17:31
Bike
"The first argument (in keyword position) is not a constant, weakening keyword argument checking."
0:22:08
jmercouris
Shinmera: In case you are interested, changing it to define-command fixed the indentation somehow magically :D
0:22:30
jmercouris
Shinmera: Seems like indentation on something that doesn't meet those prefixes that you listed results in improper indentation
0:36:40
stacksmith
_death: by comment, do you mean a blog entry somewhere? I am morbidly curious about things that seem to be useful, but usually are a bad idea, like doubly-linked lists...
0:39:29
stacksmith
I've had at least three situations where I thought I needed a doubly-linked list, but after writing gobs of code I always trashed it and went back to conses...
0:42:28
pjb
In those languages, it's customary to use intrusive linked lists, because of typing. (to avoid casting).
0:43:48
stacksmith
Yeah, intrusive lists are another problematic thing. It often seems like a good idea - the objects then have direct access to the remaining list, but it inevitably leads to reinventing the wheel as a good chunk of CL becomes unusable...
0:51:43
jasom
pjb: there are other advantages of intrusive lists besides avoiding casting; C++ can do non-intrusive lists via templating but people still sometimes choose to use intrusive lists
1:00:28
stacksmith
In my case, I wound up with a shitton of code in every case. Linkage maintenance is problematic. Many tricks that work with lists don't work in both directions. And mostly, all the list operations no longer work.
1:04:19
jasom
rme: the generic term is "intrusive data structure" or "intrusive container"; it's where your container and data are implemented in the same structure. So for, e.g., a list of two-dimensional coordinates, instead of having a list containing structures with X and Y fields, you'd have a structure with X, Y, and Next fields.
1:06:08
jasom
The primary advantage comes from the fact that a new element needs only a single allocation, and that the container data and payload data have spatial locality.
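jasom's coordinate example, sketched as an intrusive list in Lisp (POINT, PUSH-POINT, and COUNT-POINTS are invented names; in CL you would normally just use conses, as stacksmith notes later):

```lisp
;; Intrusive list: the NEXT link lives inside the payload struct, so
;; each element costs a single allocation and the link and the data
;; share spatial locality.
(defstruct point
  (x 0.0 :type single-float)
  (y 0.0 :type single-float)
  (next nil))

(defun push-point (pt head)
  "Link PT in front of HEAD; returns the new head."
  (setf (point-next pt) head)
  pt)

(defun count-points (head)
  (loop for p = head then (point-next p)
        while p
        count t))
```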
1:06:57
pfdietz
This reminds me of utility fields that get used in algorithms. For example, a mark bit used by graph traversal algorithms.
1:08:03
jasom
pfdietz: right it's intrusive in the sense that it breaks abstractions, so you lose the benefit of the abstraction but also lose the cost.
1:08:40
pfdietz
I've wanted cl-containers to support a WITH-MAPPING macro, that allows an algorithm to use a utility field in the container's objects. If someone else wants to use that field the WITH-MAPPING macro would just hand them the slower non-intrusive map instead, transparently.
1:09:02
jasom
_death: well if you wanted points that could belong to two containers, you can have two next pointers! (Yes I've seen this in kernel code).
1:12:16
jasom
modern architectures can encourage this a lot both because indirection can be expensive *and* the only thing that matters for memory performance is the number of cache lines you access, not the number of bytes.
1:15:50
jasom
conversely I've seen a structure that was the size of a cache line and held a single word value only (plus padding). This was so that an array of them could be efficiently accessed by multiple CPUs with unshared L1 caches.
1:36:16
rme
I think it was in the context of a (concurrent) ring buffer, where I wanted the read and write pointers to be in different cache lines.
1:43:22
rme
I was trying to see if it was possible to keep up with reading packets from a 10 gig ethernet interface, and I didn't want consumer processes to have to contend for the cache line containing the write pointer, which was being continually bumped by the producer.
1:45:57
rumbler3_
how were you able to specify that different processor's cache lines would have specific data
1:49:54
_death
you make sure each item is properly aligned and of a cache line's size, and have each processor access a different one?
1:55:09
emaczen
what would be faster writing to a vector and then writing to a file, or writing to a string-output-stream and then writing the string-output-stream to a file?
2:34:10
pjb
rme: I/O is when you access secondary storage, not when you write in core memory whatever the API you use to write this core memory.
2:57:15
stacksmith
Bike: White_Flame: re: non-symbol keyword arguments... What do you think of 3.5.1.5 Invalid Keyword Arguments: "It is not permitted to supply a keyword argument to a function using a name that is not a symbol."?
3:00:41
Bike
the function is a function. it doesn't know or care about how what it's been passed was evaluated
3:04:29
pjb
stacksmith: (flet ((foo (&rest r &key) r)) (let ((arguments '(1 2 3 4))) (apply (function foo) arguments))) #| ERROR: Incorrect keyword arguments in (1 2 3 4) . |#
3:05:33
pjb
stacksmith: (flet ((foo (&rest r &key &allow-other-keys) r)) (let ((arguments '(:a 2 foo 4 #:bar 3))) (apply (function foo) arguments))) #| --> (:a 2 foo 4 #:bar 3) |# this is conforming.
3:05:54
pjb
stacksmith: (flet ((foo (&rest r &key &allow-other-keys) r)) (let ((arguments '(1 2 3 4))) (apply (function foo) arguments))) #| --> (1 2 3 4) |# this is NOT conforming! it happens to work in ccl, but it could fail.
3:07:45
pjb
I still think it would be fun and worth the work to implement a strictly conforming CL implementation that implements things in the most unexpected way…
3:12:10
pjb
But remember, you can always just write &rest r, and then parse r in your function as you want.
3:15:58
pjb
for example, you could write: (flet ((foo (&rest r) (destructuring-bind (&rest rr &key a b &allow-other-keys) (loop while (and (cdr r) (symbolp (car r))) collect (pop r) into ks collect (pop r) into ks finally (return ks)) (list rr r)))) (let ((arguments '(:a 0 foo 33 1 2 3))) (apply (function foo) arguments))) #| --> ((:a 0 foo 33) (1 2 3)) |# and it's perfectly conforming.
3:16:07
stacksmith
pjb: Are you sure it's not conforming? I seem to remember somewhere in the spec saying that &rest gets all arguments, and &key picks from that... So a null &key list may not really violate the spec.
3:16:44
borei
continue to work on matrix multiplication optimization, trying to use built-in options like type declarations and optimization
3:18:11
pjb
borei: you can instead write: (unless (= …) (error …)) (rest-of-the-body) so you don't accumulate the sexp levels.
3:18:20
stacksmith
pjb: but, are these keyword arguments? I could make an argument that they are &rest arguments, and &key and &allow-other-keys does nothing here.
3:19:11
borei
in https://pastebin.com/bYpSRBP4 i need some help with understanding starting from line 6
3:19:33
pjb
stacksmith: that's the point, when you have both &rest and &key then the structure of the rest must be that of key value key value…
3:20:36
aeth
It looks like in the disassembly of the allocation of a 2D array in SBCL, there are four allocations, instead of 2 for the equivalent 1D array. I think that might be because initial-contents is only optimized if flat.
3:20:40
pjb
borei: it would be better to ask in #sbcl, but basically it says that it doesn't know what type the slots in elements are.
3:21:18
jack_rabbit
aeth, That's surprising. I thought the 2D arrays were backed by a 1D physical array.
3:21:30
pjb
borei: I would assume that the compiler is able to compile (aref 2darray i j) with way faster code than (aref 1darray (+ j (* i row))).
3:21:37
Bike
on sbcl, a multidimensional array is a "header" sort of structure with a boxed single dimensional array
3:21:50
aeth
There should be no problems with 2D arrays in SBCL after the arrays are allocated. They're just 1D, and the compiler should be able to handle aref in an optimized way if the type is known (not knowing the full type, especially the dimensions, is probably going to be very expensive)
3:22:17
jack_rabbit
iqubic, I know, That's how I knew the answer so quickly. I needed it once and it doesn't exist.
3:22:54
aeth
But if you're concerned about memory, you should not only use 1D arrays, but use ranges within that 1D array, and allocate a bunch of matrices at once.
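One way to sketch aeth's pooling idea is with displaced arrays (the names are invented; note that displaced arrays are not SIMPLE-ARRAYs, so for real speed you would instead pass around the flat pool plus an offset):

```lisp
;; Allocate 8 4x4 single-float matrices in one flat block.
(defparameter *matrix-pool*
  (make-array (* 16 8) :element-type 'single-float
                       :initial-element 0.0))

(defun matrix-ref (pool index)
  "Return the INDEXth 16-element matrix as a view into POOL."
  (make-array 16 :element-type 'single-float
                 :displaced-to pool
                 :displaced-index-offset (* 16 index)))
```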
3:24:11
stacksmith
pjb: It actually says that &rest must be followed by a lambda-list-keyword, which leaves &key, &env and &aux...
3:24:29
pjb
iqubic: all the computable functions already exist in CL; you only have to find their names. (lambda (lol) (apply (function mapcar) (function list) lol)) is the name of the function you want.
3:24:59
borei
pjb: I didn't get that - "but basically it says that it doesn't know what type the slots in elements are"
3:28:07
pjb
Also, you see, this is a problem of using vectors instead of 2d arrays: now you have to tell the compiler that your index computation returns an index…
3:29:15
aeth
pjb: SBCL doesn't trust THE (except maybe at safety 0); it has an internal TRULY-THE that you should never use.
3:29:33
pjb
(since you use it to index an array, it should be a fixnum, so it probably is one, so it should be safe enough to use THE: if the indices or matrix sizes were too big, you'd have problems allocating them).
3:32:45
pierpa
borei: a more basic question is why you believe that your handrolled matrix will be faster than system supplied 2D arrays.
3:33:31
aeth
pierpa: It's possible that some 2D arrays in some implementations aren't very efficient because they're not very common.
3:34:17
pierpa
aeth: in theory there can be implementations that purposefully slow down access to 2D arrays, yes. :)
3:36:02
pierpa
I doubt that in such an implementation a naive user implementation will be faster, anyway
3:36:09
aeth
You don't have to purposefully write slow code, usually it's the other way around and slow code happens when you're not paying enough attention.
3:37:02
borei
I'm solving one problem after another; I know that constant parts of the indexes can be moved out of the loop
3:37:10
aeth
If you're doing linear algebra in pure CL, you probably want to use SBCL, though. (At least, that's the case in 2018.)
3:38:05
stacksmith
borei: keep in mind that specifying types in SBCL generates much smaller code with optimization on, but generally bigger code with it off, as it treats type declarations as requests for extra checks. And of course, thinking about optimization before you need to is a sin.
3:39:04
aeth
stacksmith: But it's still generally a win for arrays and numbers because with that type check, it at least knows to use efficient, inline code instead of not-inline, type-generic code.
3:39:35
aeth
And with arrays there's the added benefit that if you have the full type information (including the dimensions) it can get rid of bounds checks
3:41:29
aeth
I think the only time you need to worry about types is with arrays, sequences, and numbers.
3:41:34
stacksmith
aeth: can't argue with that. If it matters, of course. I find more often than not that it doesn't matter because I wind up rewriting things that I care about a few times to get the algos right.
3:42:39
pierpa
for a matrix multiplication it matters, and there's not much room for algorithmic wiggles, unless he wants to go real fancy
3:43:07
aeth
It probably really depends on what you're doing. If you're working with arrays of floats, probably the most you'd have to change later (other than dimensions) is s/single-float/double-float/ or s/double-float/single-float/
3:43:52
aeth
(Although if you're really writing efficient double-float code you do sometimes have to write some hacky workarounds to avoid consing.)
3:45:57
aeth
(let ((foo (the fixnum (foobar 42)))) (+ foo 42)) is very similar (if not identical) to (let ((foo (foobar 42))) (declare (fixnum foo)) (+ foo 42)) and in either case, tells the compiler that it can assume foo is a fixnum (it can also completely ignore it, or type-check it)
3:46:25
aeth
The advantage of THE is that you don't need the variable, so you can just say (+ (the fixnum (foobar 42)) 42)
3:46:50
aeth
The disadvantage is that you probably want CHECK-TYPE instead of THE or DECLARE. It will more portably do what you probably want.
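The difference in a nutshell (SAFE-INC is an invented name):

```lisp
;; CHECK-TYPE portably signals a correctable TYPE-ERROR on mismatch;
;; THE and DECLARE merely promise the type, and what happens when the
;; promise is broken is undefined.
(defun safe-inc (x)
  (check-type x fixnum)
  (1+ x))
```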
3:47:03
stacksmith
borei: Are you talking about constraining the result? Just return (the fixnum whatever). And for indices, use (the (integer 0 99) ...) or whatever the array size is...
3:47:59
stacksmith
When adding or multiplying, use (unsigned-byte ...) or (integer ..) of a bitsize that fits into a fixnum after the operation.
3:49:09
aeth
Of course, "undefined" (with THE or type declarations) doesn't mean "ignore it, type-check it, or assume it without checking". That just happens to be what you will probably encounter. It could also sell all of your Bitcoin or tweet it to the world if the type doesn't match. Or literally anything else that a computer can do.
3:49:31
stacksmith
Also declaring the array itself as a vector or simple-array generates much tighter code with sbcl.
3:50:45
borei
from what I read, make-array will create a simple-array unless you specify :fill-pointer, :adjustable, or displacement
3:51:21
stacksmith
borei: true enough, but I think you still need to declare it in functions that use the array.
3:53:18
aeth
You should probably use specialization-store for this. https://github.com/markcox80/specialization-store/
3:53:35
aeth
The alternative is making n versions of the exact same thing, just with different types.
3:54:34
stacksmith
Also, if you are really optimizing, you may be able to arrange the loop better using tagbody - sbcl loops sometimes jump around weirdly...
3:54:41
aeth
Or you could just inline something and hope the type information is available where it is being used. But some things produce really massive functions when disassembled, e.g. many matrix operations. So those might not be a good choice for inlining.
3:55:32
aeth
stacksmith: I'm not sure that there are many cases to use tagbody directly instead of do (do even has an implicit tagbody for when you really need it, but good luck getting that to auto-indent correctly because few people use that feature)
3:57:01
aeth
An alternative to using loops for arithmetic is to use macros to essentially loop-unroll. I'm not sure how many iterations it would take before this is a bad idea. It's definitely ideal for small numbers of iterations like 2, 3, 4.
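A sketch of that unrolling macro (DOTIMES-UNROLLED is an invented name; the count must be a literal integer so the expansion can happen at macroexpansion time):

```lisp
(defmacro dotimes-unrolled ((var count) &body body)
  "Like DOTIMES, but COUNT must be a small literal integer; the body is
copied COUNT times at macroexpansion time, so there is no loop at runtime."
  (check-type count (integer 0 16))
  `(progn
     ,@(loop for i below count
             collect `(let ((,var ,i)) ,@body))
     nil))
```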
3:59:09
aeth
Oh, btw, on matrix multiplication in general: https://en.wikipedia.org/wiki/Matrix_multiplication_algorithm
3:59:34
aeth
"Unsolved problem in computer science: What is the fastest algorithm for matrix multiplication?"
4:00:26
aeth
stacksmith: The majority of matrix libraries in CL are graphics libraries, so 4x4 matrices (and possibly smaller ones)
4:02:08
aeth
Are there many native math libraries that aren't CL graphics math libraries? Most, I think, are wrappers.
4:02:18
borei
in quantum mech there are only matrices and vectors, and they have complex elements
4:02:37
aeth
The Maxima CAS is interesting. It has Fortran, and it has f2cl. So I think it might compile all of its Fortran into CL!
4:04:35
aeth
In SBCL: (upgraded-array-element-type '(complex single-float)) => (complex single-float)
4:05:22
aeth
Now I think the next question is if they can be worked with in a non-consing way, just like arrays of double-floats.
4:06:52
beach
rme: Bordeaux is a great place to live, but I don't think you should come here counting on the students.
4:07:49
aeth
And it looks like it's not non-consing in SBCL: (defun foo (v) (check-type v (simple-array (complex double-float) (3))) (incf (aref v 0)) v) (disassemble #'foo)
4:09:12
aeth
By comparison: non-consing in SBCL: (defun foobar (v) (check-type v (simple-array double-float (3))) (incf (aref v 0)) v) (disassemble #'foobar)
4:09:42
stacksmith
borei: I find that when using type declarations and THE, turning optimization off sometimes generates helpful messages about dubious type declarations.
4:10:23
pjb
borei: look how slower your solution is (even with the constant subexpressions out of the loops) than indexing 2D arrays, in ccl: https://codeshare.io/2KwP3z
4:11:30
iqubic
Is there a way to take a dynamically bound string, and set it to the value of my choosing?
4:13:33
pjb
iqubic: strings may be mutable or immutable. When mutable, they may be adjustable or non-adjustable. If they're not adjustable, then you can mutate them but not their length; if they're adjustable, you can mutate both them and their length. Whether the string is bound, and whether it's bound lexically or dynamically, are irrelevant.
4:14:31
pjb
iqubic: notice that in general, one avoids mutating strings, since all the difficulty is in knowing whether it's an immutable or a mutable string!
4:16:20
pjb
(let ((foo (make-string 3 :initial-element #\a))) (print foo) (replace foo "bar") foo) #| "aaa" --> "bar" |#
4:17:18
pjb
borei: try it with small n and big n (eg. n=2, n=3, n=4 and n=20 n=30 n=40 and n=200, n=300, n=400).
4:17:24
aeth
borei: your matrix isn't going to be anywhere near as efficient as built-in 2D arrays because you're using CLOS
4:17:47
aeth
borei: I don't see how a hand-made matrix in CLOS is going to be more efficient than built-in matrices unless you're using a JITed CL
4:18:53
pjb
ebzzry: using a library that gives you access to x11 functions to manipulate the x11 clipboard. Hence #x11
4:22:15
pjb
stacksmith: by reading a literal indeed: '("abc" #.(vector 1 2 3) #.(make-array '(2 2) :initial-contents '((1 2) (3 4))))
4:28:15
stacksmith
pjb: re immutable arrays: What you describe seems to be just literals. SBCL will actually allow modification with a warning - not that it's a good idea. Is there something in the standard?
4:28:45
aeth
borei: (This is just my opinion.) With CL, there's often no need to gather benchmark numbers at all. Write two equivalent ways of doing something, then use DISASSEMBLE. If the disassemblies are identical (as they often are) there is no difference. If the disassemblies are very similar, you might be able to notice the differences. Only if they're very different do you need to benchmark.
4:31:46
aeth
I would not be surprised at all if someone has already written a disassemble diff tool for this.
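aeth's workflow can be tried directly at the REPL; a sketch (two hypothetical equivalent functions, output elided since it varies by implementation and version):

```lisp
;; Two equivalent ways of summing a vector of doubles.
(defun sum-loop (v)
  (declare (type (simple-array double-float (*)) v))
  (loop for x across v sum x of-type double-float))

(defun sum-reduce (v)
  (declare (type (simple-array double-float (*)) v))
  (reduce #'+ v :initial-value 0d0))

;; Compare the two disassemblies by eye; if they are
;; near-identical, there is no difference worth benchmarking.
(disassemble #'sum-loop)
(disassemble #'sum-reduce)
```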
4:33:31
rme
I'll be in town starting on the 23rd. It'd be nice to see you if you're available sometime.
4:33:45
beach
You need to act quickly. Prices of real estate are rising fast because of the increased popularity of the city in recent years.
4:33:48
k-hos
I really do like SBCL's disassemble feature, aeth; I used it the other day to confirm that macros don't have any overhead (at least simple ones)
4:34:26
aeth
k-hos: My favorite part about SBCL's disassembly is that it comments the potentially interesting things, such as constants and allocations and function calls.
4:36:04
rme
beach: From Feb. 23 through March 6th. Not sure where I'll stay; some airbnb place (or places). I'll be looking around for a place to rent for a year to start with.
4:37:21
aeth
(And I use disassembly not to write low-level code but to see how high level the code can get away with being.)
4:37:47
beach
rme: I *might* be busy until end of February. A visitor might be here. After that, I'll be available.
4:40:45
rme
Nothing beats coming to look around in person, I am thinking. Hopefully, I won't be totally priced out.
4:51:59
aeth
Bike: Should I move my length 16 array to a (4 4) array for my 4x4 matrices? You seem to know more about this than anyone else in the previous conversation.
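For reference, the two representations under discussion hold the same data in the same row-major order; only the indexing differs. A sketch:

```lisp
;; "Fake" 4x4 matrix: a flat length-16 vector, indexed by hand.
(let ((m16 (make-array 16 :element-type 'single-float
                          :initial-element 0f0)))
  (setf (aref m16 (+ (* 4 1) 2)) 5f0) ; row 1, column 2
  (aref m16 6))                       ; => 5.0

;; True (4 4) array: same element, same row-major position.
(let ((m44 (make-array '(4 4) :element-type 'single-float
                              :initial-element 0f0)))
  (setf (aref m44 1 2) 5f0)
  (row-major-aref m44 6))             ; => 5.0
```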
4:52:26
aeth
It does look like SBCL can declare them dynamic extent, so they do seem to be useful enough for my uses.
4:53:48
aeth
It looks like dynamic extent still works with (list (list ...) (list ...) (list ...) (list ...)) for the initial contents (which I assume an inline make-matrix-4x4 would produce)
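What aeth describes might look like this (a sketch; the initializer is what a hypothetical inline make-matrix-4x4 could expand into):

```lisp
;; Stack-allocating a temporary 4x4 matrix via DYNAMIC-EXTENT in SBCL;
;; the nested LIST calls for :initial-contents can be stack-allocated too.
(defun use-temporary-matrix ()
  (let ((m (make-array '(4 4)
                       :element-type 'single-float
                       :initial-contents
                       (list (list 1f0 0f0 0f0 0f0)
                             (list 0f0 1f0 0f0 0f0)
                             (list 0f0 0f0 1f0 0f0)
                             (list 0f0 0f0 0f0 1f0)))))
    (declare (dynamic-extent m))
    ;; M must not escape this extent.
    (aref m 0 0)))
```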
5:11:08
aeth
In the length 16 version it's all MOVSS, MULSS, and ADDSS. In the dimension (4 4) version, it has a lot of MOV instructions added
5:12:11
beach
borei: I suggest you introduce a variable using LET that you sum into. That way, you can declare the type of it.
5:12:47
beach
borei: It is complaining that when you use the SUM LOOP keyword, the type of the implicit variable it sums into is not known to be DOUBLE-FLOAT.
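beach's suggestion in code (a sketch; vsum is a hypothetical name):

```lisp
;; Instead of LOOP's SUM keyword (whose implicit accumulator the
;; compiler may not know is a DOUBLE-FLOAT), sum into an explicitly
;; typed variable introduced with LET.
(defun vsum (v)
  (declare (type (simple-array double-float (*)) v))
  (let ((acc 0d0))
    (declare (type double-float acc))
    (loop for i below (length v)
          do (incf acc (aref v i)))
    acc))
```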
5:14:50
aeth
Bike: It's probably slightly slower, but it's really hard to tell because my computer is too fast and any loop will optimize it away
5:14:53
beach
borei: Well, you have the LOOP keyword DO at the end of the line instead of at the beginning.
5:19:24
beach
borei: You can "play with" the variables first, but you should work on the code layout before resubmitting.
5:39:26
borei
single-threaded application on a consumer-grade CPU with performance a bit less than 1 Gflop on double-precision numbers
5:41:28
aeth
Lisp isn't slow, but it doesn't have as many optimizations as a more popular AOT compiled language. Mainly because it doesn't have as many humans optimizing it as a more popular AOT compiled language.
5:41:46
aeth
But I suspect a lot of those optimizations are pointless when you're just doing linear algebra.
5:45:29
borei
i got additional and huge motivation for a big dive into the LISP ecosystem, huge thanks for the support!
5:46:45
fiddlerwoaroof_
borei: if you're doing matrixy-stuff and have an Nvidia GPU, you might look into Gabor Melis's libraries, like mgl-mat
5:47:21
fiddlerwoaroof_
I've never got them to work because I don't have the hardware, but they look really nice for machine-learning applications
5:47:25
aeth
k-hos: Well, I think Lisp games would benefit the most from a concurrent, parallel, real-time garbage collector.
5:49:16
borei
fiddlerwoaroof_: actually i have a question about ML: can you recommend any entry-level materials about ML+Lisp to read?
5:49:16
aeth
fiddlerwoaroof_: The most important part in the GC buzzwords for games is real time, afaik.
5:49:39
fiddlerwoaroof_
aeth: The built-in java GCs are really advanced, because of the JVM's position in the industry
5:49:48
aeth
Games are usually on some strict schedule (or two, if the rendering framerate is separate from the logic one). e.g. 0.01 second ticks or something.
5:51:52
fiddlerwoaroof_
borei: CL isn't the best yet for modern ML, just because we don't have the libraries that python/scala have. However, Gabor Melis managed to do pretty well using his own libraries
5:57:59
aeth
k-hos: Most games wind up using a garbage collected scripting language in addition to a non-garbage collected engine language.
5:59:33
fiddlerwoaroof_
Hmm, it looks like redhat has recently opensourced a low-latency GC: https://wiki.openjdk.java.net/display/shenandoah/Main
6:00:09
k-hos
personally I would prefer no gc and have some kind of ref counting mechanism built into the language
6:03:54
k-hos
it's the pauses and inconsistency that make GCs generally bad for games, but there is little point in arguing about a theoretical language anyway
6:04:18
beach
k-hos: There are very good real-time, parallel, concurrent garbage collectors these days.
6:11:09
aeth
jackdaniel: Yes, because I think there are literally dozens of people on IRC who want one.
6:26:42
aeth
It does look like the (16) array is consistently around 2.8 kilocycles compared to the (4 4) being consistently around 3.8 kilocycles. So that is quite a drop in performance, even though I have to use SBCL's cycles to even notice it at all (0.000 seconds)
6:27:20
aeth
I've played with different ways of expressing it, e.g. pulling out all of the array accesses into variables first
6:29:00
aeth
Maybe the next thing I could try is using a loop instead of doing all of the adds/multiplies,
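The loop variant aeth mentions might look like this for the flat length-16 representation (a sketch; m4* is a hypothetical name):

```lisp
;; 4x4 matrix multiply over flat row-major single-float vectors,
;; using nested loops instead of fully unrolled adds/multiplies.
(defun m4* (a b result)
  (declare (type (simple-array single-float (16)) a b result))
  (dotimes (i 4 result)
    (dotimes (j 4)
      (let ((acc 0f0))
        (declare (type single-float acc))
        (dotimes (k 4)
          (incf acc (* (aref a (+ (* 4 i) k))
                       (aref b (+ (* 4 k) j)))))
        (setf (aref result (+ (* 4 i) j)) acc)))))
```

Whether this beats the unrolled version is exactly the kind of question the disassemble-and-compare approach above is meant to answer.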
6:38:35
aeth
It looks like the 4x4 matrix is always slower than the fake 4x4 matrix because SBCL can't access items from it as well. It generates an extra MOV for every one or two MOVSS, which adds up a lot in matrix multiplication. Both on retrieving the values and on setting the destination matrix.