freenode/#lisp - IRC Chatlog
23:08:42
jcowan
Compilers are "automatic programming", one of the first AI fields. In fact, so are assemblers.
23:48:55
jasom
has anyone tried writing an ASDF plugin that lets you specify a particular lisp file should be loaded with a particular read-table?
23:50:42
jasom
It seems obvious enough that *someone* must have looked into it. I was just mulling over racket's language modules and considering how I might implement something similar in CL.
23:53:32
scymtym
jasom: fare's syntax-control (i think that was the name) branch would probably be a good starting point
23:54:41
jasom
scymtym: that branch had different purposes; I was thinking of just defining a new component type
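[editor's note: a minimal sketch of the component-type idea, untested; READTABLE-SOURCE-FILE, its slot, and its reader are hypothetical names of mine, not an existing ASDF extension]

```lisp
;; Sketch: an ASDF source-file component that binds *READTABLE* around
;; compiling and loading the file, so one file can use its own readtable.
(defclass readtable-source-file (asdf:cl-source-file)
  ((readtable :initarg :readtable :reader component-readtable)))

(defmethod asdf:perform :around ((op asdf:compile-op) (c readtable-source-file))
  (let ((*readtable* (symbol-value (component-readtable c))))
    (call-next-method)))

(defmethod asdf:perform :around ((op asdf:load-op) (c readtable-source-file))
  (let ((*readtable* (symbol-value (component-readtable c))))
    (call-next-method)))
```

A system definition could then presumably list the file as (:readtable-source-file "foo" :readtable 'my-package:*my-readtable*) alongside ordinary (:file ...) components.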
23:55:39
jcowan
IMO TRT is to allow specifying the version of read that the compiler/loader/eval uses
0:04:47
aeth
jcowan: That example actually survived in the HyperSpec. http://www.lispworks.com/documentation/HyperSpec/Body/m_prog_.htm
0:44:03
no-defun-allowed
I wouldn't trust shit on codeproject tbh. People who call themselves coders are probably losers.
0:47:13
aeth
LdBeth: There's an argument there, it's just that the author doesn't know Lisp. e.g. ":type float" is probably not what you'd ever want to use since it's basically (or short-float single-float double-float long-float) and probably won't get you anything over ":type number" (or maybe even ":type T"!). If you're going to use a float in a struct, you probably want a specific type of float. The author probably meant "single-float" and was thinking …
0:50:22
aeth
No surprise that the author primarily knows Clojure. Clojurists seem to have the wrong idea of how Common Lisp works but seem to think that they know how Common Lisp works.
0:52:28
aeth
And, yes, the author has no mention of specialized arrays as an option (or multiple return values!), which is strange.
0:53:52
aeth
(I didn't mean to insult Clojure programmers, although in hindsight I see how it could seem this way... let me just put it differently: I know I don't know Clojure, so I don't blog about Clojure's weaknesses. Plenty of Clojurists who don't know CL seem to blog about CL's weaknesses.)
0:54:43
LdBeth
aeth: I didn't notice the type thing. But I mean, a tuple is closer to a vector, and any sane (Common) Lisper would use a vector rather than a list for it
0:55:44
aeth
It's pretty strange that the author argues that Python, Ruby, and JavaScript are more popular than Lisp because Common Lispers don't use statically typed structs very often.
0:57:59
aeth
LdBeth: There is a point that could be made in a similar essay. There is a long tradition in Lisp of people using lists when they are completely and totally inappropriate. e.g. 2.2.4 in the famous Worse is Better essay. http://dreamsongs.com/WIB.html
1:01:10
LdBeth
aeth: It is sometimes understandable, because the writer wants it to be compatible with other dialects, especially for someone who had used MacLisp or InterLisp for a while
1:07:20
aeth
This means that he tested it in something other than SBCL, one which doesn't do type-checking in the struct constructor. I'm betting CLISP.
1:13:56
jcowan
I'd say he thinks that because he declared the slots to be floats, he assumes the constructor will cast its arguments to floats
1:14:33
aeth
jcowan: But then he prints it out, which should then make that assumption very clearly disproven
1:15:44
oni-on-ion
are they ever out of order? i wouldn't assume any of them optional given the name 'vec3d'
1:16:52
aeth
oni-on-ion: There are two common approaches for this sort of thing (vec3s). One is to use a specialized array, and the other is to use defstruct to define a specialized array rather than to define a struct. The result is basically the same, except the latter automatically creates accessors (but not a type, which is strange because defstruct normally defines a type and the type can still be useful if it's a specialized array)
1:18:23
aeth
oni-on-ion: I use a macro to make my own specialized array instead of using defstruct. I call it define-array-and-type: https://gitlab.com/zombie-raptor/zombie-raptor/blob/b90f23cf6168f892fce8fd980649eaf882662acb/util/array.lisp#L55-67
1:19:04
aeth
e.g. (define-array-and-type vec3 (single-float '(3)) (x y z)) and then (vec3 1f0 2f0 3f0)
1:19:23
jcowan
Tuples make sense in the context of destructuring-bind, but without such things, not so much
1:19:57
jcowan
The point of a tuple is that it is both heterogeneously typed (in the sense that every object in it has a known type) and indexable.
1:23:43
aeth
You could, if you assume tuples to be immutable, use a macro on top of defstruct to build tuples for CL since defstruct also has :read-only for each slot (just set every slot to read-only). Of course, the compiler won't optimize it because it's not a common idiom.
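[editor's note: that could look something like the following; DEFTUPLE is a made-up name, and as aeth says, nothing here makes the compiler treat it as a tuple]

```lisp
;; Hypothetical DEFTUPLE: a defstruct wrapper whose slots are all
;; :read-only, with a positional (BOA) constructor named after the type.
(defmacro deftuple (name &rest slot-names)
  `(defstruct (,name (:constructor ,name ,slot-names))
     ,@(mapcar (lambda (slot) `(,slot nil :read-only t)) slot-names)))

(deftuple pair first second)
;; (pair 1 2) => #S(PAIR :FIRST 1 :SECOND 2)
;; Since the slots are :read-only, defstruct defines no (SETF PAIR-FIRST),
;; so the "tuple" is immutable as far as the accessors are concerned.
```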
1:23:47
jcowan
I have been seeing in my current job how people who have never used dynamically typed languages are actually afraid of code written in them, and I think this is an example.
1:27:21
cgay
I'm not that familiar with defstruct. Can you really use it to implement tuples, which have unbounded length?
1:30:00
jcowan
Terminology is loose, but most people talk about tuples as being of a particular length: there are pairs, triples, quadruples, ....
1:30:10
aeth
cgay: Hmm... If you need unbounded length tuples, you could create the tuple on demand with deftuple if it doesn't currently exist.
1:30:59
jcowan
The result is nominally rather than structurally typed, but that really is a fine detail.
1:31:25
jcowan
"tuple" doesn't necessarily mean "immutable tuple"; in Python it does, but not everywhere.
1:34:01
jcowan
in Maclisp hunks were mutable in content, immutable in size, and had to be a power of 2.
1:35:44
cgay
Interesting because just yesterday I was noticing that in Dylan <stretchy-collection> is not a subclass of <mutable-collection>.
2:24:18
HighMemoryDaemon
Do most people's Lisp programming sessions (at least for Emacs users) start by doing the following? Open Emacs, open your project's ASD file, M-x slime, C-c C-c l (load file to Slime), (ql:quickload 'my-project)
2:36:22
HighMemoryDaemon
oni-on-ion: You don't need to 'C-c C-c l' the ASD file if you are in the folder that contains all your projects?
2:41:53
HighMemoryDaemon
Very nice, so changing directories with eshell wouldn't even matter since QuickLisp looks in the local-projects folder by default.
3:22:55
drmeister
I gave two talks in the last two days in the Bay Area with “Common Lisp” in the title.
3:25:09
drmeister
I think I’ll use the term “flexible compiler for a compiled dynamic language” in the future.
3:33:53
no-defun-allowed
i don't think FOSS writers do much CUDA, it's a proprietary language and only useful on nvidia machines unfortunately
3:39:36
no-defun-allowed
I am looking to make cl-vep use GPGPU, but it'll probably be in OpenCL/oclcl. The interfaces for oclcl and cl-cuda are fairly similar so it could be possible to port it anyway.
3:51:34
no-defun-allowed
funny you bring it up, beach and i have been poking at (presumably) amdgpu crashing and halting our computers
4:05:33
no-defun-allowed
he's not working on it, but we've both had problems with it freezing recently
4:06:16
no-defun-allowed
nothing happens before it freezes, but then it becomes unusable and doesn't respond to the usual TTY changes, SysRq keys, etc
4:06:42
no-defun-allowed
my motherboard has lights which indicate when something went terribly wrong, and the GPU one lights up when that happens though
8:54:44
makomo
does anyone know of anything like MAP-INTO, but that would additionally let me specify the maximum number of elements i want to map over?
8:55:16
makomo
i know the destination is an array, but the source can be *any* sequence, which is why i turned to MAP-INTO, instead of TYPECASE/LOOP
8:55:43
makomo
and i want to COERCE every element from the source to a specific type before assigning it to its corresponding destination element
9:07:09
beach
(block foo (let ((counter 0)) (map-into .. (lambda (destination source) (when (> (incf counter) max) (return-from foo ...)) ...) ...)))
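[editor's note: fleshed out a little, with makomo's COERCE use case; the helper name is hypothetical]

```lisp
;; MAP-INTO with an element cap: returning from the enclosing block
;; aborts the traversal, leaving the rest of the destination untouched.
(defun map-into-at-most (destination function source max)
  (let ((counter 0))
    (block done
      (map-into destination
                (lambda (element)
                  (when (>= counter max)
                    (return-from done))
                  (incf counter)
                  (funcall function element))
                source))
    destination))

;; (map-into-at-most (make-array 5 :initial-element 0)
;;                   (lambda (x) (coerce x 'single-float))
;;                   '(1 2 3 4 5)
;;                   3)
;; => #(1.0 2.0 3.0 0 0)
```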
9:07:27
makomo
oh wait, MAP-INTO has the same behavior regarding multiple sequences as MAPCAR, etc. (stops at the shortest sequence) and it **includes** the destination sequence
9:07:46
makomo
for some reason i thought it wouldn't take the destination sequence's length into account
9:09:01
makomo
beach: pretty much, but i'm trying to understand what you wrote, or rather, the way you interpreted my question
9:09:41
beach
I thought your max number of elements processed was unrelated to the length of either sequence.
9:10:46
makomo
beach: in that case, my max would still have to be lower than either of the lengths, right? otherwise MAP-INTO would stop early
9:13:03
makomo
i went to see what ITERATE has, but it uses ELT, which wouldn't be too good for lists
9:19:42
makomo
beach: in this case my max is the length of the shorter sequence, and that is already handled by MAP-INTO nicely
9:22:16
makomo
beach: one thing i'm wondering though. i wanted to delegate the typecase i would have to write to MAP-INTO. now, it would be the best to do the typecheck only once, and then blast through the sequence knowing what the type is (instead of calling ELT for example which has to do the typecheck every time). is this a correct assumption i'm making, that MAP-INTO usually does such a thing and that ELT has the
9:23:20
makomo
(how they're implement is an implementation detail in general, but would this be the logical thing to do? could a compiler try to optimize an ELT call if it knew the type of the sequence?)
9:24:07
makomo
and i just typo'd the correction as well, oh dear. s/implement/implemented/ instead of the above
9:26:13
makomo
ggole: right and what about calling ELT on an array (instead of AREF for example)? ELT in general has to figure out the type of the sequence first, and it would in general have to do that for every access. could a compiler optimize that into an AREF directly for example?
9:32:21
pjb
makomo: beach: in those cases, I would use a displaced array: (map-into r (lambda (s1 s2) …) (make-array (- end1 start1) :displaced-to v1 :displaced-index-offset start1) (make-array (- end2 start2) :displaced-to v2 :displaced-index-offset start2))
9:33:46
beach
makomo: The compiler would have to replicate the loop for that optimization to take place.
9:35:44
beach
If MAP-INTO dispatches on the type of the sequence(s), then the loop is already replicated. But given that MAP-INTO takes several sequences, that might become complicated to pull off.
9:37:08
jcowan
Inline caching is designed to deal with exactly this problem, based on the fact that at a particular call site, typically only one or a few of the types that a procedure can accept are actually ever passed to it.
9:38:22
jcowan
Combined with dynamic recompiling, this allows a monomorphic call site to be maximally efficient, making the simplest possible test: is this the type we had before? Mildly polymorphic call sites can handle a few types with type dispatching; only intensely polymorphic sites need a full dispatch.
9:39:01
no-defun-allowed
does anyone know of any alternatives to neural networks for machine learning?
9:39:04
jcowan
In this way a procedure like elt can be compiled in two forms, one for lists and one for vectors, with the correct one being called based on call-site dispatching.
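[editor's note: a toy version of the monomorphic inline cache jcowan describes, sketched as a closure; a real JIT patches the call site's machine code rather than using a closure, and all names here are made up]

```lisp
;; Hypothetical per-call-site cache for ELT-style access: remember the
;; last sequence type seen and jump straight to the specialized accessor,
;; paying for full dispatch only on a cache miss.
(defun make-cached-elt ()
  (let ((cached-type nil)
        (fast-path nil))
    (lambda (sequence index)
      (let ((type (if (listp sequence) 'list 'vector)))
        (unless (eq type cached-type)            ; miss: re-dispatch
          (setf cached-type type
                fast-path (if (eq type 'list)
                              (lambda (s i) (nth i s))
                              (lambda (s i) (aref s i)))))
        (funcall fast-path sequence index)))))   ; hit: one EQ test
```

A call site that only ever sees lists stays on the NTH path after the first call, which is the monomorphic case jcowan describes as maximally efficient.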
9:40:24
makomo
beach: yes, that's exactly what i was thinking about -- just like i would have to do if i was going to do it manually
9:42:03
makomo
beach: hm yeah, since MAP-INTO takes a variable number of lists, can it even be done in general? would you have to resort to run-time compilation or something?
9:43:14
jcowan
And the difference between most-efficient list iteration and vector iteration is drastic.
9:43:39
pjb
For lists, you can easily use nthcdr to start from where you want, and count on the length of the other sequences to stop it.
9:45:04
makomo
jcowan: that's very interesting. do any implementations do that (for any function, not necessarily ELT and the like)?
9:45:46
jcowan
No idea. Historically the idea arose in the Smalltalk community and has been used for other OO languages as well.
9:46:15
makomo
i also read about SynthesisOS, an OS that would recompile parts of itself on the fly to make itself more efficient
9:47:52
jcowan
https://gist.github.com/twokul/9501770 explains its use in JavaScript, a language without overt classes.
9:48:57
jcowan
ggole: If you are JIT compiling anyway, inline caching isn't that much more expensive.
9:52:04
pjb
There was an OO system that would also recompile the methods specially for each instance (thus duplicating the code), letting them use relative addressing and other tricks, since they were directly attached to the object data.
9:55:08
heisig
jcowan: Thank you for sharing that. It could be interesting to use inline caching to bypass the discriminating function in CLOS. Though ggole is right, there are certainly some hairy corner cases.
9:55:55
beach
makomo: If I were to write map-into, I would probably define a few special cases, like at most three sequences, and replicate the loop for those.
9:58:26
pjb
ggole: since relative addressing uses shorter opcodes, the methods were smaller and faster, so it was worth the duplication.
10:01:36
heisig
jcowan: But how does inline caching preserve what Alan Kay calls "extreme late binding of all things"? I would think there must be at least two conditional branches: One to check whether the types are the same, and one to check that no metaobjects have changed.
10:02:44
jcowan
Yes, that's true. But such a check is still cheaper than a full dispatch. You can put the check at the head of the method and provide two returns: a successful return, in which the method ran, and a failure return, which triggers recompiling.
10:03:05
pjb
This is the advantage of having the compiler available at run-time: you can recompile and optimize for the current data.
10:03:37
ggole
Usually a check would not be on the type, but on a token that represents all the assumptions that you wished to optimise based upon
10:03:54
ggole
If something invalidates those assumptions, you update the token and you no longer pass the check.
10:05:12
ggole
For example, you might want to assume that a global variable has not been written to, because you make use of its current value in some way.
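[editor's note: ggole's token can be sketched with a generation counter; every name below is made up for illustration]

```lisp
;; Hypothetical invalidation token: writers bump a generation counter,
;; and optimized code revalidates one counter instead of every assumption.
(defvar *x* 10)
(defvar *x-generation* 0)

(defun set-x (value)
  (incf *x-generation*)            ; invalidate anyone who assumed *X*
  (setf *x* value))

(defun make-doubler ()
  "Return a closure that caches (* *X* 2) until *X* is written."
  (let ((seen *x-generation*)
        (cached (* *x* 2)))
    (lambda ()
      (unless (= seen *x-generation*)   ; assumption broken: recompute
        (setf seen *x-generation*
              cached (* *x* 2)))
      cached)))
```

As long as nobody calls SET-X, the closure passes one = test and returns the cached value, which is the cheap check ggole describes.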
10:07:04
jcowan
AFAIK the first JIT compiler was HP APL back in 1977, which compiled APL line by line. The code for each line involved a check that the rank and shape of every variable referenced in the line was the same as before, and the line was compiled to a set of nested loops with fixed endpoints (so-called "hard compilation"). If a line failed the check, it was compiled again, this time assuming only rank constancy ("soft compilation").
10:07:57
jcowan
So since variables polymorphic in rank are not common in any language, most lines were compiled once or at most twice and then ran almost at full AOT speed while keeping the dynamic nature of APL.
10:41:06
heisig
jcowan: Thank you for posting that HP Journal! This dynamic APL compiler is definitely something I should mention in my thesis.