freenode/#lisp - IRC Chatlog
4:03:59
aeth
loke: If it was written in any way like how I write CL it probably went like this: "This is ugly but I'll write it in a fairly straightforward way so I can refactor it, perhaps with a nice macro, later." Except the next day, it turns out it wasn't very straightforward at all and it was probably written at the very end of the day or something. And "the next day" might be two years later because big applications are big.
4:04:53
loke
aeth: Possibly. But also remember that Maxima was started in the '60s, and this is part of the function-definition code, which is likely that old.
4:05:29
aeth
(Of course, today's compilers probably would be more efficient if you wrote it in a more straightforward way.)
4:06:42
aeth
loke: Assuming defmacro dates to the 1960s, the thought process was probably very similar. If anything, they probably relied more on macros in old Lisp than in today's Lisp.
4:08:44
aeth
Looks like it's a "function visible from Macsyma". https://www.cliki.net/Naming+conventions
9:35:35
scymtym
slyrus: i pushed an initial version of cxml with a few improvements to https://github.com/sharplispers/cxml
10:57:46
phenoble
I am working through Seibel's `Practical Common Lisp`, and just encountered behaviour in SBCL that differs from the book. I wonder why this is.
10:58:31
phenoble
So, Seibel explains that appending lists using append does not append copies of the passed lists, but essentially re-uses their memory.
10:58:46
phenoble
Such that, if any of those lists is changed afterwards using e.g. setf, the appended ...
11:00:22
phenoble
(progn (defparameter x (list 1 2)) (defparameter y (list 3 4)) (defparameter z (append x y)) (setf (first y) 20) z)
11:03:33
beach
phenoble: Well, obviously, if you re-evaluate the first defparameters, then you get new values of x and y.
11:05:04
phenoble
beach: yes, I am trying to recreate what I did before... entering these commands one after the other does in fact also lead to (1 2 20 4).
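The sharing being discussed can be checked directly at the REPL. A minimal sketch (fresh earmuffed names are used here instead of the session's X, Y, Z, so it can run standalone):

```lisp
;; APPEND copies the cells of its non-last arguments but reuses the
;; last argument's cells as the tail of the result.
(defparameter *x* (list 1 2))
(defparameter *y* (list 3 4))
(defparameter *z* (append *x* *y*))

(eq (cddr *z*) *y*) ; => T   -- the tail of *Z* IS *Y*'s first cons
(eq *z* *x*)        ; => NIL -- *X*'s cells were copied

;; Mutating *Y* is therefore visible through *Z*:
(setf (first *y*) 20)
*z* ; => (1 2 20 4)
```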
11:16:23
phenoble
Related question: is the rationale for having append behave this way a performance consideration - for scenarios where the last list is a long chain of cons cells (each linking to the next, and so on)?
11:17:05
phenoble
Because why else would one construct append to behave in this (otherwise odd?) way, I suppose?
11:17:29
phenoble
Asking because I am still a little uncertain regarding these list structures lisps use internally..
11:18:05
TMA
phenoble: APPEND is very old. by sharing as much as possible, you conserve memory (which was neither plentiful nor cheap in those days)
11:20:05
TMA
phenoble: absent mutation, there is no natural way to tell whether the last list is shared or not.
11:20:11
phenoble
Yes, but apparently things are shared only when dealing with "cons cell lists", I suppose (the way I use that term might reveal my lack of knowledge, sorry). Because in a scenario like (append (list ..) (list ..) (list ..)), only the last list is shared - which does not seem efficient.
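That only the last argument is shared can be verified with EQ; a small sketch with illustrative names:

```lisp
(defparameter *a* (list 1))
(defparameter *b* (list 2))
(defparameter *c* (list 3))
(defparameter *abc* (append *a* *b* *c*)) ; => (1 2 3)

;; Only the last argument's cells appear in the result unchanged;
;; the cells of *A* and *B* must be copied so their CDRs can point
;; onward without mutating the originals.
(eq (cddr *abc*) *c*) ; => T
(eq *abc* *a*)        ; => NIL
```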
11:21:44
TMA
phenoble: [[well, I am lying a bit. you can compare the conses for EQ, but from the point of what the list _contains_ there is no difference]]
11:22:57
TMA
phenoble: in general, you cannot share the non-last lists, because that would entail modifying them
11:24:37
TMA
phenoble: NCONC does the modification. given your definitions of X and Y, try (progn (defparameter q (nconc x y)) (values x y q))
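For reference, a sketch of what TMA's NCONC experiment does, with fresh bindings so it runs standalone:

```lisp
;; NCONC splices destructively: the last cons of *X* is made to point
;; at *Y*, so *X* itself now ends in *Y*'s cells. No cells are copied.
(defparameter *x* (list 1 2))
(defparameter *y* (list 3 4))
(defparameter *q* (nconc *x* *y*))

*x*                 ; => (1 2 3 4) -- *X* was modified in place
(eq *q* *x*)        ; => T         -- the result is *X* itself
(eq (cddr *q*) *y*) ; => T         -- and its tail is *Y*
```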
11:25:43
phenoble
TMA: yes, I am starting to understand the details I think. This is essentially about how to deal with linked lists in different ways.
11:29:20
TMA
phenoble: drawing boxes helped me; this kind of boxes: http://d2vlcm61l7u1fs.cloudfront.net/media%2Ffda%2Ffda36e53-c6d1-47c8-88b7-9a418d0f7e84%2FphpFNhftT.png
11:38:17
phenoble
hmm, when I pass a "list" as an argument to a function, I'd assume that only the first cons-cell is copied. Is that correct?
11:39:42
phenoble
Though I would be surprised if that was not true, because from that it'd follow that the whole list is copied, would it not?
11:41:10
phenoble
mhn, I miss C++'s verbosity concerning these matters (pointers, references, lvalues, rvalues) - its explicit use of those concepts makes it clear what is happening
11:50:50
TMA
phenoble: there is no pass-by-value -- everything is passed by reference (well, the implementation is free to do as it pleases, as always)
11:54:48
phenoble
TMA: I am starting to see why lisp is referred to as much as it is in the context of functional programming. It's a good fit for it w.r.t. performance.
12:49:16
beach
phenoble: Common Lisp uses what I call "uniform reference semantics" which means that every object is manipulated through a reference to it. The calling convention uses call-by-value uniformly, in that arguments are evaluated before they are passed to a function, but the value obtained is a reference.
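Beach's point can be demonstrated: the reference is what gets passed by value, so a callee can mutate the object the caller sees, but rebinding the parameter has no effect outside. A sketch (function names are illustrative):

```lisp
;; Mutating the referenced object is visible to the caller.
(defun mutate (list)
  (setf (first list) :changed))

;; Rebinding the parameter only changes the local variable.
(defun rebind (list)
  (setf list (list :new))
  list)

(defparameter *l* (list 1 2 3))
(mutate *l*)
*l* ; => (:CHANGED 2 3)
(rebind *l*)
*l* ; => (:CHANGED 2 3) -- unaffected by the rebinding
```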
12:50:57
beach
This convention turns out to be the only sane one in a language with automatic memory management. It is much faster than what is possible in a language such as C++, which is why I frequently say that "it is not possible to write a C++ program that is both fast and modular".
12:54:09
beach
To quote Paul Wilson: "liveness is a global property". So, in a C++ program, to make sure that you know the number of references to an object, you must either 1. break modularity so that you know it that way, 2. introduce reference counters, which makes things orders of magnitude slower, or 3. always copy objects so that you know that each one has a single reference, which is also disastrous for performance.
12:56:52
jackdaniel
beach: that was a joke. A: XXX *is* good. B: <hurries with preparations to use XXX>. A: is NOT*. B: <halts the preparations>.
13:05:46
LdBeth
How does it determine the font name? Seems it’s neither by postscript name nor file name of TTF/OTF font.
13:07:31
shka
well, numbers probably are copied, but since they are immutable anyway it makes zero difference
13:08:17
Bike
nothing is implicitly copied with respect to eql, which is all you care about most of the time
13:09:38
shka
anyway, manual memory management is difficult, boring and basically: https://dannydainton.files.wordpress.com/2017/06/angtft.jpg?w=636
13:12:07
Bike
well, C++ copies a lot because it puts things on the stack, so like if you return a complicated object from a function, it has to be copied
13:12:22
Bike
and it puts things on the stack because there's no way for it to put them on the heap itself
13:14:37
Bike
eq is in kind of a weird semantic place because it can distinguish objects that nothing else in the language does, in implementation dependent ways
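Bike's point about EQ versus EQL is easiest to see with numbers; a sketch (the EQ results are implementation-dependent, so no definite answers are given for them):

```lisp
;; EQL is well-defined on numbers: true for the same type and value.
(eql 5 5)     ; => T, guaranteed
(eql 5.0 5.0) ; => T, guaranteed
(eql 5 5.0)   ; => NIL, different types

;; EQ compares object identity; whether two occurrences of the same
;; number are "the same object" is implementation-dependent.
(eq 5 5)                       ; may be T or NIL
(eq (expt 10 20) (expt 10 20)) ; typically NIL -- fresh bignums
```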
13:14:52
beach
shka: "Nothing is ever implicitly copied" is what is known as a "pedagogical lie". And "Uniform reference semantics" has an emphasis on "semantics", i.e. it is AS IF every object is manipulated through a reference, simply because there is no portable way a programmer can determine whether it is true or not.
13:18:11
shka
anyway, I was looking for a small interpreter of FORTH written in easy-to-understand CL code
14:13:57
phenoble
beach: Just reading your comments on our earlier discussion - thanks. I'll definitely keep your reference to memory management in mind in further study.
14:15:17
phenoble
beach: About C++ performance and modularity, though, I'm not sure I see the connection. I see C++ as so flexible that you can essentially do anything you want - but at the price of complexity in using that, let's say, well-performing and modular thing you've created.
14:16:50
beach
phenoble: The problem with any language without automatic memory management is that you can't know what module is keeping references to your objects when you pass an object to such a module.
14:17:04
phenoble
beach: using smart pointers that introduce some book-keeping logic via reference-counting mechanisms is not per se slow
14:18:27
phenoble
beach: ah, you're discussing this still in the context of automatic memory management
14:20:20
beach
Or rather, you do have a choice, which is to break all modularity so that you know whether a module keeps a reference to your object.
14:20:27
phenoble
beach: Well, you could devise your own scheme of making sure that memory gets deallocated at the appropriate times I suppose.
14:22:09
phenoble
beach: ok, I'm not all that deep into language design under these aspects (modularity?). I can neither speak nor think with authority on this. You win :-).
14:22:13
beach
phenoble: But apparently C++ programmers are being lied to. They think they now have garbage collection in their language, and they think the compiler can generate fast code. Since they don't compare with anything else, they believe it.
14:28:05
TMA
beach: to be fair, there is a sentence in the standard that says basically 'if you do this, you will probably break the garbage collector (if you happen to use an implementation that provides one)'
14:36:52
beach
phenoble: So let me just say one more thing. When I program an application in C or C++ (which I haven't done for some time), I use pointers for everything, so that I get uniform reference semantics, and I stick in the Boehm etc. garbage collector so that I don't have to worry about freeing objects that are dead.