libera/commonlisp - IRC Chatlog
7:17:34
Shinmera
there are probably thousands, if not millions, of hours of productivity lost to people constantly bemoaning and arguing this, too.
7:35:23
hayley
Somewhat late to the joke, but if Picolisp ran on 32-bit ARM CPUs, we could have corruption due to type errors on embedded devices too, I think.
7:38:23
hayley
Would call it the C of the Lisp family, but very clever people want to invent something even closer to C still, so I can't joke about that.
7:45:34
lisp123
Alfr: the scale is much lower (number of complainers vs. number of programmers not using CL)
7:48:21
lisp123
On another topic, if anybody is interested in implementing the TeX program in CL, I'm willing to sponsor its development to a certain level
7:48:58
lisp123
Send me a private message if interested. There is C source code available, which is probably easier than reading the book (sorry, D. Knuth)
8:25:04
hayley
I don't. Best ask him in #lispcafe (or ask him to fix his IRC client, so that he can join other rooms).
8:27:24
hayley
It would be nicer to have our discussions on automata in #one-more-re-nightmare rather than scattered between #lispcafe and private messages, too.
8:28:38
hayley
(Now who the hell cloned my repository 96 times per day for the past week? If that's what happens when I submit it to Ultralisp...)
8:34:14
hayley
(That would appear to be the case, as Ultralisp does not detect any of the libraries in the "Telekons" organisation, so I had to input the URL manually, and the site warns "project will be updated only by cron" if one inputs the URL. Darnit)
8:40:12
phantomics
lisp123: As much disgust as I have for many mainstream technologies and the industry surrounding them, some of their dysfunction may have helped us avoid a vastly worse situation than we have now
8:41:20
phantomics
For example, Microsoft's efforts to dominate the computer world were predicated on ubiquitous open-spec hardware. If the PC had failed, all consumer-purchasable computing devices might have ended up being locked-down iOS-like ecosystems
8:46:06
phantomics
And the dysfunction of the Unix-model OSes helped to propel the FOSS movement. If Symbolics had won the desktop computing race and released a near-perfect but proprietary Lisp machine, they might have been bought by IBM with their LispM used as the basis of a centrally-controlled walled garden with software of sufficient reliability that no one would be able to justify attempting to compete with it, leading to perpetual IBM control
8:59:51
lisp123
phantomics: Hard to say where the world would end up...but hopefully in the future it ends up in a better place (although all the lock-downs of technology might make that more difficult)
9:14:57
pjb
phantomics: perhaps. I'd want to run more simulations before time travelling to perform the change.
9:17:04
contrapunctus
lisp123: I'd rather have a word processor where 1. (like LaTeX) users work with semantically structured documents 2. most users don't need to fiddle with the layout, it's handled for them 3. programmatic creation of new data types is easy, but 4. (unlike LaTeX, but like CL) there's no edit-compile-refresh cycle.
9:20:28
lisp123
Still a while away, but it's mostly in-browser with a custom text editor (since I was not happy with the offerings like ProseMirror / Ace Editor / etc.)
9:22:22
lisp123
I will open source parts of it later, it should have a lot of the features of Emacs (naturally, given I use Emacs so much)
9:23:36
lisp123
You may want to look at ProseMirror if you are into these things - it has probably the best implementation in-browser and you can do a lot of what you wrote above
9:29:36
lisp123
LispWorks Editor (and maybe Emacs, but I'm not sure) does it in an interesting way - the content is a flat stream of characters, and then there is a concept of 'property regions' (not sure if correct terminology)
9:30:00
lisp123
where properties are marked against certain points in the stream (e.g. from position 4 to 10, add property bold)
9:30:30
lisp123
Naturally then you have to modify the properties region every time you insert/delete text
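The flat-stream-plus-property-regions idea can be sketched in a few lines. All names here are invented for illustration; this is not the LispWorks editor (or Emacs) API, just a sketch of the bookkeeping described above:

```lisp
;; A buffer as a flat string plus a list of property regions.
;; A region covers the half-open range [start, end), e.g. "from
;; position 4 to 10, add property bold".
(defstruct region start end property)

(defstruct (buffer (:constructor make-buffer (text)))
  text
  (regions '()))

(defun add-property (buffer start end property)
  (push (make-region :start start :end end :property property)
        (buffer-regions buffer)))

(defun insert-text (buffer position string)
  "Insert STRING at POSITION, shifting property regions to match."
  (let ((n (length string)))
    (setf (buffer-text buffer)
          (concatenate 'string
                       (subseq (buffer-text buffer) 0 position)
                       string
                       (subseq (buffer-text buffer) position)))
    ;; This is the maintenance step mentioned above: every region
    ;; boundary at or past the insertion point slides right by N.
    (dolist (r (buffer-regions buffer))
      (when (>= (region-start r) position) (incf (region-start r) n))
      (when (>= (region-end r) position) (incf (region-end r) n)))))
```

For example, after `(add-property b 0 5 :bold)` on "hello world" and inserting ">> " at position 0, the bold region silently becomes 3..8 without touching any tree structure.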
9:33:33
lisp123
Programmatically, I prefer the LW approach (although I actually do something different, for my particular needs); it's easier to manipulate the buffer under that approach vs. having to go in and out of nodes
9:40:18
lisp123
So if you have <b>sometext<i> and this</i></b> -> you can see how property regions make life easier when modifying the properties
9:40:55
lisp123
contrapunctus: the flip side is that tree-based approaches allow for more semantic meaning - you can traverse down the tree for example
12:23:43
hayley
Currently reading Mark Stuart Johnstone's thesis (supervised by Paul Wilson, for those playing along at home) for which he estimates "We think it is likely that the widespread use of poor allocators incurs a loss of main and cache memory (and CPU cycles) of over a billion and a half US dollars worldwide per year" in 1997. lisp123's prior estimate of productivity lost to not using Common Lisp seems quite small in comparison.
12:31:28
beach
Heh, that kind of calculation sounds familiar. I heard it a lot when I spent the year with them.
12:32:13
hayley
It would appear the copy at <https://www.cs.utexas.edu/ftp/techreports/tr97-29.pdf> is cut short though.
12:32:39
beach
Paul Wilson also created a system for compressed memory where paging was done in two levels. The first level was compression and the second was to disk. He did a similar estimate for how much RAM his system would save.
12:33:22
Nilby
hayley: I'm sure that's likely. I think the losses due to many other software practices are even more staggering. But sadly, in practice sbcl hogs more unused memory on my system than even a browser.
12:33:54
hayley
Guess I have to go through my university for access. "Your library or institution may give you access to the complete full text for this document in ProQuest." Yes, that's why I went on ProQuest, thanks.
12:35:28
hayley
Nilby: I don't think I would be able to reproduce that, without configuring SBCL to collect quite infrequently. But some have wanted SBCL to collect more frequently, to reduce the amount of floating garbage.
12:36:34
Nilby
Unfortunately, I have to set dynamic space to physical memory to prevent hard crashes when running out.
12:39:50
beach
As I recall, any system of automatic memory management needs quite a lot of additional memory, i.e., way more than 6%, so that the collector won't be triggered too often.
12:42:17
Nilby
beach: yes, technically I have like 1000s of % overhead, but the active pages are the only thing that causes trouble
12:42:20
hayley
The time taken in garbage collection is inversely proportional to the space overhead allowed for floating garbage, so in a way no particular value is needed. But 100% to 200% space overhead is common.
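That inverse relationship follows from the textbook amortized cost model (purely illustrative numbers, not measurements from any implementation):

```lisp
;; Classic GC cost model: with L bytes live and H bytes of heap,
;; each collection frees H - L bytes, and its work is roughly
;; proportional to L (for a copying collector). Amortized per byte
;; allocated, the work is therefore L / (H - L): inversely
;; proportional to the space allowed for floating garbage.
(defun gc-work-per-byte (live heap)
  (/ live (- heap live)))
```

At 100% space overhead (H = 2L) that is one unit of work per byte allocated; at 200% overhead (H = 3L) it halves, which is why no single overhead value is "needed", only traded off.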
12:43:50
Nilby
unfortunately, it has little to do with real program memory usage, it's just to prevent fatal crashes before "really" running out of memory
12:47:03
hayley
If your maximum heap size is much larger than the memory used, even after including space overhead, it is quite likely most of the heap has no physical memory mapped; last I checked, SBCL does unmap unused pages when possible.
12:49:48
hayley
...right around https://github.com/sbcl/sbcl/blob/master/src/runtime/gencgc.c#L4698-L4706
12:50:14
Nilby
hayley: right, i'm only really concerned with mapped and recently used memory, which the os considers "resident"
12:51:19
hayley
Then your resident memory usage should be somewhere between the size of all live objects in your program, and the maximum heap size.
12:54:12
hayley
Hm, maybe beach is referring to how much memory is needed to do a copying collection in the worst case (that nearly all objects survive and need to be copied). The worst case would require 100% space overhead, but the worst case doesn't happen too often. And generational collection tends to reduce the amount of memory that must be copied at a time, too.
12:55:46
beach
What is the point of unmapping pages? Is it to avoid having them migrate to secondary memory?
12:56:19
hayley
I can't reproduce that. After loading McCLIM, I see 110MB (nitpick: b for bits, B for bytes) of resident memory, and 1212MB of virtual memory.
12:57:02
Nilby
unmapping probably doesn't make that much of a practical difference, it just makes the OS's job a little easier
12:58:00
Nilby
when you multiply the practical overhead of about 6-10% of physical memory by 50-100 processes, it's pretty bad, when it's mostly unused.
12:58:07
beach
Yes, so it doesn't have to migrate it to secondary memory. Is there any other reason?
12:58:57
hayley
For comparison, HotSpot is much lazier about unmapping; cf. <https://shipilev.net/jvm/anatomy-quarks/21-heap-uncommit/>.
13:01:41
hayley
beach: I think not having to migrate garbage pages to secondary memory is enough of a reason. At least, there is a similar problem when using bump-allocation with cache memory; even though everything after the allocation pointer is garbage, attempting to allocate will require garbage to be pulled into cache for seemingly no reason.
13:02:41
hayley
Cliff Click claimed that avoiding the latter phenomenon, by using special instructions in the Azul hardware, reduced the memory bandwidth of Java programs by 30% or so.
13:03:42
hayley
Nilby: In the same conversation with David Moon and Dan Weinreb, Cliff also stated that "swapping is death for GC."
13:05:04
Nilby
yes, old lisps, and lisps on older hardware (like azul) i think were more careful with that
13:05:34
hayley
<https://web.archive.org/web/20100302033837/http://blogs.azulsystems.com/cliff/2008/11/a-brief-conversation-with-david-moon.html> is a fascinating read still.
13:07:53
Nilby
I think it would just be lovely if someone made memory with room for tag bits. You would think current ECC memory could do it, but of course the architecture would have to be modded
13:12:34
Nilby
well, masking out tag bits is not without cost, but also, knowing you always have those bits, you can omit a number of things that Lisp has to do that, say, C doesn't
13:13:42
beach
Nilby: It is rare that you actually have to mask out the tag bits in current systems.
13:15:16
beach
Nilby: If the tag 0 is used for fixnums, then addition still works. And in many architectures, it is possible to include the tag as a small constant offset in memory operations.
13:17:35
hayley
I believe it is quite rare. On many processors a constant offset can be added to an address when performing loads and stores to memory. Suppose we have a CONS tag of 7 (as in SBCL), then the instruction for CAR needs to look like Ra <- load (Rb - 7). Similarly CDR adds 1 (8 bytes offset - 7 byte tag).
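Both tricks can be shown with plain integer arithmetic. The tag values follow SBCL's 64-bit conventions as described above (1 fixnum tag bit, cons lowtag 7); addresses here are ordinary integers for illustration, not real Lisp pointers:

```lisp
;; Fixnums: with a low tag of 0, tagged addition is just machine
;; addition, since (a << n) + (b << n) = (a + b) << n.
(defconstant +fixnum-shift+ 1)
(defun tag-fixnum (n) (ash n +fixnum-shift+))
(defun untag-fixnum (w) (ash w (- +fixnum-shift+)))

;; Conses: with a lowtag of 7, CAR and CDR fold the untagging into
;; the load instruction's constant offset instead of masking.
(defconstant +list-lowtag+ 7)
(defun car-slot-address (tagged)        ; Ra <- load (Rb - 7)
  (- tagged +list-lowtag+))
(defun cdr-slot-address (tagged)        ; Ra <- load (Rb + 1)
  (+ tagged (- 8 +list-lowtag+)))       ; 8-byte offset - 7-byte tag
```

So `(+ (tag-fixnum 3) (tag-fixnum 4))` is already the tagged representation of 7 with no untagging in between, and a cons cell at address 4096 (tagged pointer 4103) has its CAR slot at 4103 - 7 = 4096 and its CDR slot at 4103 + 1 = 4104.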
13:19:05
hayley
Though I have seen SBCL being less clever than it could be, sometimes repeatedly unboxing array indices.
13:20:15
Nilby
well, right now i only have one compiler that's fast enough, and y'all know which one it is, and you can check for yourself, and especially try comparing to the output of gcc/llvm
13:25:50
hayley
gilberth and I came up with the idea to put parallel type-checking (and auto-increment) hardware on an old microprocessor, to see if we could make a low-budget "Lisp machine" in some useful sense of the term.
13:28:38
hayley
Depending on the sort of tag check, though, it is likely that most do not slow down the program much. A branch predictor can easily predict that type checks will not fail, and a superscalar processor can then run the (probably unnecessary) check in parallel with other code.
13:30:12
Nilby
hayley: I think a small set of mods to current archs could be worthwhile. One doesn't have to go "whole hog" like the Lisp machines. Just making the typical Lisp function call take fewer ops, and maybe mild type/tag things, would be great.
13:32:00
Nilby
current toolchain compilers do so many freakish optimizations, it's weird to see what they generate
13:32:09
hayley
My wishlist for hardware features is quite similar to what Azul did: a read barrier in hardware, and hardware transactional memory. Everything else can be Sufficiently Smart Compiler-ed.
13:34:44
hayley
(It is also worth mentioning that, in my domain, some unboxing ops are inconsequential compared to having to "interpret" the matching automaton. So I still win despite the unboxing overhead.)
13:42:26
Nilby
yes, even the increasing use of LLVM as a library can't really be as nice as cl:compile
13:46:24
hayley
If you want some cool architectural changes I'd recommend reading <https://dl.acm.org/doi/10.1145/3297858.3304006>