freenode/#lisp - IRC Chatlog
23:35:43
rmrenner
Turns out this question has been raised on stackoverflow: https://stackoverflow.com/questions/28148147/parenscript-and-implicit-return
4:32:51
vtomole
beach: How do you make time for those kinds of things? I always feel like there's too much to learn. It's overwhelming.
4:36:25
vtomole
beach: Ha! The paper mentions that there is a difference between a heap and a heap data structure. It has always bothered me that they share the same name.
4:58:47
beach
vtomole: By the way, I was very serious about what I said the other day. If you want to become very good at what you do, then you should begin a systematic activity of learning about all this stuff, like garbage collection, compilation, computer architecture, etc.
4:58:48
beach
I personally think that every software developer should know about these things just in order to avoid making the wrong decisions.
5:00:11
vtomole
beach: I took an architecture class last semester, but that's all the experience I have with it. We studied MIPS.
5:00:45
beach
It may take a decade or so to get up to speed, but then the incremental work to keep it up is not so bad.
5:01:42
beach
Many people avoid locks because they think they are expensive. With SLE (speculative lock elision), they are much cheaper these days.
5:02:10
vtomole
I can put a decade into this stuff; I'm 20. It's hard for me to read books. I like to start writing code ASAP.
5:03:39
beach
Unless you have direct access to the author, sometimes reading a book or a paper is the only solution. You might as well try to get used to it.
5:04:07
Bike
ARM is also pretty complicated, but you probably won't break down sobbing if you try to write an instruction decoder
5:07:22
beach
Goodman was (probably still is) at the University of Auckland when I was visiting there for a year. I audited his advanced architecture class.
5:12:39
beach
That's very hard to say. It varies a lot. And it is not concentrated time. More than a month full-time, I would say.
5:15:21
vtomole
beach: It's a Common Lisp channel, but I did find that the Go garbage collector has sub-millisecond pauses.
5:16:17
beach
For most applications, that's good enough, yes. Even for applications involving sound, that's usually good enough.
5:18:05
beach
Good question. I don't know. SBCL's garbage collector is not very recent, and it is pretty basic technology from several decades ago.
5:18:44
pjb
(loop for i from 0 for c = nil then (cons nil c) when (zerop (mod i 1000000)) (setf c nil))
5:19:55
beach
vtomole: This paper: http://metamodular.com/sliding-gc.pdf suggests a GC technique that has bounded pauses.
5:27:17
vtomole
pjb: when I run that code I get: "#1=(SETF C NIL) found where keyword expected getting LOOP clause after WHEN current LOOP context: WHEN (ZEROP (MOD I 1000000)) #1#."
6:13:19
beach
(loop repeat 1000000000 for i from 0 for c = nil then (cons nil c) when (zerop (mod i 10000000)) do (setf c nil) (time (gc)))
6:14:24
beach
For that particular load, it looks like the garbage collector takes between 0.1 and 0.3 seconds.
6:18:31
beach
Anything with dynamic extent can go on the stack. Whether the SBCL compiler allocates such things on the stack is a different matter. A simple compiler would not bother.
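beach's point about dynamic extent can be sketched as follows (SBCL assumed; whether the compiler actually stack-allocates is at its discretion, and the function names here are made up for illustration):

```lisp
;; A minimal sketch of a DYNAMIC-EXTENT declaration. The lists A and B
;; never escape DOT3, so a compiler such as SBCL is permitted to put
;; their cons cells on the stack instead of the heap.
(defun dot3 (ax ay az bx by bz)
  (let ((a (list ax ay az))
        (b (list bx by bz)))
    (declare (dynamic-extent a b))
    ;; A and B are dead once DOT3 returns; their storage can be
    ;; reclaimed by simply popping the stack frame.
    (+ (* (first a) (first b))
       (* (second a) (second b))
       (* (third a) (third b)))))
```

A simple compiler may ignore the declaration entirely; the program's meaning is unchanged either way.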
6:21:00
jasom
and because garbage is so cheap in an SBCL nursery collection, you can actually lose mutator throughput by declaring things dynamic-extent
6:23:10
beach
vtomole: One more thing you need to know: with a copying collector, the time for a garbage collection is proportional to the size of live data. It does not touch any dead objects.
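The claim that collection time tracks live data (not garbage) can be probed with a rough sketch like this (SBCL-specific; `sb-ext:gc` is SBCL's API, timings are machine-dependent, and `time-full-gc` is a made-up helper name):

```lisp
;; Rough illustration: a copying collector only touches live objects,
;; so a full GC with lots of live data takes longer than one with
;; little live data, regardless of how much garbage was created.
(defun time-full-gc ()
  "Return the wall-clock duration of one full GC, in internal time units."
  (let ((start (get-internal-real-time)))
    (sb-ext:gc :full t)
    (- (get-internal-real-time) start)))

(defvar *live* nil)

(setf *live* nil)                    ; almost no live data
(time-full-gc)                       ; should be quick
(setf *live* (make-list 10000000))   ; ~10 million live conses
(time-full-gc)                       ; same call, visibly longer
```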
6:23:29
jasom
vtomole: besides things declared with dynamic extent, in sbcl the stack is used for storing temporary values, call frames, and foreign stack values. On some architectures there are separate control and data stacks in sbcl; x86 does not do this because prior to amd64 it lacked sufficient registers (and even at 16 registers, you get a performance hit for having split stacks)
6:24:48
jasom
the advantage of a split stack is that one of the two stacks (and half the registers) was dedicated to storing tagged values, while the other was dedicated to storing unboxed values, so you have precise roots for garbage collection.
6:31:32
beach
jasom: You can have precise roots anyway. The compiler "knows" which registers contain tagged values and which ones don't. It is just that the SBCL compiler does not make that information available to the garbage collector.
6:32:10
jasom
beach: right; doing so efficiently seems challenging, particularly in multithreaded situations.
6:33:27
beach
It would be sufficient to have a table that maps values of program counters to a description of register contents.
6:34:51
beach
There ought to be such a mapping from PC values to contents of stack frame and registers.
6:36:34
beach
It has to be true, or else the code that follows the PC value would occasionally do the wrong thing.
6:38:58
beach
In fact, I am surprised that this hasn't been done a long time ago, and I never see it mentioned when SBCL is being talked about. Instead, I hear the argument about splitting the registers.
6:44:06
jasom
It would be a lot of work; sbcl was doomed by its history on targets with a lot of registers (I think SPARC had the fewest of its pre-x86 targets)
6:47:40
beach
Yes, I remember register windows. For performance, one had better not take advantage of them. :)
6:49:06
jasom
The architecture allows up to 640 64-bit registers according to Wikipedia; that would imply a rather large TCB
6:53:28
beach
It is good to know that I am not the only one thinking that SBCL is "doomed" (in that it will be hard to make it evolve according to new architectures, new GC techniques, new generic dispatch techniques, etc.). That thinking is the reason why I started SICL.
6:55:26
jasom
I looked into implementing a relatively simple incremental GC on top of SBCL; it's not ... impossible ... but it's certainly quite challenging, and it wouldn't get you as close to a concurrent one as I would like since a large amount of SBCL's code just disables the GC for a window.
6:56:29
jasom
but any non-stop-the-world, moving GC breaks the assumption that EQ is a pointer comparison
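The assumption jasom mentions can be made concrete (standard Common Lisp; a moving collector must preserve these identity results even as objects change address):

```lisp
;; EQ tests object identity -- conceptually, a pointer comparison.
(let ((a (list 1 2 3)))
  ;; An object is always EQ to itself.
  (assert (eq a a))
  ;; Two freshly consed lists with equal contents are distinct objects,
  ;; so EQ is false even though EQUAL is true.
  (assert (not (eq (list 1) (list 1))))
  (assert (equal (list 1) (list 1))))
```

If the collector can move `a` while another thread is in the middle of comparing addresses, a naive pointer comparison can give the wrong answer, which is why concurrent moving collectors need extra machinery here.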
6:58:25
jasom
beach: the point is that sbcl spends a lot of time transitioning in-and-out of safe points, which is bad for low-pause GCs
6:58:43
beach
JuanDaugherty: LLVM is essentially a C virtual machine. drmeister is having a number of problems making it work for a totally different language.
7:00:01
jasom
JuanDaugherty: llvm doesn't solve any of the GC issues (though it is much less GC hostile than it was when drmeister started)
7:00:08
beach
JuanDaugherty: For example, LLVM cannot move code, so Clasp cannot garbage collect functions. When a new function is defined, the code just grows.
7:01:45
JuanDaugherty
i will be surprised if it inhibits zero address arithmetic but life is full of quelle surprise
7:02:24
jasom
If you generate PIC/PID and are willing to suffer one level of redirection for all procedure calls, that *should* be sufficient
7:03:28
jasom
actually llvm ought not to generate its own data on any non-ARM target (lots of ARM compilers put immediate values alongside code and access them with PC-relative loads)
7:04:41
jasom
but there is probably a performance hit for that vs. having the moving GC apply relocations; position-independent code is often slower, as is the extra level of indirection for function calls.
7:18:14
beach
Why would position-independent code be slower? I know it is in the context of C and dynamic libraries, but in the context of Common Lisp?
7:18:49
beach
Also, the indirection for function calls is probably intrinsic to a dynamic language like Common Lisp. I don't see a way to get around it.
7:22:17
beach
Speaking of which, it is interesting to see how people choose a language such as C or C++ for reasons of performance, but over time, things have gotten more complicated. In particular, accessing a global variable used to be very fast, but now you have to go through the GOT, using PC-relative addressing, etc.
7:25:41
White_Flame
you can avoid indirection for function calls if you halt the world, patch all references, and resume whenever a function is changed
7:27:19
White_Flame
well, functions don't usually change often once an application is deployed & running, so if the function call overhead really is an issue (spoiler: it probably isn't) it could in theory work well
7:27:55
White_Flame
heck, 100% indirect function calling was used on the PS2 in whichever lisp-based game that was
7:29:35
White_Flame
jasom: position-independent code is often _smaller_, because it does not need absolute 64-bit references to point to things
7:37:16
beach
White_Flame: I wouldn't think there would be many such literals in a Common Lisp program, would there? I know SBCL goes to great lengths to make NIL (and maybe T as well) literals like that, but other literals I can think of would more likely be part of the instruction instead. Am I wrong?
7:38:18
White_Flame
but yeah, I guess most of the stuff would just be heap pointers to data, not local pointers
7:39:49
White_Flame
(defun x () '(1 2 3)) does disassemble down to MOV RDX, [RIP-86], so it is position-independent local data
7:40:12
White_Flame
looks like it assembles into a 32-bit signed offset, still a savings over a 64-bit heap pointer
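White_Flame's observation can be reproduced directly at the REPL (SBCL on x86-64 assumed; the exact offset and register in the output will differ by version and machine):

```lisp
;; Compile a function returning a constant list, then inspect the
;; disassembly: the literal is loaded with a PC-relative address,
;; e.g. something like MOV RDX, [RIP-86].
(defun x () '(1 2 3))
(disassemble #'x)
```

The constant list lives next to the code, so a 32-bit RIP-relative displacement suffices where an absolute reference would need a full 64-bit pointer.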
7:59:26
beach
Hmm, what if the list contains (say) symbols? Wouldn't the GC have to modify the list if those symbols move?
8:51:09
Posterdati
Please, is there any sufficiently well-done documentation for cl-prometheus? Thanks!
9:13:14
pjb
beach: functions that are called in the same compilation unit and that are not declared notinline don't have to go through indirections.
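A related, stronger form of pjb's point is an explicit INLINE proclamation (a sketch; `square` and `sum-of-two-squares` are made-up names, and an implementation is free to ignore the request):

```lisp
;; With an INLINE proclamation in effect before the definition,
;; a compiler may open-code SQUARE at its call sites instead of
;; going through the global function cell.
(declaim (inline square))
(defun square (x) (* x x))

(defun sum-of-two-squares (a b)
  ;; If SQUARE is inlined here, redefining SQUARE later will not
  ;; affect this already-compiled caller -- the usual trade-off
  ;; between call overhead and redefinability.
  (+ (square a) (square b)))
```

This is exactly why White_Flame's next remark holds: piecemeal redefinition can leave stale inlined copies behind, so recompiling a whole buffer is safer.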
9:14:05
White_Flame
hence it tends to be much safer to have SLIME recompile an entire buffer than to do piecemeal evaluations
9:15:10
pjb
Well, I would like the implementation to optimize only with some optimization level (like speed>1 and debug=0).
9:15:44
pjb
Also, compilation units may encompass multiple files (but I don't think ASDF lets you specify that).