freenode/#lisp - IRC Chatlog
14:01:41
Xach
I know I've thought more than once "how can something so small be useful?" and it usually is very, very useful.
14:03:37
loke
Hello Xach. I was about to bug you about 1530, but I noted it's been taken care of. :-)
14:03:43
jmercouris
I know I'm doing something wrong, but I'm trying to intern a symbol in the keyword package, from a string
14:04:00
beach
kaun: If you need help, all you have to do is ask. Common Lisp is not mainly meant for fledgling hobbyists, so I can see why they would have problems initially, especially if they expect a toy language.
14:04:24
Xach
jmercouris: as you know, ":" is not part of the symbol name, but a marker between the package name and the symbol name.
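A minimal sketch of what jmercouris is likely after (the upcasing mirrors how the default reader normalizes :foo to :FOO; skip it if your strings are already in the right case):

```lisp
;; Intern a symbol in the KEYWORD package from a string.
;; INTERN is case-sensitive, so "foo" and "FOO" name different
;; symbols; STRING-UPCASE matches the default reader, which turns
;; :foo into :FOO.  The ":" itself is never part of the name.
(intern "FOO" :keyword)                  ; => :FOO
(intern (string-upcase "foo") :keyword)  ; => :FOO
```

ALEXANDRIA:MAKE-KEYWORD wraps essentially the same INTERN call, though it does not upcase for you.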
14:06:14
TRS-80
in fact jmercouris's issue looks almost exactly like the one I'm wrestling with presently
14:06:50
kaun
beach: the problem is mainstream languages set expectations at toy-language level; you need to gain perspective after learning a bit of CL to realize your hobbies can afford to be a lot more ambitious.
14:07:21
TRS-80
beach: there are packages in Emacs, yes. And a package manager (package.el). And repositories (ELPA, Org, MELPA, etc.). Not sure if that answers your question?
14:08:31
beach
TRS-80: A package in Common Lisp is a mapping from symbols to objects. It is used to control access to "modules".
14:11:25
TRS-80
I seem to be having a very similar list-related issue (well, maybe not; I could be misunderstanding), but since I just had it working prior to the latest step, I feel like I am missing something simple, some ( or backtick or something. Anyway, here it is: https://paste.pound-python.org/show/ue52FvKMIeqWyF8vBmTq/ Line 2 is what worked before, but line 7 now is not working. :/
14:33:17
jmercouris
TRS-80: this is not an elisp question, this is an Emacs org-mode configuration question; you should ask in #emacs, there are many, many users present
15:12:10
antoszka
so the TERMINFO package might require an update as it only seems to support a single magic type:
15:14:50
beach
I wrote some code for testing the Doug Lea style memory allocator that I wrote the other day. The testing code submits random sequences of requests for allocating or freeing chunks. I use a Markov process so that it is likely that there are long-ish sequences of allocations and long-ish sequences of frees.
15:14:52
beach
Memory is simulated, so each "memory" access takes a long time. I started a test with 10 million operations and it has been running for several hours. By now it has completed almost 3 million operations without any problem.
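The Markov idea can be sketched like this (not beach's actual code; the function name and the 0.9 stay-probability are made up for illustration):

```lisp
;; Two-state Markov chain over request types.  The chain stays in
;; its current state with probability STAY, so runs of consecutive
;; allocations (or frees) have expected length 1/(1 - STAY).
(defun request-sequence (n &key (stay 0.9))
  (loop with state = :alloc
        repeat n
        collect state
        when (> (random 1.0) stay)
          do (setf state (if (eq state :alloc) :free :alloc))))
```

With stay = 0.9 the average run is ten requests, which gives the long-ish sequences of allocations and frees described above.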
15:17:04
shka
the memory allocator developed at work is utter crap and was still overwriting memory even a few years after the first commit
15:20:13
beach
I guess. But it is more likely that the person who wrote your allocator tried to come up with his or her own algorithm and data structure, and then just got it wrong.
15:29:47
beach
I picked Doug Lea's algorithm and data structure because Paul Wilson, in his allocator survey, found that this one is the best one he tested. Plus, like I said, it is dead simple.
15:53:21
beach
And you might need the documentation (which does not quite reflect what I ended up coding).
15:55:17
beach
shka: They basically conclude that all the research done on memory allocation prior to their survey came to the wrong conclusions because of bad assumptions.
15:58:38
beach
I scanned the Wikipedia articles on SLAB, SLOB, SLUB and they look very specialized compared to what Doug Lea is doing.
16:02:02
beach
I still get objections to my suggested memory management like "but what about fragmentation?", and this is despite the fact that it has been known for 20 years that fragmentation results were caused by incorrect assumptions about program behavior. For a field that is supposed to move as fast as CS, it is moving pretty slowly.
16:04:57
beach
Yes. That objects in the global heap do not move, and that I use an ordinary malloc()/free() style allocator for the "racks".
16:06:09
beach
Not only that. I also use nursery heaps that are compacting, so objects that are promoted to the global heap are very likely to be long-lived.
16:06:36
beach
I can be wrong of course, and only experience can tell, but I feel pretty good about this being the right way.
16:36:38
beach
So in a Common Lisp environment where presumably allocation is more frequent, it is good to have something faster.
16:37:57
beach
Plus the nursery is a sliding collector, so the ages of objects are much more precise than in a semi-space copying collector.
16:38:14
didi
Just one data point, but my current problem isn't the speed of the GC, it's the space it needs to work: almost double the memory.
16:39:07
beach
didi: Yes, that's another characteristic of a traditional semi-space copying collector.
16:40:32
beach
didi: Though, if you look at the literature, basically any collector has this kind of trade-off. If you give it significantly less than twice the amount of memory you actually need, it is going to collect increasingly more often.
16:40:38
JuanDaugherty
CS as a whole moves fast in some areas, particularly hardware but there's the generational turnover
16:42:40
JuanDaugherty
some cultures also move faster than others, compare CL in 1989 vs now to SML then vs haskell now
16:45:25
shka
beach: anyway, i don't see anything wrong with your assumptions, and they seem to match my experience closely
17:24:43
phoe
I split the file binary.lisp into two halves and it compiles on SBCL under normal heap sizes.
17:38:38
p_l
JuanDaugherty: a lot of Haskell "speed" was due to explicit disregard for usability outside of academic papers
17:39:34
JuanDaugherty
i take it you mean speed of development, given that it was gradual from the mid 90s
17:42:23
JuanDaugherty
it inflated during the 96-06 period to roughly the current thing, then deepened
17:43:22
JuanDaugherty
cl on the other hand more or less just stagnated insofar as innovation is concerned, with a solid free implementation and some packages being the main things I know of
17:44:34
p_l
JuanDaugherty: a lot of the difference is that Haskell is the baby of academic PL research; you can see a similar effort in Racket, which is in the same situation
18:17:30
jasom
beach: One thing I've noticed is that some of the literature assumes most allocations are small; e.g. comparisons of space overhead in mark/sweep vs. semispace often assume that the typical allocation size is 2 words (i.e. a cons cell). I also agree with didi, in that a lot of real world code these days runs on VMs, which are often quite tightly memory bound, so going over your memory allocation means swapping,
18:17:32
jasom
which hurts more than more frequent GCs. Also I have often seen SBCL quit with the heap exhausted just because it is so conservative about when to GC that it waits until it is too late...
18:19:25
jasom
Were earlier time-share systems that lisp ran on single address space? VMs tend to have fixed partitioning of RAM, which means CPU time can be time shared, but memory cannot...
18:46:02
p_l
jasom: also, a lot of the problems with GC'ed languages these days show up at huge heap sizes
18:47:49
jasom
The graph on the left was not made by me, but it fits my experience with SBCL: https://tech.grammarly.com/assets/articles/lisp-mem.jpg
18:48:25
jasom
for maximizing throughput you put off GC as long as possible, but that has repercussions...
18:49:07
p_l
Azul, Shenandoah and similar make for nice solutions, but they aren't exactly doable for SBCL
18:49:35
jasom
I ran into a similar issue with a wear-leveling flash file system. It would have great throughput, but then just stop responding for 30 seconds. Investigation determined that it was delaying moving data as long as possible to decrease write amplification.
18:50:05
jasom
p_l: well the graph on the right was made after implementing a simple workaround for sbcl (just manually GC on a timer)
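That workaround might look roughly like this (SB-EXT:GC and SB-THREAD:MAKE-THREAD are real SBCL APIs; the function name and the 5-second default are illustrative):

```lisp
;; SBCL-specific: force a full GC on a timer from a background
;; thread, so collection never waits until the heap is nearly
;; exhausted.  Tune INTERVAL against your allocation rate.
(defun start-gc-timer (&optional (interval 5))
  (sb-thread:make-thread
   (lambda ()
     (loop (sleep interval)
           (sb-ext:gc :full t)))
   :name "gc-timer"))
```

SB-EXT:MAKE-TIMER with SB-EXT:SCHEDULE-TIMER would be another way to arrange the same thing.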
18:51:18
jasom
p_l: though the true metronome requires an incremental collector; if you set aside X% of CPU time for GCing and you can GC incrementally, you are now realtime.
18:51:55
jasom
or rather you can achieve realtime. True hard realtime systems require a lot of analysis, so it's the job of the various tools to make analysis tractable.
18:51:59
p_l
jasom: well, as long as you ensure that GC runs within the specified quanta and not any longer, you technically have a metronome
18:54:10
jasom
but yes, SBCL with a 1GB heap and a background thread invoking the GC every few seconds can probably be made to meet a hard realtime requirement of 2-3 seconds :)
18:54:30
jasom
though there are a troublingly large number of places where GC is excluded in the runtime.
21:13:35
stacksmith
Good morning. Is there some hook in the pretty-printer to track its progress across the list being printed? For instance to know what object is printed on a fresh line, etc.
23:07:14
specbot
Pretty Print Dispatch Tables: http://www.lispworks.com/reference/HyperSpec/Body/22_bad.htm
23:09:00
pillton
It won't tell you what object is printed on a fresh line, but you could possibly use a custom stream and an entry in the dispatch table to get that information.
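One way pillton's suggestion might be shaped, as a sketch: a dispatch-table entry that records every list the pretty-printer visits (the *VISITED* variable and the priority are illustrative; knowing the exact column or fresh line would still need a custom stream on top, e.g. via gray streams):

```lisp
;; Record each list the pretty-printer visits, then hand off to
;; PPRINT-FILL, one of the standard list printers.
(defvar *visited* '())

(set-pprint-dispatch
 'cons
 (lambda (stream object)
   (push object *visited*)
   (pprint-fill stream object))
 1)   ; priority above the default entries

;; To keep the change local, install the entry in a copy:
;; (let ((*print-pprint-dispatch* (copy-pprint-dispatch))) ...)
```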
23:19:03
stacksmith
pillton: a browser/debugger of sorts. I would love to not reinvent the wheel with layout/indentation but have some idea about where things wind up.