freenode/#lisp - IRC Chatlog
12:38:41
shka
what are people here using for numerical stuff in cl? i am looking namely for approximated integrals
13:05:24
jkordani
beach: re: What should tip you off here is that function names can also be of the form (SETF <symbol>).
13:06:37
jkordani
so the idea that a function associated with a symbol "is stored in a slot of the symbol" is actually an implementation specific description of "where to access a function"
13:07:36
beach
If there were a function slot for each symbol, there would then have to be two, one for the function named after the symbol and another for the function named (SETF <symbol>).
13:08:29
beach
It's not that important. I guess some implementations would have two "cells" in each symbol.
13:09:22
beach
So implementations that existed then might have a single function cell in each symbol.
13:09:35
jkordani
well that last sentence still doesn't make sense to me, what is a function named (setf <symbol>)
13:11:36
beach
From the glossary: function name n. 1. (in an environment) A symbol or a list (setf symbol) that is the name of a function in that environment. 2. A symbol or a list (setf symbol).
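A minimal sketch of what a function named (SETF <symbol>) looks like in practice; FIRST-ELEMENT is a hypothetical name chosen for illustration:

```lisp
;; The symbol FIRST-ELEMENT names the reader, and the list
;; (SETF FIRST-ELEMENT) names the writer.  Both are function names,
;; and the implementation must store both somewhere.
(defun first-element (seq)
  (elt seq 0))

(defun (setf first-element) (new-value seq)
  (setf (elt seq 0) new-value))

;; (fdefinition 'first-element)         ; the function named by the symbol
;; (fdefinition '(setf first-element))  ; the function named by the list
```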
13:11:56
jkordani
in order to use the right setf for a given symbol it too needs to be stored somewhere, and is also a function
13:12:16
jkordani
one association could simply be as a slot in the symbol, or could be some other implementation
13:14:32
beach
As it turns out, for SICL, I store the functions in a first-class global environment object instead of in the symbols.
13:17:23
beach
No, I create a cell in the environment. When the code is loaded from FASL, or compiled, the code refers directly to the cell in a lexical variable.
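A hedged sketch of the "cell in the environment" idea beach describes, not SICL's actual code: the environment hands out one mutable cell per function name, and loaded code closes over the cell so the lookup happens once at load time rather than on every call. The ENVIRONMENT class, FUNCTION-CELL, and the one-element-list cell representation are all assumptions.

```lisp
;; EQUAL test so that list names like (SETF FOO) work as keys too.
(defclass environment ()
  ((%cells :initform (make-hash-table :test #'equal) :reader cells)))

(defvar *global-environment* (make-instance 'environment))

(defun function-cell (name environment)
  "Return the unique cell for NAME, creating it on first use."
  (or (gethash name (cells environment))
      (setf (gethash name (cells environment)) (list nil))))

;; Loaded code performs the hash-table lookup once, at load time ...
(defvar *call-foo*
  (let ((cell (function-cell 'foo *global-environment*)))
    ;; ... and each call just dereferences the cell in a lexical
    ;; variable; there is no per-call lookup.
    (lambda (&rest args) (apply (car cell) args))))
```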
13:20:54
White_Flame
well, most implementations probably have the symbol-function as a slot on the symbol object itself
13:21:44
Bike
like i think sbcl stores compiler macros and type definitions in a global table rather than the symbol
13:22:32
White_Flame
given the ratio of fdefined symbols vs non-fdefined ones, and the overhead of the hashtable
13:23:45
White_Flame
yes, I have those desires myself. Just not to embark on a total rewrite myself :)
13:24:38
White_Flame
not of a CL implementation, but I like many others have tons of notes on a lisp-derived language
13:24:49
White_Flame
I've implemented quite a few languages of varying sorts for internal commercial use
13:26:08
Bike
honestly, it kind of surprises me how first class environments aren't in anything important. we all used them when we wrote our first scheme implementation
13:27:26
beach
Bike: I think most suggested implementations take a hash-table lookup hit for each function call, and that is unacceptable.
13:27:48
beach
Bike: Scheme people don't care as much because they are often not into performance as much as Common Lisp people are.
13:28:35
Bike
i've implemented a scheme derivative (not mine) with them, but the language overall was outright hostile to compilation
13:29:28
White_Flame
at some point, probably after I'm retired, I might attempt a JIT/dynarec Lisp implementation
13:39:49
Bike
individual atoms would bond with each other and worse, possibly atoms of an attacker. totally insecure
13:40:23
TRS-80
having bit of issue in Emacs trying to set up CalDAV sync using org-caldav, however I feel like my problem is lisp related, missing parenthesis, backtick, etc... because I had it all working last step, but next incremental step now not working. Pasta incoming.
13:42:39
beach
TRS-80: #lisp is better in that respect, but unfortunately (for you) reserved for Common Lisp.
13:42:45
jackdaniel
IRC is an asynchronous protocol, getting an answer may take time (or questions may be not answered) - either way that's the place to ask such questions
13:43:00
jackdaniel
this is a good essay about asking questions btw: http://catb.org/~esr/faqs/smart-questions.html
13:43:05
warweasle
There is no better example of the iterated 3-tank problem than why common lispers use emacs.
13:43:16
beach
I once asked a question in some Linux music channel and it took someone a few weeks to answer. :)
13:44:19
warweasle
TRS-80: This is the dark underbelly of the internet. It was one of the first applications to gain traction and is still in use because it is so simple.
13:45:36
warweasle
You know the 3 tank problem where there are 3 tanks, strong, medium and weak? Well the weak one usually wins.
13:46:00
kaun
Are pre-RTFM questions OK here? I was wondering if visibility of fields in CLOS classes could be controlled?
13:46:26
minion
flip214, memo from jmercouris: Thanks for the heads up, I'll have to consider the tradeoffs in distribution vs execution time (e.g. the binary size vs startup time)
13:47:54
beach
kaun: The package system is used to control visibility of any name that is a symbol, and that includes slot names.
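A small sketch of what beach means: export only the class name and the reader, keep the slot name internal to the package. The POINT package and the %X naming convention are illustrative choices, not a fixed idiom.

```lisp
;; Only POINT and POINT-X are exported; the slot name %X stays internal.
(defpackage #:point
  (:use #:common-lisp)
  (:export #:point #:point-x))

(in-package #:point)

(defclass point ()
  ((%x :initarg :x :reader point-x)))

;; From another package, the exported reader is the public interface:
;;   (point-x p)
;; Reaching the slot directly requires naming the internal symbol with
;; "::", which makes the bypass visible in the source:
;;   (slot-value p 'point::%x)
```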
13:48:41
TRS-80
warweasle: you have no idea how often I say that (I don't actually even have a lawn, but...)
13:49:44
jmercouris
I was just trying to make the point, that I clicked the link, and disagreed with it
13:50:14
jmercouris
The introduction immediately put me off, makes it sound like I have to go through some gang initiation to get support for my problems
13:55:11
kaun
People who only started programming in the 21st century look for visibility control, I think. Not finding it soon enough increases the perception that Lisp isn't suited for typical systems.
13:56:16
kaun
Well, I haven't read all of the guides. Gave up on Touretzky's as too basic, now happily munching through PAIP.
13:57:08
beach
kaun: We get lots of that kind of stuff here, and it gets really boring after a while. It seems directed to some kind of body of people in charge of everything about Common Lisp, and the members of this body are told to do things differently, so that newcomers can be happier. The problem is that there is no such body, so nobody is listening.
13:57:46
beach
kaun: You are just going to have to roll up your sleeves and write one in the style that you would like to see.
13:58:40
kaun
beach: It was an observation. I had to ask to get to know about it. It isn't something to be fixed.
14:00:35
kaun
beach: true. I think the biggest problem for fledgling hobbyists is the terseness of Lisp code; it makes the initial hobby projects look childish.
14:01:41
Xach
I know I've thought more than once "how can something so small be useful?" and it usually is very, very useful.
14:03:37
loke
Hello Xach. I was about to bug you about 1530, but I noted it's been taken care of. :-)
14:03:43
jmercouris
I know I'm doing something wrong, but I'm trying to intern a symbol in the keyword package, from a string
14:04:00
beach
kaun: If you need help, all you have to do is ask. Common Lisp is not mainly meant for fledgling hobbyists, so I can see why they would have problems initially, especially if they expect a toy language.
14:04:24
Xach
jmercouris: as you know, ":" is not part of the symbol name, but a marker between the package name and the symbol name.
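Concretely, that means passing INTERN only the symbol name and naming the KEYWORD package separately; the MAKE-KEYWORD wrapper below is a common convenience, not a standard function:

```lisp
;; The ":" is never part of the name; the package is a separate argument.
(intern "FOO" :keyword)                  ; => :FOO
(intern (string-upcase "foo") :keyword)  ; => :FOO

;; A hypothetical convenience wrapper:
(defun make-keyword (string)
  (intern (string-upcase string) :keyword))
```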
14:06:14
TRS-80
in fact jmercouris issue looks almost exactly like the one I'm wrestling with presently
14:06:50
kaun
beach: the problem is mainstream languages set expectations at toy-language level, you need to gain perspective after learning a bit of CL to realize your hobbies can afford to be a lot more ambitious.
14:07:21
TRS-80
beach: there are packages in Emacs, yes. And a package manager (package.el) And repositories (ELPA, Org, MELPA, etc.). Not sure if that answers your question?
14:08:31
beach
TRS-80: A package in Common Lisp is a mapping from names (strings) to symbols. It is used to control access to "modules".
14:11:25
TRS-80
I seem to be having very similar list related issue (well maybe not, I could be misunderstanding) but since I just had it working prior to latest step I feel like I am missing something simple, some ( or backtick or something. Anyway here it is: https://paste.pound-python.org/show/ue52FvKMIeqWyF8vBmTq/ Line 2 is what worked before, but line 7 now is not working. :/
14:33:17
jmercouris
TRS-80: this is not an elisp question, this is an emacs org-mode configuration question, you should ask on #emacs, there are many many users present
15:12:10
antoszka
so the TERMINFO package might require an update as it only seems to support a single magic type:
15:14:50
beach
I wrote some code for testing the Doug Lea style memory allocator that I wrote the other day. The testing code submits random sequences of requests for allocating or freeing chunks. I use a Markov process so that it is likely that there are long-ish sequences of allocations and long-ish sequences of frees.
15:14:52
beach
Memory is simulated, so each "memory" access takes a long time. I started a test with 10 million operations and it has been running for several hours. By now it has completed almost 3 million operations without any problem.
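A hedged sketch of the Markov-style request generator beach describes: the tester stays in an "allocating" or "freeing" state and only rarely switches, so both runs of allocations and runs of frees come out long-ish. The switch probability, size range, and the ALLOCATE/FREE stand-ins are all assumptions.

```lisp
;; ALLOCATE and FREE are placeholders for the allocator under test.
(defun run-allocator-test (operations &key (switch-probability 0.05))
  (let ((state :allocating)
        (live '()))
    (dotimes (i operations)
      ;; Rarely flip state, so each state persists for long runs.
      (when (< (random 1.0) switch-probability)
        (setf state (if (eq state :allocating) :freeing :allocating)))
      (ecase state
        (:allocating (push (allocate (1+ (random 256))) live))
        (:freeing (when live (free (pop live))))))))
```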
15:17:04
shka
the memory allocator developed at work is utter crap and was still overwriting memory even a few years after the first commit
15:20:13
beach
I guess. But it is more likely that the person who wrote your allocator tried to come up with his or her own algorithm and data structure, and then just got it wrong.
15:29:47
beach
I picked Doug Lea's algorithm and data structure because Paul Wilson, in his allocator survey, found that this one is the best one he tested. Plus, like I said, it is dead simple.
15:53:21
beach
And you might need the documentation (which does not quite reflect what I ended up coding).
15:55:17
beach
shka: They basically conclude that all the research done on memory allocation prior to their survey came to the wrong conclusions because of bad assumptions.
15:58:38
beach
I scanned the Wikipedia articles on SLAB, SLOB, SLUB and they look very specialized compared to what Doug Lea is doing.
16:02:02
beach
I still get objections to my suggested memory management like "but what about fragmentation?", and this is despite the fact that it has been known for 20 years that fragmentation results were caused by incorrect assumptions about program behavior. For a field that is supposed to move as fast as CS, it is moving pretty slowly.
16:04:57
beach
Yes. That objects in the global heap do not move, and that I use an ordinary malloc()/free() style allocator for the "racks".
16:06:09
beach
Not only that. I also use nursery heaps that are compacting, so objects that are promoted to the global heap are very likely to be long-lived.
16:06:36
beach
I can be wrong of course, and only experience can tell, but I feel pretty good about this being the right way.
16:36:38
beach
So in a Common Lisp environment where presumably allocation is more frequent, it is good to have something faster.
16:37:57
beach
Plus the nursery is a sliding collector, so the ages of objects are much more precise than in a semi-space copying collector.
16:38:14
didi
Just one data point, but my current problem isn't with the speed of the GC, but with the space it needs to work. It needs almost double the memory to work.
16:39:07
beach
didi: Yes, that's another characteristic of a traditional semi-space copying collector.
16:40:32
beach
didi: Though, if you look at the literature, basically any collector has this kind of trade-off. If you give it significantly less than twice the amount of memory you actually need, it is going to collect increasingly more often.
16:40:38
JuanDaugherty
CS as a whole moves fast in some areas, particularly hardware but there's the generational turnover
16:42:40
JuanDaugherty
some cultures also move faster than others, compare CL in 1989 vs now to SML then vs haskell now
16:45:25
shka
beach: anyway, i don't see anything wrong with your assumptions and they seem to match my experience closely
17:24:43
phoe
I split the file binary.lisp in two halves and it compiles on SBCL under normal heap sizes.
17:38:38
p_l
JuanDaugherty: a lot of Haskell "speed" was due to explicit disregard for usability outside of academic papers
17:39:34
JuanDaugherty
i take it you mean speed of development, given that it was gradual from the mid 90s
17:42:23
JuanDaugherty
it inflated during the 96-06 period to roughly the current thing, then deepened
17:43:22
JuanDaugherty
cl on the other hand more or less just stagnated, insofar as innovation is concerned, with a solid free implementation and some pkgs being the main thing I know of
17:44:34
p_l
JuanDaugherty: a lot of difference is that Haskell is the baby of academic PL research, you can see similar effort in Racket which has same case
18:17:30
jasom
beach: One thing I've noticed is that some of the literature assumes most allocations are small; e.g. comparisons of space overhead in mark/sweep vs. semispace often assume that the typical allocation size is 2 words (i.e. a cons cell). I also agree with didi, in that a lot of real world code these days runs on VMs, which are often quite tightly memory bound, so going over your memory allocation means swapping,
18:17:32
jasom
which hurts more than more frequent GCs. Also I have often seen SBCL quit with the heap exhausted just because it is so conservative about when to GC that it waits until it is too late...
18:19:25
jasom
Were earlier time-share systems that lisp ran on single address space? VMs tend to have fixed partitioning of RAM, which means CPU time can be time shared, but memory cannot...
18:46:02
p_l
jasom: also, a lot of problems with GC'ed languages these days is at the huge heap sizes
18:47:49
jasom
The graph on the left was not made by me, but it fits my experience with SBCL: https://tech.grammarly.com/assets/articles/lisp-mem.jpg
18:48:25
jasom
for maximizing throughput you put off GC as long as possible, but that has repercussions...
18:49:07
p_l
Azul, Shenandoah and similar make for nice solutions, but they aren't exactly doable for SBCL
18:49:35
jasom
I ran into a similar issue with a wear-leveling flash file system. It would have great throughput, but then just stop responding for 30 seconds. Investigation determined that it was delaying moving data as long as possible to decrease write amplification.
18:50:05
jasom
p_l: well the graph on the right was made after implementing a simple workaround for sbcl (just manually GC on a timer)
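A minimal version of that workaround, assuming SBCL's SB-THREAD and SB-EXT packages; the thread name and the 5-second interval are arbitrary choices:

```lisp
;; Force a full collection every few seconds so the runtime never
;; defers GC until the heap is nearly exhausted.
#+sbcl
(defvar *gc-timer-thread*
  (sb-thread:make-thread
   (lambda ()
     (loop (sleep 5)
           (sb-ext:gc :full t)))
   :name "periodic-gc"))
```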
18:51:18
jasom
p_l: though the true metronome requires an incremental collector; if you set aside X% of CPU time for GCing and you can GC incrementally, you are now realtime.
18:51:55
jasom
or rather you can achieve realtime. True hard realtime systems require a lot of analysis, so it's the job of the various tools to make analysis tractable.
18:51:59
p_l
jasom: well, so long as you ensure that GC runs within specified quanta and not any longer you technically have metronome
18:54:10
jasom
but yes, SBCL with a 1GB heap and a background thread invoking the GC every few seconds can probably be made to meet a hard realtime requirement of 2-3 seconds :)
18:54:30
jasom
though there are a troublingly large number of places where GC is excluded in the runtime.
21:13:35
stacksmith
Good morning. Is there some hook in the pretty-printer to track its progress across the list being printed? For instance to know what object is printed on a fresh line, etc.
23:07:14
specbot
Pretty Print Dispatch Tables: http://www.lispworks.com/reference/HyperSpec/Body/22_bad.htm
23:09:00
pillton
It won't tell you what object is printed on a fresh line, but you could possibly use a custom stream and an entry in the dispatch table to get that information.
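A sketch of the dispatch-table half of pillton's suggestion: install an entry for CONS that records each object just before printing, then defer to the standard pretty-printer. *PRINTED-OBJECTS* is a hypothetical hook, and tracking fresh lines would still need the custom stream.

```lisp
;; Objects get pushed here as the pretty-printer reaches them.
(defvar *printed-objects* '())

(let ((standard-table (copy-pprint-dispatch nil))  ; pristine table
      (table (copy-pprint-dispatch)))              ; table we modify
  (set-pprint-dispatch
   'cons
   (lambda (stream object)
     (push object *printed-objects*)
     ;; Look the printer up in the unmodified table to avoid
     ;; re-entering this very function.
     (funcall (pprint-dispatch object standard-table) stream object))
   0
   table)
  (setf *print-pprint-dispatch* table))
```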
23:19:03
stacksmith
pillton: a browser/debugger of sorts. I would love to not reinvent the wheel with layout/indentation but have some idea about where things wind up.