freenode/#lisp - IRC Chatlog
20:44:41
jeosol
to add some more context: the problem of going back to C++ is that most of what I was able to do is because of features in CL (and CLOS), I may need to dumb down offerings or not have a feature-for-feature match moving to C++.
21:04:06
Bike
CLHS gives the example (adjoin '(new-test-item 1) '((test-item 1)) :key #'cadr) => ((test-item 1))
21:04:51
Bike
The actual description of ADJOIN, however, says it works by the usual two-argument test rules, which include the key not being called on the item
21:05:46
Bike
Because otherwise it would be doing (eql '(new-test-item 1) (cadr '(test-item 1))), which is false, so the result would be ((new-test-item 1) (test-item 1))
21:06:09
Bike
The example implementation of ADJOIN in terms of MEMBER explicitly calls the key on the item because MEMBER doesn't
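The definition Bike is referring to can be sketched roughly like this (a sketch of the CLHS-style expansion, not the exact spec text; `my-adjoin` is a made-up name):

```lisp
;; The key is applied to ITEM explicitly, because MEMBER only applies
;; it to the elements of LIST.
(defun my-adjoin (item list &key (key #'identity) (test #'eql))
  (if (member (funcall key item) list :key key :test test)
      list
      (cons item list)))

;; (my-adjoin '(new-test-item 1) '((test-item 1)) :key #'cadr)
;; => ((test-item 1))
```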
21:11:25
sjl
> If :key is supplied, it is used to extract the part to be tested from both item and the list element, as for adjoin.
22:33:36
kristof
Especially when half of Kent Pitman's examples will have a return value of "IMPLEMENTATION DEFINED"
2:33:16
beach
mfiano: Yes, I intend to implement the GC in Common Lisp. Not in portable Common Lisp obviously. Some primitives for accessing memory are needed.
2:34:19
beach
Bike: I think Cliki has a place where issues with the Common Lisp HyperSpec are listed. It would be good to add this issue to that list.
3:50:10
beach
Suppose I want to implement a memory allocator using the technique of Doug Lea. That technique uses a sequence of "bins" sorted by chunk size. In general, having a large number (like 512 or more) of bins is advantageous for performance.
3:50:18
beach
However, there is a case that might need particular attention, namely searching for the first (i.e. best fit) bin that has at least one chunk that is greater than or equal to a particular size that is being allocated. Finding the first potential bin can be done with binary search, and that would be cheap no matter how many bins there are.
3:50:25
beach
But that first potential bin might have no chunks in it. So one would have to find the first non-empty bin starting with the first potential bin. It would seem that a sequential search is called for here.
3:50:26
beach
But suppose I maintain a global bitmap of non-empty bins, and a bitmap for each bin with all zeros below the position for this bin and all ones for positions greater than or equal to this position. I can then AND the global bitmap and the bitmap for this bin and search for the first bit position that is set in the resulting bitmap.
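A sketch of that bitmap trick (the names are made up for illustration): AND the global non-empty-bins bitmap with the per-bin "positions greater than or equal to this bin" mask, then find the lowest set bit of the result, which is the first non-empty candidate bin.

```lisp
(defun first-nonempty-bin (nonempty-bitmap at-least-mask)
  (let ((candidates (logand nonempty-bitmap at-least-mask)))
    (unless (zerop candidates)
      ;; (LOGAND X (- X)) isolates the lowest set bit of X.
      (1- (integer-length (logand candidates (- candidates)))))))

;; (first-nonempty-bin #b10100 #b11100) => 2
```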
3:58:06
beach
Even better: Given a number N between 0 and 64, how can I generate a 64-bit bitmap with the N least significant bits set and the others cleared?
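One standard way to do that in CL (a sketch; `low-mask` is a made-up name):

```lisp
;; Return a bitmap with the N least significant bits set, for 0 <= N <= 64.
;; (ASH 1 N) momentarily exceeds 64 bits at N = 64; LDB masks the result
;; back down to a 64-bit field.
(defun low-mask (n)
  (ldb (byte 64 0) (1- (ash 1 n))))

;; (low-mask 0) => 0
;; (low-mask 3) => #b111
```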
4:06:04
kruhft
and propagator techniques like message passing like Sussman does in SICP for his logic system
4:07:04
kruhft
and then hopefully some sort of switch to silicon compiler comes around to fill in the details
4:11:10
minion
kruhft: SICL: SICL is a (perhaps futile) attempt to re-implement Common Lisp from scratch, hopefully using improved programming and bootstrapping techniques. See https://github.com/robert-strandh/SICL
4:11:16
minion
kruhft: Cleavir: A project to create an implementation-independent compilation framework for Common Lisp. Currently Cleavir is part of SICL, but that might change in the future
4:11:50
jasom
I also saw on the clisp mailing list that someone got cleavir working with clisp, targeting clisp bytecode.
4:12:10
beach
kruhft: Clasp is using Cleavir, but also a few techniques that I designed specifically for SICL, namely fast generic dispatch and some more.
4:15:06
kruhft
vtomole: this is an implementation of the cpu i want to architect: https://github.com/burtonsamograd/explain
4:18:18
kruhft
i don't think there could be a 'simplest' cpu, but a SISC (single instruction set computer) would probably qualify
4:18:34
kruhft
something like Decrement and Branch if Zero would be a pretty simple computer implementation
4:19:13
kruhft
vtomole: if you're interested in working at the gate level in C++, you might find this library of mine interesting: https://github.com/burtonsamograd/anser
4:22:55
jasom
beach: is that to make FFI interop simpler, or just to explore a less used corner of the GC search space?
4:23:56
beach
jasom: It turns out that it makes FFI simpler, but that's not the main reason. It especially makes it easier to use a concurrent and parallel collector. Application threads don't have to be informed about objects that moved.
4:28:23
jasom
beach: yeah, I've been experimenting with an incremental collector (that might be possible to turn into a concurrent collector) that is just a simple semi-space that protects the FROM space with the MMU, but even then you end up complicating things like EQL.
4:29:10
kruhft
vtomole: i'm not sure exactly. there was a lot of research into it back when it was found out... like in the '50s or something
4:33:27
beach
jasom: When I made that decision, it was based on my being convinced about the results by Wilson: http://www.cs.northwestern.edu/~pdinda/ics-s05/doc/dsa.pdf page 14.
4:33:49
jasom
beach: to randomly switch back topics to the GF dispatch, what is the rack used for (other than the stamp, which is clear)?
4:38:44
jasom
After experimenting with my GC, it occurs to me that (other than fragmentation avoidance) the main advantage of a compacting collector is improving throughput, since allocation is a single pointer addition, and for copying collectors, nursery collections are often very fast. For concurrent collectors I think the costs to throughput of managing moving pointers may be greater than the savings in allocation
4:40:33
jasom
SBCL's allocator is significantly higher throughput than most malloc/free implementations on real-world workloads. The latency is what gets you though
4:40:59
beach
I am using a sliding collector for the per-thread nursery with no back pointers to manage.
4:42:29
jasom
beach: yes, basically the maximum time that the mutator is blocked by memory allocation.
4:44:08
beach
In my scheme, the application threads are only paused for a nursery collection. Allocation is usually done in the per-thread heap, which is just bumping a pointer.
4:46:29
beach
Also, by using a sliding collector for the nursery, I have a very precise idea of the relative age of objects.
4:46:29
beach
A traditional copying collector, i.e., one that promotes objects based on whether they survive the GC, runs the risk of promoting objects that have just been allocated and that will die soon. The sliding collector minimizes that risk.
4:47:17
Zhivago
You can simplify the process by having short lived processes where nothing survives the first generation. :)
4:47:30
jasom
if you make a pointer from the global heap to the head of a largish list in the nursery, will that require the entire list to be copied at once?
4:48:51
jasom
Zhivago: you joke, but I've seen C compilers that never free memory because they will exit after compiling a single file.
4:49:55
jasom
It's a reasonable strategy up to the point where more complex optimizations are added that require significant temporary space, and then half the compiler needs to be rewritten :(
4:49:56
Zhivago
If your computations are algorithmic, then space is bounded by the size of the current inputs, which means that given sufficient resources collection can always be avoided.
4:50:38
Zhivago
Then you break it up into multiple processes, and collection is replaced by output emission.
4:52:41
Zhivago
The main difficulty with GC is when there's no notion of consumer or justification for data to exist, which means that everything that might be used by anything needs to be maintained.
4:53:40
Zhivago
Once you say, "we are producing output for A, B, and C, from X" then it becomes much simpler.
4:53:58
jasom
Well this is where you see languages that disallow arbitrary graphs of pointers in order to allow all deallocations to be statically computed.
4:57:37
fe[nl]ix
beach: the original Unix machines were memory constrained and using lots of tiny processes reduces heap fragmentation across the whole machine because exit() is a garbage collection
4:58:31
beach
fe[nl]ix: I see. I don't think I have ever seen that as an explicit reason for processes.
4:58:57
beach
fe[nl]ix: If I can find the source of that information, I'll cite it in my LispOS paper. :)
4:59:07
fe[nl]ix
I believe that originally there weren't even daemons, and a typical machine would have the init process and one shell per user
5:01:27
fe[nl]ix
and the concept of a unix process as a separate and isolated address space came before Unix
5:25:15
beach
jasom: I am working on a fairly detailed specification of the planned SICL garbage collector(s). Would you be interested in reading it once I am done?
5:26:10
beach
Excellent. It is not imminent. There are a lot of details to work out. But I'll keep you in mind. Thanks.
5:49:34
drmeister
Is it a reasonable optimization to bind the function slot of symbols that are not fboundp to a function that prints that the function slot is unbound?
6:01:29
drmeister
Right - it should signal an error - but how do you pass the name of the function to the error? Do you bind a closure that is closed over the name of the symbol?
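One possible shape of that closure (a sketch; the function name here is made up): it closes over the symbol and signals UNDEFINED-FUNCTION, which takes a :NAME initarg.

```lisp
(defun make-unbound-function-cell (name)
  ;; Returned closure accepts any arguments and signals an
  ;; UNDEFINED-FUNCTION error naming the symbol.
  (lambda (&rest args)
    (declare (ignore args))
    (error 'undefined-function :name name)))
```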
6:24:19
jgkamat
hi, I currently have a server app I'm developing in CL, I can reload updates to it just fine when I'm using slime, but how would I go about doing it on the server (to have zero downtime)
6:26:33
jgkamat
I'd rather not do that, I was hoping for some lisp that I could run (I could make a server command that runs it) to reload all the files. Maybe I could (ql:quickload :my-project) again? Would that do what I want? :P
6:26:48
jackdaniel
so you make swank listen for connections from localhost and tunnel your connection so that it does indeed come from localhost
6:28:04
jgkamat
ah that makes sense, I'll look into it. I was planning on SSHing in, attaching to the tmux of the running session and reloading like that
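The swank-plus-tunnel setup jackdaniel describes can be sketched like this (assuming the swank system is loaded on the server; port and host names are placeholders):

```lisp
;; On the server: start swank bound to the loopback interface only,
;; so it is never exposed to the network.
(swank:create-server :port 4005 :interface "127.0.0.1" :dont-close t)
```

Then from a local shell, forward the port with something like `ssh -N -L 4005:localhost:4005 user@server` and use `M-x slime-connect` to localhost port 4005.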
7:00:08
jgkamat
another unrelated question: I have an error caught in a handler-case – how do I print the stacktrace of the error? I can print the type of the error with format.
7:03:04
pillton
(apropos "backtrace") on SBCL shows a number of functions. e.g. (sb-debug:print-backtrace).
7:04:10
aeth
I've always just found things with SLIME tab completion, which breaks down at the hyphens
7:06:28
pillton
(quicklisp:system-apropos "backtrace") shows that there is a trivial-backtrace system as well. I have never used it though.
7:09:25
jackdaniel
jgkamat: trivial-backtrace is decent, but you don't want handler-case – by the time its handler runs you have already lost the backtrace
7:10:24
jackdaniel
use handler-bind, which runs before the stack is unwound (I remember there was some discussion about technical details of why that claim is imprecise, though I don't remember them) - from a practical perspective, if you print the backtrace from handler-bind you'll get the backtrace of where the error occurred
7:10:25
jgkamat
hmm, my use case is essentially sending errors within a function somewhere else (in addition to the top level). Ie: logging errors to a file. Is there a better way to do that jackdaniel?
7:10:45
jackdaniel
when you print a backtrace from handler-case you'll see the backtrace of the stack where the handler-case itself is, not where the error was signaled
7:11:43
jackdaniel
it has log:error, log:info etc., and it may be configured to use syslog, a file, the console, and other appenders simultaneously
7:12:47
jackdaniel
yes, you may log anything, errors, warnings, info, debug, trace and configure level of verbosity
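A sketch of what that looks like with log4cl (the exact appender configuration varies; `port` and `condition` here are placeholder variables):

```lisp
;; Hypothetical log4cl usage.
(log:config :info)                                ; set global verbosity
(log:info "server started on port ~D" port)
(log:error "request failed: ~A" condition)
```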
7:14:47
jackdaniel
jgkamat: one more thing – keep in mind that handler-bind won't handle the error by itself – if the handler returns normally, the error continues to propagate
7:15:33
jackdaniel
one way around that is using (block nil (handler-bind … (return nil) …)) or writing custom handler-case* macro which expands into that
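A sketch of that BLOCK + HANDLER-BIND pattern (`log-backtrace-somehow` and `risky-operation` are made-up placeholders): the handler runs before the stack unwinds, so a backtrace of the error site is still available, and it then transfers control out instead of returning normally.

```lisp
(block handled
  (handler-bind ((error (lambda (c)
                          (log-backtrace-somehow c)   ; backtrace still intact here
                          (return-from handled nil)))) ; handle by unwinding
    (risky-operation)))
```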
7:15:34
jgkamat
yah, that makes sense, that's actually what I want, so I might choose handler-bind if I'm too lazy for log4cl :P
7:16:37
jackdaniel
I usually call log4cl's error logging from the appropriate handler body of the aforementioned handler-case*
7:20:34
jackdaniel
(because logging and error handling are two orthogonal things if you think about it)
7:28:58
shrdlu68
Slightly off-topic question: does Clojure owe its popularity to integration with the Java ecosystem? Why hasn't ABCL seen the same level of success?
7:31:21
jackdaniel
shrdlu68: my personal opinion is that Clojure hooked into many trendy topics: functional programming, JVM ecosystem etc. so it felt fresh
7:31:54
jackdaniel
ABCL on the other hand is traditional CL, which doesn't have recipes for how you should program (you have a dozen tools for that) – also CL has some historical heritage which may not be appealing to everyone
7:32:40
jackdaniel
so (once again – IMO!) Clojure is more popular because it had more appeal – "modern" language for "alternative" programmers
7:32:55
jackdaniel
of course after the initial traction many other aspects came into play, e.g. more contributors
7:33:20
shrdlu68
Does someone have a link to a certain work that someone created in CL, something to do with an implementation of a purely functional "framework" in CL?
7:35:59
jackdaniel
(I mention lparallel/lfarm/clazy because these are libraries tackling other "trendy" topics)
8:21:12
beach
drmeister: I don't do it for every symbol, since symbols don't have a function cell. I do it in the first-class global environment when code that might call the function with that name is loaded (what I call "tied") into that environment.