freenode/#lisp - IRC Chatlog
Search
2:02:16
asarch
Is there any way to get the PNG file of the (com.informatimago.common-lisp.picture.cons-to-ascii:draw-list *values*)?
2:04:53
pjb
asarch: otherwise, there are ascii-art web services that will convert that into an SVG or PNG…
2:06:59
no-defun-allowed
pjb: Well yes, screenshotting text is annoying, but no need to hang people who do that.
2:13:32
pjb
You can use: (progn (with-open-file (*standard-output* "/tmp/aa.txt" :direction :output :if-does-not-exist :create :if-exists :supersede) (com.informatimago.common-lisp.picture.cons-to-ascii:draw-list *values*)) (uiop:run-program "ditaa /tmp/aa.txt") (uiop:run-program "open /tmp/aa.png"))
2:44:31
LdBeth
It seems to be the only comprehensive book on macro assemblers http://www.davidsalomon.name/assem.advertis/AssemAd.html
5:50:43
no-defun-allowed
I think there was a front-end processor file system which was probably for booting and a LMFS which was for user data.
5:52:20
no-defun-allowed
The MIT CADR stuff got released to http://www.unlambda.com/cadr/, and I guess the (relatively) newer Symbolics machine OS is still available if you can find a lispm or can spend $5,000 on the emulator.
5:59:34
remexre
hm, I think I might have a memory leak in my interfacing w/ an FFI lib on SBCL; is there an easy way of determining where it's coming from?
6:00:28
White_Flame
HiRE: the emulator was around way back, ever since x86 started getting faster than the lisp hardware
6:01:09
HiRE
White_Flame, I see so it was there to maintain compatibility with old hardware during the transition
6:01:42
HiRE
my AI professor routinely talked about symbolics machines but I was surprised when it was mentioned someone might still be using one :P
6:03:08
beach
HiRE: The way the tools are integrated cannot be replicated on Unix-like systems because of the process model of those systems.
6:04:53
beach
HiRE: I mean, the process model of "modern" operating systems requires tools to turn everything into a sequence of bytes in order to communicate with others.
6:05:31
HiRE
beach, that sort of confuses me. How did the symbolics machine differ? I figured it'd still need to translate lisp code down to CPU language
6:06:11
HiRE
> Symbolics' initial product, the LM-2, introduced in 1981, was a repackaged version of the MIT CADR Lisp machine design. The operating system and software development environment, over 500,000 lines, was written in Lisp from the microcode up, based on MIT's Lisp Machine Lisp.
6:06:15
no-defun-allowed
It's not really much of a CPU problem as much as it is an OS problem, but they did have CPUs that had instructions that mapped closer to Lisp functions.
6:06:28
beach
The important thing is that an application can just hand a pointer to another application.
6:07:30
no-defun-allowed
The OS didn't have separate memory spaces and had a global garbage collector (I don't think they were first to it, but the lispms had generational collection very early on), so it was possible to pass objects around very easily between threads and functions.
6:08:49
no-defun-allowed
You could skim http://bitsavers.org/pdf/symbolics/software/genera_8/Genera_Concepts.pdf too.
6:09:26
beach
HiRE: There are already two such systems, Movitz and Mezzano, but what I am planning is a bit more sophisticated.
6:12:32
beach
HiRE: Here is one problem: People see the interactive REPL. Then they think "interpreter", and then, of course "interpreter" implies "slow".
6:12:49
no-defun-allowed
A Lisp operating system could be faster than a Un*x system in several ways, too.
6:13:24
HiRE
beach, yeah. A similar argument was made a few years back for haskell. Unfortunately the "REPL = Slow" argument is pervasive.
6:13:33
beach
HiRE: This implication is incorrectly assumed by ignorant people who do not know much about language implementation.
6:14:20
HiRE
beach, right. Whats even more amazing is even talking to graduates in CS the belief is still pervasive.
6:15:37
HiRE
no-defun-allowed, how do you think a lisp operating system could be faster than *nix? I have no dog in the fight I'm just curious.
6:16:57
no-defun-allowed
Again, data sharing is faster as it's literally moving a pointer, with no serialisation or memory spaces to deal with.
6:19:13
loke
HiRE: You could use an Atari ST right now. No context switching (yes, I know, it's a joke)
6:19:17
no-defun-allowed
A tracing garbage collector is typically much faster than "typical" manual memory management, so some programs which cons a lot can expect to run faster (but avoiding consing is sometimes a good optimisation strategy).
6:19:27
beach
HiRE: I keep saying this here, but that's because everyone is so convinced that we have to program as if we have access to the entire memory (including the stack) of a physical machine, just like we did 60 years ago.
6:20:34
no-defun-allowed
Then when threads are involved, they can probably be made much more light-weight than in Un*x, so context switches would be much faster when they are necessary.
6:20:52
White_Flame
genera still has multithreaded support, so it does context switching in that aspect, but doesn't change any global config like processes
6:22:21
White_Flame
although other people can network in and get their own REPLs and desktop and such, it's all in the same flat image
6:23:10
HiRE
I guess because I never experienced a lisp machine I never thought of processes as anything different than a nix system
6:23:18
no-defun-allowed
And to clarify: not requiring context switches between different, well, "contexts" (I'll search the book to see if there is a better word), such as between the file system and user code, or between a hardware interrupt handler and a server that uses that hardware, decreases latency too.
6:24:06
HiRE
it makes sense. I can see why you said a garbage collector can perform better than other memory management forms
6:24:35
no-defun-allowed
That alone could make datacenter peoples' eyes roll into dollar signs as they do in cartoons.
6:25:09
beach
It was invented so that the Bell Labs people could use as large a subset as possible of Multics, but on a tiny machine.
6:25:11
White_Flame
the entire OS is basically made up of the same plain function calls as everything else
6:25:43
White_Flame
unix is great if your entire model of computing is expressible in individual, independent lines of plain 7-bit text
6:26:02
HiRE
beach, right without an MMU they'd need to swap stuff in and out (which is what we learned in CS undergrad)
6:26:07
no-defun-allowed
https://cacm.acm.org/magazines/2017/4/215032-attack-of-the-killer-microseconds is relevant to that; the authors have figured that fast storage devices won't play nicely with current designs.
6:27:15
no-defun-allowed
To be honest, I would rather have lighter threads than "asynchronous" code, since the latter usually involves special language constructs, CPS-converting in the programmer's head or both.
6:27:54
loke
All kinds of threads need that, unless you're talking about "green" threads (i.e. cooperative)
6:29:18
no-defun-allowed
ACTION probably shouldn't have mentioned it; any threading more complex than lparallel:pmap hurts her head.
6:30:52
HiRE
good first round interview gutcheck is to explain the difference between concurrency and parallelism
6:31:37
no-defun-allowed
Typically articles like that try to create solutions that would work in a Un*x environment, maybe because it is less drastic to do so.
6:33:11
no-defun-allowed
Oh dear....is concurrency when some tasks can run "simultaneously", splitting up resources over time, and parallelism when they can truly run at the same time?
6:35:55
White_Flame
huh, yeah I guess the GC terms are a bit different when it comes to "concurrent"
6:37:39
no-defun-allowed
Yeah, a GC is "concurrent" if it can run at the same time as the mutator, but "incremental" if it splits up time between itself and the mutator.
6:38:29
HiRE
Opinion here but GC is a dirty word just like interpreted. I wish more people would learn that good GCs exist and it's just most of them are bad
6:49:09
asarch
How would you implement low-level code in the REPL for the processor? Would it be available in its own package? (cpu:int :address 80 :data my-code)?
6:49:50
White_Flame
you would generally generate an array of bytes that represent the machine code, and fiddle internal flag bits to get it in the execution stream
6:50:36
White_Flame
lisp implementations have various degrees of intermediate representation, and cpu-specific back ends, like any other compiler
6:52:34
White_Flame
which code? many of them are open source, and the DISASSEMBLE function will show you the compilation output
6:53:36
White_Flame
in SLIME, with your implementation fully installed with source references, you can use M-. to jump into the implementation Lisp code just as easily as your own code
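A minimal sketch of what White_Flame describes (the function name and declarations are made up for illustration): on SBCL, DISASSEMBLE prints the native code the compiler generated for a function, which is an easy way to see that the REPL is feeding a real compiler, not an interpreter.

```lisp
;; A tiny function with type declarations so the compiler can emit
;; straightforward machine code.
(defun add-fixnums (a b)
  (declare (type fixnum a b)
           (optimize (speed 3)))
  (the fixnum (+ a b)))

;; At the REPL, (disassemble #'add-fixnums) prints the generated
;; native code (x86-64 assembly on a typical SBCL install).
```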
6:54:25
no-defun-allowed
https://www.memorymanagement.org/ is a good reference for GC jargon and design.
6:55:45
no-defun-allowed
asarch: Yes, most Common Lisp implementations are written almost entirely in Lisp.
6:56:24
no-defun-allowed
https://github.com/sbcl/sbcl is (a mirror to) the SBCL source repository, and https://github.com/robert-strandh/SICL is the repository for SICL which might be easier to understand.
7:00:05
asarch
I was watching a video of the classic Street Fighter II video game and wondering if it was possible that they used Lisp (in some secret way) to develop the game
7:00:51
asarch
That game in its time was advanced (music, sprites, the speed when you pressed a button, etc)
7:03:45
asarch
When the programmers were entering all the required code so the game could run successfully
7:03:46
White_Flame
game development (and HPC) is typically very married to the specifics of the hardware, especially in that era
7:04:08
no-defun-allowed
There are a lot of little tricks that intricate knowledge of the hardware involved could give you. One simple example is that a programmer could provide more colours across the screen by counting the vertical lines drawn (which are generated sequentially) and changing palettes at specific counts.
7:04:22
White_Flame
so I wouldn't expect a very high level language, but rather a system of byte tokens driving states & animations
7:06:09
no-defun-allowed
Assembler macros are fairly popular, so I would expect some use of those to make assembler less annoying while knowing exactly what code is generated.
7:06:41
no-defun-allowed
Those aren't related to Lisp macros, which is a good reminder that we are here to talk about Lisp.
7:08:29
asarch
And my next question about the game was, "can we nowadays with an Arduino/Raspberry/etc device emulate the conditions those programmers had and make a mini version of the game using Common Lisp instead of Assembly?"
7:09:34
no-defun-allowed
A Raspberry Pi can already run many Common Lisp implementations (though, sadly, only Clozure and ECL seem to work on 32-bit with threads).
7:10:33
HiRE
yeah, it still supports a small subset of lisp for programming the interface/building macros
7:10:34
no-defun-allowed
An Arduino is right out though, unless you can deal with an interpreter and a very small heap.
7:11:20
White_Flame
Common Lisp is a language with a large set of high level core features, and I would not expect it to run on stuff smaller than a rpi
7:11:59
White_Flame
I fiddled with a dos-based lisp at some point way back, but I don't know if it was a common lisp
7:13:46
asarch
Back in the old days when Borland C++ for MS-DOS was the major compiler, I used to start the graphics mode using its BGI device, draw a filled box, enable the mouse pointer through a DOS INT, and try to move this box around the screen; however, I could never repaint the area where the box had been
7:15:15
no-defun-allowed
Remember a real-mode machine only has access to 640KB of memory, and a Raspberry Pi has more than 1GiB (unless you have an RPi 1). There are a few orders of magnitude of difference between them.
7:18:09
White_Flame
a segment was basically the high 16-bits of a 20-bit pointer (so shift the segment address left 4 bits to get the start of the segment), then the offsets are plain 16-bit byte pointers
7:18:50
White_Flame
so you could slide around 64KB windows in the 20-bit address space, at a granularity of 16 bytes
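The segment arithmetic above can be written out directly; this sketch (the function name is invented) computes a real-mode physical address from a segment:offset pair, masking to 20 bits the way the 8086's address bus does:

```lisp
;; physical address = (segment << 4) + offset, truncated to 20 bits.
;; The LDB masks off the carry, reproducing the famous wraparound at
;; the top of the 1MB address space.
(defun real-mode-address (segment offset)
  (ldb (byte 20 0) (+ (ash segment 4) offset)))

;; e.g. segment #xB800 (CGA text memory) with offset 0 maps to #xB8000.
```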
7:21:38
White_Flame
the 640KB thing seemed to be part of the IBM PC hardware design, not a limitation of the x86 chip itself
7:26:35
no-defun-allowed
The Arduino has a split code/data model which would make native code generation impossible, ignoring the obvious memory constraints.
7:30:27
no-defun-allowed
I would use the adjective "futile" for trying to run Common Lisp on such a small device.
7:30:49
p_l
no-defun-allowed: any lisp implementation that works on recent OpenBSD without modifying the system would be a good candidate for arduino :)
7:31:17
no-defun-allowed
μLisp could work though, and it is almost a subset of Common Lisp if you ignore the single function and value namespace: http://www.ulisp.com/
7:41:08
asarch
Anyway. Thank you, thank you very much for all the info guys, thank you. Have a nice day! :-)
7:44:18
reepca
Is there a way to conditionally indent within a line with the pretty-printer? I'm pretty-printing some unstructured code that alternates between instructions and labels. I need the labels to be on their own line at a certain lesser indentation, and the code to be on its own line (but with multiple instructions on the same line as much as possible). However, labels can come one after the other, so setting the indentation prior to
7:44:19
reepca
finishing one label doesn't work, since it's necessary to know whether the next line will have another label or instructions.
7:45:16
another-user
i tried to write bf2 CL benchmark for https://github.com/kostya/benchmarks but it's slower than Racket, how can i speed this up? https://dpaste.org/rsEY
7:47:09
another-user
they used unsafe-vector-* but otherwise the program looks sameish: https://github.com/kostya/benchmarks/blob/master/brainfuck2/bf.rkt
7:48:19
reepca
the way I have it currently, it all looks okay until labels are adjacent to each other, at which point there are blank lines between them because I use ~& and an indented line isn't "fresh". But if I finish a label without the indentation, how do I indent that line? Indentation only occurs by the pretty-printer after newlines apparently.
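One hedged sketch of an approach to reepca's problem (the function name and the label convention are made up): inside PPRINT-LOGICAL-BLOCK, a PPRINT-INDENT call sets the indentation used by the *next* newline, so each line's indent can be chosen just before emitting the newline that begins it, rather than before the previous line is finished.

```lisp
;; Keywords stand in for labels (printed at the block's left edge),
;; other items for instructions (indented four columns deeper).
(defun print-listing (items &optional (stream *standard-output*))
  (let ((*print-pretty* t))
    (pprint-logical-block (stream items)
      (dolist (item items)
        ;; Choose the indent for the upcoming line, then start it.
        (pprint-indent :block (if (keywordp item) 0 4) stream)
        (pprint-newline :mandatory stream)
        (princ item stream)))))
```

Since the indent is decided per item just before its newline, adjacent labels come out on consecutive lines with no blank line between them.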
7:49:57
White_Flame
oh, and you're using adjustable & fill-pointer, that makes all of your arefs expensive
7:50:25
no-defun-allowed
You could change the representation to use structures, which might be faster than accessing a list.
7:56:03
smokeink
reepca: you can check out the "out" macro in ytools, it has some functionality related to indenting if I remember well
7:57:12
no-defun-allowed
Using a fixed tape of 4,096 fixnums drops the execution time to about 6.5 seconds from 15.
7:57:15
p_l
BTW, regarding CL on microcontrollers, remember that writing compilers is a classic Lisp technique ;)
7:58:00
no-defun-allowed
You could probably drop the "interpreting" overhead entirely by converting the Brainfuck program to a Lisp program and passing it to COMPILE.
7:59:19
aeth
no-defun-allowed: yes, that's what I do with my Brainfuck... I just compile it down to a lisp program that's like (incf (aref a *position*) 1) etc.
7:59:36
no-defun-allowed
In a chip8 interpreter I wrote long ago, naive compilation gave me at least a 10x speedup over interpretation.
7:59:43
aeth
no-defun-allowed: idk how far I go, there are basically a ton of optimizations you can do to produce better output. e.g. instead of incf'ing one at a time, you can see how many +++s there are and incf by that amount
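A minimal sketch of the approach aeth and no-defun-allowed describe (the function name, the 30,000-cell tape size, and the fixnum declarations are my assumptions, not anything from the chat): translate each Brainfuck character into a Lisp form and hand the whole thing to COMPILE, so no interpretation happens at run time. Loops use a recursive descent over the string; `,` (input) is omitted for brevity.

```lisp
;; Returns a LAMBDA form; pass it to COMPILE to get native code.
(defun bf-to-lisp (code &aux (pos 0))
  (labels ((parse (&aux forms)
             (loop while (< pos (length code))
                   do (let ((c (char code pos)))
                        (incf pos)
                        (case c
                          (#\+ (push '(incf (aref tape ptr)) forms))
                          (#\- (push '(decf (aref tape ptr)) forms))
                          (#\> (push '(incf ptr) forms))
                          (#\< (push '(decf ptr) forms))
                          (#\. (push '(write-char (code-char (aref tape ptr))) forms))
                          (#\[ (push `(loop until (zerop (aref tape ptr))
                                            do (progn ,@(parse)))
                                     forms))
                          (#\] (loop-finish)))))
             (nreverse forms)))
    `(lambda ()
       (let ((tape (make-array 30000 :element-type 'fixnum
                                     :initial-element 0))
             (ptr 0))
         ;; A fixed-size SIMPLE-ARRAY plus declarations is what makes
         ;; the generated AREFs cheap, as discussed above.
         (declare (type fixnum ptr))
         ,@(parse)
         (aref tape ptr)))))

;; Usage: (funcall (compile nil (bf-to-lisp "+++[->++<]>")))
```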
8:02:33
another-user
White_Flame, no-defun-allowed: thank you for tips! algo must be similar between implementations - other impls use infinite tape
8:03:08
no-defun-allowed
ACTION will test compiling next. She really likes making compilers for some reason.
8:03:30
aeth
another-user: I think generally, the tape that's used in Brainfuck is wrapping tape rather than truly infinite. So some large circular buffer as an array.
8:04:47
White_Flame
if you manually manage the tape growth, instead of using vector-push-extend, then your arefs will be simple inline accessors
8:06:09
aeth
Yes, but if you have a fixed length tape and you compile the Brainfuck as a function, then there are no length checks at all because the tape doesn't grow. Even if it can change from program to program, the AREF specific to that program will know the length (assuming it's all in one function or there are declarations)
8:14:32
another-user
no-defun-allowed: i tried https://dpaste.org/tTEB but it didn't win much time (~1s)
8:15:27
no-defun-allowed
I'll show you my code after I write that kind of reader. The other two big wins are inlining everything and using a simple-array to get AREF really fast.
8:20:14
no-defun-allowed
You can add the unsafe declaration to the generated code in compile-bf-program, which takes it to 1.35 seconds, but admittedly I don't think it's usually worth doing so in real programs.
8:26:07
no-defun-allowed
Is there such a thing as a regular expression that operates on lists, and is there an implementation of that?
8:28:38
no-defun-allowed
aeth: I would want to merge all the tape-inc instructions together, preferably by writing a rule like (:many (tape-inc ?x)) => `(tape-inc ,(reduce #'+ ?x))
8:29:03
no-defun-allowed
That might be too much wishful thinking, and I could probably write it somehow without it.
8:30:03
aeth
no-defun-allowed: well, you could also do actual regular expressions and instead of parsing #\+ as (tape-inc whatever) you'd parse "++++" as (tape-inc whatever 4)
8:31:47
beach
no-defun-allowed: Check out the works by Jim Newton. That's exactly what they are doing.
8:33:59
no-defun-allowed
aeth: I modified the reader to merge multiple + characters together into one operation, and that takes the execution time to 0.8 seconds.
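The run-length idea aeth and no-defun-allowed are discussing can be sketched as a small helper (name invented): count how many copies of the current character follow, then emit one `(incf (aref tape ptr) n)` instead of n separate increments.

```lisp
;; Count the run of identical characters starting at START.
(defun count-run (code start)
  (let ((c (char code start)))
    (loop for i from start below (length code)
          while (char= (char code i) c)
          count 1)))

;; A reader seeing "++++" at index 0 gets a run of 4 and can emit
;; (incf (aref tape ptr) 4), then skip ahead by the run length.
```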
8:35:07
smokeink
in sbcl loading a .lisp file that has (declaim (optimize (safety 3) (debug 3) (speed 0) (space 0))) , should actually change the policy? when I do (describe-compiler-policy), it's not changed
8:35:45
smokeink
if I run sbcl with (declaim (optimize (safety 3) (debug 3) (speed 0) (space 0))) in sbclrc , it works, the policy is changed
8:36:49
no-defun-allowed
So, another-user, using a compiler such as that one creates code that runs 30x faster than the C++ interpreter in that repository.
8:37:13
jackdaniel
beach: to be precise, declaim is guaranteed to take effect at compile time for the file in which it is a top-level form; it is not specified whether the effect lingers after the file is compiled
8:38:17
aeth
no-defun-allowed: There are some other idioms you can detect other than just a bunch of +s and -s in a row. The more you can detect, the better it will run. I mean, for someone who tries to "structure" Brainfuck rather than code golf.
8:39:31
White_Flame
declaiming non-optimization stuff like types does stick past the file context, iirc
8:40:30
jackdaniel
White_Flame: it *may* stick, it doesn't necessarily stick (you can't portably rely on that) - use proclaim in eval-when to assure that they stick
8:41:36
beach
White_Flame: The standard allows for the startup environment and the compilation environment to be separate.
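jackdaniel's suggestion, written out (the exact policy values are the ones from smokeink's file, not a recommendation): DECLAIM's effects need not persist beyond the file being compiled, but PROCLAIM wrapped in EVAL-WHEN runs at compile, load, and execute time, so the policy is set in each environment.

```lisp
;; Portable way to make an optimization policy stick across
;; compile-time, load-time, and the running image.
(eval-when (:compile-toplevel :load-toplevel :execute)
  (proclaim '(optimize (safety 3) (debug 3) (speed 0) (space 0))))
```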
8:44:05
no-defun-allowed
Folding the other characters like that yields only a .02 second improvement, but hopefully that is more than what was expected for the benchmark.
8:45:08
no-defun-allowed
The readme does say "It interepter all bf instructions, one by one, without any squash or other hacks", but it's not *my* fault those other languages can't let you use the compiler at runtime.
8:46:35
beach
smokeink: It seems that in SBCL, this particular aspect is separate in the run-time environment and the compilation environment.
8:46:50
smokeink
beach: if I do sbcl --no-userinit --no-sysinit --load ~/.sbclrc (.sbclrc contains that eval-when) it has no effect. but if i just do sbcl , it has effect
8:48:07
smokeink
I need it to have effect when I use --load , because I do some stuff before loading it
9:08:59
smokeink
SBCL 2.0.0.73-7d05b4c https://paste.ofcode.org/uc2PrgJASAaWgkS5FsUBGA during macroexpansion of (SB-PCL::%DEFMETHOD-EXPANDER INSPECT-OBJECT-USING-STATE ...)) failed AVER: (= SB-C::COMPONENT-TLF-NUM SB-C::TLF-NUM) This is probably a bug in SBCL itself.
9:14:34
jackdaniel
smokeink: did you try to load the dependencies and compile the file after that without the quickload forms?
9:15:18
jackdaniel
I can imagine that first macros are expanded with old definitions, then quickload is executed and some internal references become bonkers