freenode/#lisp - IRC Chatlog
Search
18:49:19
scmlinux
Could someone please share a tutorial on the installation of CLSQL in GNU CLISP? Bonus points if it has simple examples of its usage too.
19:10:56
jackdaniel
random-nick: McCLIM ;-) see frequently asked questions here: https://common-lisp.net/project/mcclim/involve
19:13:58
stacksmith
scmlinux: SBCL is used by a good majority of Lisp programmers... Anecdotally ~80% of market share.
19:21:48
pjb
Too bad quicklisp download stats aren't broken down per implementation… This could easily be gathered by quicklisp…
20:18:28
fiveop
Hi, a question regarding FILE-LENGTH. The Hyperspec says "For a binary file, the length is measured in units of the element type of the stream".
20:24:00
comborico1611
scmlinux: I'm very new to Lisp, but I also would like to know of such a tutorial.
20:29:33
fiveop
it makes sense, because for variable length encodings you have to parse the whole file to determine its length, but why not mention that in the standard :/
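[Editor's note: a minimal sketch of the behavior fiveop quotes — FILE-LENGTH counts in units of the stream's element type. The path is made up, and behavior for element types wider than a byte can vary somewhat between implementations.]

```lisp
;; Write four 8-bit bytes, then observe FILE-LENGTH under two element types.
;; The file path is hypothetical; adjust for your system.
(with-open-file (out "/tmp/flen-demo.bin"
                     :direction :output
                     :element-type '(unsigned-byte 8)
                     :if-exists :supersede)
  (dotimes (i 4)
    (write-byte i out)))

(with-open-file (in "/tmp/flen-demo.bin"
                    :element-type '(unsigned-byte 8))
  (file-length in))   ; 4 -- four 8-bit units

(with-open-file (in "/tmp/flen-demo.bin"
                    :element-type '(unsigned-byte 16))
  (file-length in))   ; 2 -- the same bytes, counted as 16-bit units
```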
20:37:10
aeth
stacksmith, pjb: Quicklisp doesn't have any tracking code built in, afaik. Those stats are based on HTTP requests.
20:38:26
aeth
Or maybe a handful of people run a lot of SBCL servers that pull directly from Quicklisp?
20:39:18
aeth
SBCL is probably the most common by far, but I don't think that that blog post gives exact numbers.
23:02:03
pillton
shka_: alexandria:parse-ordinary-lambda-list does a lot of the work needed to do that task.
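[Editor's note: for context, a sketch of the function pillton points at. It assumes alexandria is loadable via Quicklisp; see the alexandria documentation for the exact normalized forms of all seven return values.]

```lisp
(ql:quickload :alexandria)

;; PARSE-ORDINARY-LAMBDA-LIST returns the pieces of an ordinary lambda
;; list as multiple values: required params, optional specs, the rest
;; parameter name, key specs, and more.
(multiple-value-bind (required optional rest keys)
    (alexandria:parse-ordinary-lambda-list
     '(a b &optional (c 10) &rest args &key (d 20 d-supplied-p)))
  (list :required required :rest rest :keys keys))
;; :required is (A B), :rest is ARGS
```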
0:02:50
_death
unexpected mention of common lisp in https://blog.jessfraz.com/post/nerd-sniped-by-binfmt_misc/
1:24:30
dandruff
Would the Lisp Machine operating systems have been more portable and able to compete with Unix if they had been built around a Lisp that compiled to VM bytecode like Smalltalk?
1:45:03
caffe
the lisp machine processors didn't actually understand lisp natively; they worked more in the fashion you're describing
1:45:32
aeth
dandruff: The thing that killed Lisp Machines was, afaik, performance. The much cheaper commodity hardware eventually beat specialized hardware in Lisp performance.
1:48:18
aeth
krwq: Well, I tried implementing it. I decided it wasn't worth my time to continue along that path because of how hard it is, even compared to directly writing in assembly. It looks like dandruff's link takes a much easier approach. Modify an existing C compiler, and then simply write a Lisp interpreter.
1:49:54
aeth
It was very hard. Strings that aren't NUL-terminated (it's trivial when they are) probably would take a week to write. I never finished that part.
1:52:59
dandruff
caffe: right, I remember hearing that they were stack machines with hardware support for bignums and GC. I can't find much on them, though. I haven't even been able to find out if they used a monolithic kernel or something else entirely. I've been thinking about writing a Forth implementation in assembly with their features, as a compilation target for Lisp Machine Lisp, and looking for the source code.
2:09:34
stacksmith
I've spent a lot of time exploring ideas like that (and a few decades writing Forth-like systems for almost every imaginable platform)... It's one of those things like trying to build an interstellar spaceship... A couple of decades of technology will make yours obsolete every time.
2:10:53
stacksmith
Stack machines are dead - register architectures beat them every time. It almost made sense a couple of decades ago to make super-simple, super-fast Forth machines, but a decent compiler on a mobile phone can outstrip them for general-purpose tasks.
2:11:45
dandruff
caffe: Isn't the spec similar to Common Lisp's? Yikes. Anyone can implement a Scheme in a day at most, but something like SBCL couldn't have been built from the ground up.
2:14:08
dandruff
stacksmith: so that means that it would be impossible to make a reasonably performant system where a stack machine on an FPGA handles the compiled OS language and communicates with a normal CPU?
2:15:22
caffe
dandruff: i'm not sure i'd start from the ground up... more likely taking something like SBCL or GCL and building down
2:16:22
stacksmith
Chuck Moore has some gigahertz async stack machines, like a hundred of them, but they are so small that they are useless.
2:18:40
stacksmith
Yup. Check out Green Arrays. There is no clock, they run as fast as possible. Generally stack machines have a couple of dozen instructions, and his processors do them all simultaneously while the decoder is figuring out the opcode. Then the one you want gets the results latched.
2:19:57
stacksmith
But, Green Arrays is a couple of hundred async cpus with something like 256 bytes of ram each. Just loading code into them makes me want to go take a bath.
2:23:09
caffe
if you really want to 'compete' with linux... the best way is to not compete with it at all
2:24:45
pjb
caffe: well, not for CL, but for scheme. How many times did you use the scheme linux loadable module?
2:26:11
dandruff
If it's all embarrassingly parallel and they're an intermediary between an OS and the *real* hardware (some AMD or Intel chip), would the system still take a huge hit? pjb: Java is competitive with C, isn't it? Also, Android uses Linux IIRC. What I'd really like is something like Emacs all the way down with orthogonal persistence; I don't want to displace Linux.
2:26:24
aeth
There are two ways to do a LispOS. Bottom up and top down. By bottom up I mean start with the kernel and build your way up. By top down I mean start with user space applications and eventually replace intermediate components with Lisp equivalents.
2:26:44
aeth
Starting with LispM hardware isn't even really a viable option for a LispOS at the moment.
2:26:54
caffe
but honestly, i'd rather let linux kernel devs work for me than try to work against them myself
2:28:05
caffe
the android-like approach would yield better results, and be easier to maintain and keep 'not obsolete' for a longer time
2:28:29
aeth
Yes, Android wisely sidesteps the driver issue that way, even though it's essentially JavaOS
2:31:41
caffe
riding on linux/GNU's back would allow your work to be finished before it becomes obsolete
2:33:56
pjb
dandruff: it's mostly irrelevant that Android uses Linux. Android application developers mostly don't touch unix. They would have to code in C to even know it's there…
2:35:56
aeth
caffe: Well, what I mean is, it doesn't matter whether you call Debian a Linux distro or a GNU/Linux distro. But Android, or a LispOS that took a similar use-the-Linux-kernel approach, isn't "GNU/Linux", because the GNU part would be marginalized if present at all
2:36:01
caffe
just noting that both the kernel and the GNU utilities have very useful pieces you could reuse, so you don't have to keep reinventing the wheel for every little thing and can focus more on developing the lisp environment
2:36:55
aeth
Well, you'd probably have to take the GNU approach of incremental replacement. That's how GNU gradually replaced Unix utilities afaik.
2:37:59
aeth
There'd be advantages to having a sh written in CL, but no need to prioritize writing one when bash, zsh, ksh, tcsh, etc., exist.
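[Editor's note: as a sketch of how small the core of such a shell could start out, here is a toy loop. `uiop:run-program` is a real portability-layer API that ships with ASDF; everything else, including the prompt name, is made up, and a real sh replacement would of course need job control, pipes, globbing, etc.]

```lisp
;; A toy read-run loop: read a line, hand it to the system shell.
;; Purely illustrative -- just the skeleton of a CL shell.
(defun toy-sh ()
  (loop
    (format t "~&cl-sh> ")
    (finish-output)
    (let ((line (read-line *standard-input* nil)))
      (when (or (null line) (string= line "exit"))
        (return))
      ;; Delegate the actual command to the underlying shell.
      (uiop:run-program line
                        :output *standard-output*
                        :error-output *standard-output*
                        :ignore-error-status t))))
```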
2:38:17
pjb
Yes, you could start rewriting the GNU tools in Lisp. ls, cp, cat, etc… When you can rename the system from GNU/Linux to Lisp/Linux, you just replace Linux by Mezzano.
2:40:09
aeth
pjb: Mezzano will get really interesting when (1) rich graphical CL applications move from cl-opengl to cl-vulkan and (2) Mezzano implements its own Vulkan backend (considerably easier to support than OpenGL)
2:40:53
dandruff
caffe: if the Linux kernel drivers all use POSIX syscalls, could one build a common API to convert their output to something readable by a Lisp OS? People could then rewrite them natively as the years go on.
2:43:53
aeth
caffe: I suspect it would have issues similar to the Lisp machines'. Lisp machines went away because they had worse performance than commodity hardware. If running SBCL on the Linux kernel beats your LispOS in performance, it probably won't get many users, either.
2:44:02
caffe
a Lisp OS would have 'purity' going for it, and little else. and probably wouldn't survive long, if even seeing the light of day
2:44:19
dandruff
I'd think so too, but I've heard some things from people who I know are working on neuromorphic chips. I doubt they're going to totally replace von Neumann architectures, but between them and quantum computers and other new hardware, I think we're about to take a turn for the weird
2:44:31
aeth
On the other hand, if a LispOS can have good performance, it could make sense as a lightweight OS for running Lisp applications in the cloud. Drivers wouldn't matter that much, either, if it's running in a VM.
2:46:26
dandruff
a functional language is truly platform-independent because lambda calculus abstractions are so powerful and have a common, tiny core. C's going to die along with Go and Java; if Lisp doesn't make a comeback, then Haskell will eat its lunch. Then we'll have to deal with templates and subverting the type system to get stuff done for who-knows-how-long.
2:47:25
aeth
dandruff: imo Haskell won't win because (1) it makes it hard to write multiparadigm code and (2) it's lazy
2:50:28
dandruff
We're probably going to have computers with many processors with fundamentally different ways of computation, all with different roles and jerry-rigged integration. If something "just works", people will deal with it. I don't know that much about Haskell, but those monads look scary.
2:51:56
aeth
Maybe an FPGA, too. Especially if they can rebrand it to *PU. Perhaps "R" for "Reprogrammable"?
2:52:27
Xach
The only on-topic connection I can tenuously make is Luke Gorrie's work on dedicated hardware running niche language binaries in userspace to do some amazing stuff. And only loosely relevant because he did some common lisp stuff, like an experimental CL tcp stack.
2:54:43
Xach
snabb switch is his work. it's based on luajit. he forked it into raptorjit. he also started teclo networks to do ip acceleration of mobile networks. both not in CL though, which is the topic of this channel.
3:54:29
fouric
stacksmith: any feel for how much effort would be required to make a stack machine perform on par with a register machine?
3:56:24
fouric
I mean, yes, a mid-tier ARM CPU can destroy a stack machine that you build on an FPGA in your free time, but that's because one had a few orders of magnitude more time put into it than the other.
3:57:40
fouric
Do you have informed guesses (you said you've been playing around with stack architectures for a bit) on if one or the other is easier to make better given equal effort?
4:00:13
pierpa
it is not possible to build a stack machine that performs on par with a register machine, unless the register machine is handicapped in some way, or there are some constraints you are not mentioning
4:02:43
pierpa
JVM is a stack architecture. To get good performance, most of the work is the destackification of the code :)
4:03:37
Bike
having more registers makes them harder to use, since it takes more bits to address them.
4:04:09
Bike
kind of a "why don't they just make the whole plane out of the blackbox material" question there
4:04:37
fouric
But registers are limited in quantity, too - and have to be swapped out to RAM when you make function calls, right?
4:05:55
pierpa
the need to spill to memory is not different. The possible parallelism in the instruction stream is different.
4:07:23
fouric
I can sort of see that. Something something it's harder to design a CPU that reorders operations on a stack than one that does so with named registers?
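[Editor's note: a toy illustration of the stack model under discussion — evaluating postfix code on an operand stack. The "destackification" pierpa mentions is, roughly, a JIT mapping these anonymous stack slots back onto named registers so the hardware can reorder and parallelize. All names here are made up.]

```lisp
;; Minimal stack evaluator: numbers push themselves; + and * pop two
;; operands and push the result. '(1 2 + 3 *) computes (1 + 2) * 3.
(defun run-stack (program)
  (let ((stack '()))
    (dolist (op program (first stack))
      (case op
        (+ (push (+ (pop stack) (pop stack)) stack))
        (* (push (* (pop stack) (pop stack)) stack))
        (t (push op stack))))))

(run-stack '(1 2 + 3 *))  ; 9
```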
4:40:34
aeth
It would probably be more productive to make a RISC-V CPU in a CL DSL (that compiles to Verilog or VHDL?) than to make a specialized, Lisp-oriented CPU.
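[Editor's note: a toy of the idea aeth describes — generating Verilog text from CL. Everything here is hypothetical; a real hardware DSL would build and optimize a netlist rather than splice strings.]

```lisp
;; Emit Verilog source for a combinational WIDTH-bit adder module.
(defun emit-adder (name width)
  (format nil "module ~a(input [~d:0] a, input [~d:0] b, output [~d:0] sum);~%  assign sum = a + b;~%endmodule~%"
          name (1- width) (1- width) (1- width)))

(princ (emit-adder "add8" 8))
```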
4:43:52
pierpa
btw, a RISC-V working group about extensions useful for dynamic languages has just been created.
4:45:16
pierpa
https://groups.google.com/a/groups.riscv.org/forum/#!msg/sw-dev/esYoby-4_GU/Ootasrz8AgAJ
4:47:44
pierpa
well, it would be nice to have BOTH safe and fast code at the same time. Just saying...
4:51:04
beach
I think that is possible with existing CPUs, unless of course you deliberately make the compiler emit unsafe code. And I think that any attempt to make specialized hardware will be so much slower than existing CPUs that the performance hit will be much worse than the additional cost of just emitting safe code on existing CPUs.
4:51:58
beach
Sure, you can dream of Intel or AMD making a processor that is as fast as existing ones but specialized for Lisp. But that ain't gonna happen.
4:52:56
pierpa
specialized instructions could give array bounds checking with no performance penalty, for example
4:53:29
bjorkintosh
pierpa, how would you know you're working in lisp if there are no performance penalties??
4:54:00
beach
In my opinion, it is much better to focus on so-called "aggressive" compiler optimizations than to dream of specialized processors.
4:55:44
beach
pierpa: If only a single array reference is executed, then it is very likely that other stuff like function calls and such will dominate performance. So it is best to focus on array references in loops. And then we can often get rid of the bounds checking.
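[Editor's note: in CL terms, the pattern beach describes looks like the sketch below — hoist the length once, so a compiler can in principle prove every index in range and elide the per-element bounds check. Whether a given implementation actually performs that elision varies.]

```lisp
;; Safe and potentially fast array summation: the single LENGTH call
;; bounds I for the whole loop, so each AREF is provably in range.
(defun sum-vector (v)
  (declare (type (simple-array double-float (*)) v)
           (optimize (speed 3) (safety 1)))
  (let ((n (length v)))
    (loop for i of-type fixnum below n
          sum (aref v i) of-type double-float)))
```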
4:59:09
beach
OK, here is another "mistake". SBCL treats NIL specially, so that CAR and CDR are valid operations on NIL without any special test for NIL. But CAR and CDR matter for performance mostly in a loop, traversing the list. And then the compiler could emit code to check for CONSP first, which will almost always be true.
4:59:10
beach
Unfortunately, SBCL now needs two tests in each iteration, one for LISTP and one for NIL. I think it is way more productive to think about how we organize our Common Lisp systems and what we want the compiler to do.
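[Editor's note: the CONSP-first traversal beach suggests, as a sketch — one test per iteration in the common case, with the NIL and error cases only reached on exit. The function name is made up.]

```lisp
;; Safe list length with the common case (a cons) tested first.
(defun checked-length (list)
  (let ((n 0))
    (loop
      (cond ((consp list)          ; common case: one test, keep going
             (incf n)
             (setf list (cdr list)))
            ((null list)           ; proper end of list
             (return n))
            (t                     ; dotted/improper tail
             (error "~s is not a proper list" list))))))

(checked-length '(a b c))  ; 3
```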
5:00:13
beach
In fact, for the entire SICL project, I am totally against unsafe code. So I try very hard to make the code both safe and fast.
5:01:01
beach
I have thus invented fast generic-function dispatch, and I have a way of compiling the sequence functions that makes them very fast as well.
5:01:32
beach
That kind of work is way more productive than dreaming of influencing major hardware manufacturers.