freenode/#lisp - IRC Chatlog
1:24:30
dandruff
Would the Lisp Machine operating systems have been more portable and able to compete with Unix if they had been built around a Lisp that compiled to VM bytecode like Smalltalk?
1:45:03
caffe
the lisp machine processors don't actually understand lisp natively; they worked more in the sort of fashion you're describing
1:45:32
aeth
dandruff: The thing that killed Lisp Machines was, afaik, performance. The much cheaper commodity hardware eventually beat specialized hardware in Lisp performance.
1:48:18
aeth
krwq: Well, I tried implementing it. I decided it wasn't worth my time to continue along that path because of how hard it is, even compared to directly writing in assembly. It looks like dandruff's link takes a much easier approach. Modify an existing C compiler, and then simply write a Lisp interpreter.
1:49:54
aeth
It was very hard. Strings that aren't NUL-terminated (it's trivial when they are) probably would take a week to write. I never finished that part.
1:52:59
dandruff
caffe: right, I remember hearing that they were stack machines with hardware support for bignums and GC. I can't find much on them, though. I haven't even been able to find out if they used a monolithic kernel or something else entirely. I've been thinking about writing a Forth implementation in assembly with their features as a compilation target for Lisp Machine Lisp and looking for the source code.
2:09:34
stacksmith
I've spent a lot of time exploring ideas like that (and a few decades of writing Forth-like systems for almost every imaginable platform)... It's one of those things like trying to build an interstellar spaceship... A couple of decades of technology will make yours obsolete every time.
2:10:53
stacksmith
Stack machines are dead - register architectures beat them every time. It almost made sense a couple of decades ago to make super-simple super-fast Forth machines, but a decent compiler on a mobile phone can outstrip them for general-purpose tasks.
2:11:45
dandruff
caffe: Isn't the spec similar to Common Lisp's? Yikes. Anyone can implement Scheme in a day at most; SBCL couldn't even have been built from the ground up.
2:14:08
dandruff
stacksmith: so that means that it would be impossible to make a reasonably performant system where a stack machine on an FPGA handles the compiled OS language and communicates with a normal CPU?
2:15:22
caffe
dandruff: i'm not sure i'd start from the ground up... more likely taking something like SBCL or GCL and building down
2:16:22
stacksmith
Chuck Moore has some gigahertz async stack machines, like a hundred of them, but they are so small that they are useless.
2:18:40
stacksmith
Yup. Check out Green Arrays. There is no clock, they run as fast as possible. Generally stack machines have a couple of dozen instructions, and his processors do them all simultaneously while the decoder is figuring out the opcode. Then the one you want gets the results latched.
2:19:57
stacksmith
But, Green Arrays is a couple of hundred async cpus with something like 256 bytes of ram each. Just loading code into them makes me want to go take a bath.
2:23:09
caffe
if you really want to 'compete' with linux... the best way is to not compete with it at all
2:24:45
pjb
caffe: well, not for CL, but for scheme. How many times did you use the scheme linux loadable module?
2:26:11
dandruff
If it's all embarrassingly parallel and they're an intermediary between an OS and the *real* hardware (some AMD or Intel chip), would the system still take a huge hit? pjb: Java is competitive with C, isn't it? Also, Android uses Linux IIRC. What I'd really like is something like Emacs all the way down with orthogonal persistence; I don't want to displace Linux.
2:26:24
aeth
There are two ways to do a LispOS. Bottom up and top down. By bottom up I mean start with the kernel and build your way up. By top down I mean start with user space applications and eventually replace intermediate components with Lisp equivalents.
2:26:44
aeth
Starting with LispM hardware isn't even really a viable option for a LispOS at the moment.
2:26:54
caffe
but honestly, i'd rather let linux kernel devs work for me than try to work against them myself
2:28:05
caffe
the android-like approach would yield better results, and be easier to maintain and keep 'not obsolete' for a longer time
2:28:29
aeth
Yes, Android wisely sidesteps the driver issue that way, even though it's essentially JavaOS
2:31:41
caffe
riding on linux/GNU's back would allow your work to be finished before it becomes obsolete
2:33:56
pjb
dandruff: it's mostly irrelevant if Android uses Linux. Android application developers mostly don't touch unix. They would have to code in C to even know it's there…
2:35:56
aeth
caffe: Well, what I mean is, it doesn't matter whether you call Debian a Linux distro or a GNU/Linux distro. But Android or a LispOS that took a similar use-the-Linux-kernel approach isn't "GNU/Linux" because the GNU would be marginalized if present at all
2:36:01
caffe
just noting that both the kernel and the GNU utilities had very useful pieces you could utilize so you don't have to keep reinventing the wheel for every little thing, and can focus more on developing the lisp environment
2:36:55
aeth
Well, you'd probably have to take the GNU approach of incremental replacement. That's how GNU gradually replaced Unix utilities afaik.
2:37:59
aeth
There'd be advantages to having a sh written in CL, but no need to prioritize writing one when bash, zsh, ksh, tcsh, etc., exist.
2:38:17
pjb
Yes, you could start rewriting the GNU tools in Lisp. ls, cp, cat, etc… When you can rename the system from GNU/Linux to Lisp/Linux, you just replace Linux by Mezzano.
2:40:09
aeth
pjb: Mezzano will get really interesting when (1) rich graphical CL applications move from cl-opengl to cl-vulkan and (2) Mezzano implements its own Vulkan backend (considerably easier to support than OpenGL)
2:40:53
dandruff
caffe: if the Linux kernel drivers all use POSIX syscalls, could one build a common API to convert their output to something readable by a Lisp OS? People could then rewrite them natively as the years go on.
2:43:53
aeth
caffe: I suspect it will have similar issues as Lisp machines. Lisp machines went away because they had worse performance than commodity hardware. If running SBCL on the Linux kernel beats your LispOS in performance, it probably won't get many users, either.
2:44:02
caffe
a Lisp OS would have 'purity' going for it, and little else. and probably wouldn't survive long, if even seeing the light of day
2:44:19
dandruff
I'd think so too, but I've heard some things from people who I know are working on neuromorphic chips. I doubt they're going to totally replace von Neumann architectures, but between them and quantum computers and other new hardware, I think we're about to take a turn for the weird
2:44:31
aeth
On the other hand, if a LispOS can have good performance, it could make sense as a lightweight OS for running Lisp applications in the cloud. Drivers wouldn't matter that much, either, if it's running in a VM.
2:46:26
dandruff
a functional language is truly platform-independent because lambda calculus abstractions are so powerful and have a common, tiny core. C's going to die along with Go and Java; if Lisp doesn't make a comeback, then Haskell will eat its lunch. Then we'll have to deal with templates and subverting the type system to get stuff done for who-knows-how-long.
2:47:25
aeth
dandruff: imo Haskell won't win because (1) it makes it hard to write multiparadigm code and (2) it's lazy
2:50:28
dandruff
We're probably going to have computers with many processors with fundamentally different ways of computation, all with different roles and jerry-rigged integration. If something "just works", people will deal with it. I don't know that much about Haskell, but those monads look scary.
2:51:56
aeth
Maybe an FPGA, too. Especially if they can rebrand it to *PU. Perhaps "R" for "Reprogrammable"?
2:52:27
Xach
The only on-topic connection I can tenuously make is Luke Gorrie's work on dedicated hardware running niche language binaries in userspace to do some amazing stuff. And only loosely relevant because he did some common lisp stuff, like an experimental CL tcp stack.
2:54:43
Xach
snabb switch is his work. it's based on luajit. he forked it into raptorjit. he also started teclo networks to do ip acceleration of mobile networks. both not in CL though, which is the topic of this channel.
3:54:29
fouric
stacksmith: any feel for how much effort would be required to make a stack machine perform on par with a register machine?
3:56:24
fouric
I mean, yes, a mid-tier ARM CPU can destroy a stack machine that you build on an FPGA in your free time, but that's because one had a few orders of magnitude more time put into it than the other.
3:57:40
fouric
Do you have informed guesses (you said you've been playing around with stack architectures for a bit) on if one or the other is easier to make better given equal effort?
4:00:13
pierpa
it is not possible to build a stack machine which performs on par with a register machine, unless the register machine is handicapped in some way, or there are some constraints you are not mentioning
4:02:43
pierpa
JVM is a stack architecture. To get good performance, most of the work is the destackification of the code :)
4:03:37
Bike
having more registers makes them harder to use, since it takes more bits to address them.
4:04:09
Bike
kind of a "why don't they just make the whole plane out of the blackbox material" question there
4:04:37
fouric
But registers are limited in quantity, too - and have to be swapped out to RAM when you make function calls, right?
4:05:55
pierpa
the need to spill to memory is not different. The possible parallelism in the instruction stream is different.
4:07:23
fouric
I can sort of see that. Something something it's harder to design a CPU that reorders operations on a stack than one that does so with named registers?
4:40:34
aeth
It would probably be more productive to make a RISC-V CPU in a CL DSL (that compiles to Verilog or VHDL?) than to make a specialized, Lisp-oriented CPU.
4:43:52
pierpa
btw, a RISC-V working group about extensions useful for dynamic languages has just been created.
4:45:16
pierpa
https://groups.google.com/a/groups.riscv.org/forum/#!msg/sw-dev/esYoby-4_GU/Ootasrz8AgAJ
4:47:44
pierpa
well, it would be nice to have BOTH safe and fast code at the same time. Just saying...
4:51:04
beach
I think that is possible with existing CPUs, unless of course you deliberately make the compiler emit unsafe code. And I think that any attempt to make specialized hardware will be so much slower than existing CPUs that the performance hit will be much worse than the additional cost of just emitting safe code on existing CPUs.
4:51:58
beach
Sure, you can dream of Intel or AMD making a processor that is as fast as existing ones but specialized for Lisp. But that ain't gonna happen.
4:52:56
pierpa
specialized instructions could give array bounds checking with no performance penalty, for example
4:53:29
bjorkintosh
pierpa, how would you know you're working in lisp if there are no performance penalties??
4:54:00
beach
In my opinion, it is much better to focus on so-called "aggressive" compiler optimizations than to dream of specialized processors.
4:55:44
beach
pierpa: If only a single array reference is executed, then it is very likely that other stuff like function calls and such will dominate performance. So it is best to focus on array references in loops. And then we can often get rid of the bounds checking.
4:59:09
beach
OK, here is another "mistake". SBCL treats NIL specially, so that CAR and CDR is a valid operation on NIL without any special test for NIL. But CAR and CDR are important for performance mostly in a loop, traversing the list. And then, the compiler could emit code to check for CONSP first, which will almost always be true.
4:59:10
beach
Unfortunately, SBCL now needs two tests in each iteration, one for LISTP and one for NIL. I think it is way more productive to think about how we organize our Common Lisp systems and what we want the compiler to do.
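A rough sketch of beach's point (the function name is invented for illustration): when traversing a list, a single CONSP test per iteration can serve both as the loop-termination check and as the guarantee that CAR/CDR are applied to a cons, so no separate NIL test is needed.

```lisp
;; Sketch: one CONSP test per iteration both terminates the loop and
;; guarantees that CAR/CDR below operate on a cons, so no extra NIL
;; check is required for safety.
(defun safe-sum (list)
  (let ((sum 0)
        (rest list))
    (loop
      (if (consp rest)
          (setf sum  (+ sum (car rest))
                rest (cdr rest))
          (return sum)))))
```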
5:00:13
beach
In fact, for the entire SICL project, I am totally against unsafe code. So I try very hard to make the code both safe and fast.
5:01:01
beach
I have thus invented fast generic-function dispatch, and I have a way of compiling the sequence functions that make them very fast as well.
5:01:32
beach
That kind of work is way more productive than dreaming of influencing major hardware manufacturers.
5:04:06
pierpa
but the hardware manufacturers *are* thinking about these issues. OK, they are thinking about Java, but probably it will benefit us as a side effect.
5:04:10
beach
Another thing is multi-threading. I try to design the system so that locks can be avoided in favor of faster techniques such as CAS. If you design your system without thinking of multi-threading, it won't matter what specialized processor it runs on; it will still be slow.
5:05:47
pillton
beach: Do you really think multi threading is important? I would have thought copying a first class environment would be a better approach.
5:05:53
beach
Oh, right, cores. We are still using stop-the-world garbage collection, simply because existing systems were designed without taking multiple cores into account.
5:06:53
beach
pillton: Yes, I do think it is important to think about multi-threading. Because one part of memory management is common for all threads.
5:08:32
beach
In my opinion, these issues (and many more like them) are way more important than the raw speed of the processor. But, of course, addressing these issues takes a lot of knowledge of compiler design, GC design, synchronization, etc.
5:10:36
beach
I am thinking at least one core could be dedicated to global GC, running concurrently with the mutator threads.
5:10:58
aeth
Do CL native compilers use an assembler that's available (e.g. nasm) or do they do their own assembling? (if that's the verb?)
5:14:46
beach
Existing assemblers are not adapted to on-the-fly compilation. They assume the batch-compilation style of C-like languages, i.e. reading a source file, parsing it, emitting an object file. All that parsing and file manipulation is wasted effort.
5:31:35
beach
fouric: SBCL has a huge historical baggage in that a large part of it was written before we had things like multi-core processors, concurrent GC techniques, etc. The signals I receive from SBCL maintainers are that no such radical changes in SBCL are possible.
5:35:31
beach
That is why I think it is time to design a new Common Lisp implementation with all these new techniques in mind. And I think it is important to focus on maintainability and portability, so that we can decrease the collective effort of maintaining free Common Lisp implementations.
5:39:50
pierpa
yes, it is sad to see components which could have been written portably but they weren't.
5:40:02
stacksmith
I have a really hard time grokking SBCL's code generator and VOPs... It would be nice to have something straightforward. On the other hand, decades of accrued work should not be underestimated...
5:45:23
beach
The other thing is that for the past several decades, great progress has been made in the domain of compiler optimization techniques. We need to read up on those techniques and see which ones can be applied to Common Lisp code, and perhaps how they need to be adapted.
5:45:47
aeth
stacksmith: SBCL is the most optimized CL, but you can write something that's considerably more optimized, if you had the time and/or money
5:46:18
aeth
Optimization's not easy, but it's not like the bar is set to an impossible height at the moment.
5:47:24
stacksmith
You can coax SBCL to produce decent code, but it takes some effort. It took me a while to get used to a single function that doesn't do too much compiling to kilobytes of code...
5:49:53
beach
Isn't that the direct result of wanting safe code without putting in enough declarations?
5:52:09
aeth
I think you could get fast code without declarations for many things if you sacrificed two things: memory and compilation speed.
5:54:04
stacksmith
Have you looked at Self? The polymorphic caches seemed like an interesting idea for accelerating dynamic dispatching...
5:58:23
aeth
Oh, I forgot, the other complicating factor is that you *can* access private functions, with ::
5:59:51
vtomole
How about compiling with CL-LLVM to take advantage of its optimizing compiler? Is Clasp doing that?
6:00:33
aeth
vtomole: I believe that the standard answer is that CL is just too different from C/C++
6:03:29
aeth
k-hos: CL is a fairly bizarre language, though, with an image-oriented interaction model that is like Smalltalk (and basically no other language?), a strange way to handle errors, etc.
6:04:46
aeth
pierpa: Interestingly, CLs often came bundled with Prologs. I think all of the commercial ones still are.
6:09:40
beach
vtomole: I don't know much about LLVM, but with Cleavir and Clasp we have run into several difficulties, mainly resulting from the combination of nested functions and threads made possible in Common Lisp. This feature makes it hard to obtain a precise control flow, and thereby a precise data flow, and those are required for many optimizations.
6:13:19
beach
vtomole: Oh, and there is another interesting mismatch. Clasp uses C++ exceptions to implement non-local control transfers, which are common in Common Lisp. But LLVM assumes that exceptions are infrequent, so they have not been sufficiently optimized. This fact makes them almost useless for implementing Common Lisp.
7:40:41
jasom
beach: it's more than "insufficiently optimized"; they are specifically optimized in favor of low overhead when not being thrown.
7:41:23
aeth
If I wanted to see how far I could get writing Lisp for x86-64, should I start with writing an assembler, or should I use an existing assembler and only write an assembler when needed?
7:45:01
aeth
Shinmera: This wouldn't be for actual users. This will probably be one of those projects where I'll implement 20% to 50%, learn what I wanted to learn, and leave it incomplete and never upload it to the Internet.
7:46:13
aeth
The most well defined parts of a program are the most interesting parts because they can be rewritten indefinitely.
7:46:15
dtornabene
hey all, I'm curious about macrolet, I've read some code using it, but I guess I don't understand a use case specifically that calls for something like that
7:47:38
Shinmera
dtornabene: Same as flet, keeping things in a local scope because they wouldn't make sense outside of that scope?
7:49:13
aeth
You almost never do, but there's probably a case when you need it. A lot of the standard is like that.
7:49:22
Shinmera
Avoiding the pollution of the global namespace, keeping things semantically tied together to inform the reader that it's, well, local
7:49:22
dtornabene
local functions I get, not so much local macros, I guess I haven't travelled far enough down the path yet
7:49:39
jackdaniel
dtornabene: because you may need it only locally (i.e. for one use with a dozen constructs, to avoid code duplication)
7:50:07
jackdaniel
for instance: you define functions for rows and columns and they are different only in x y argument order
7:50:08
Shinmera
I also sometimes allow myself "pleasures" in local macros that I don't in global ones, such as not using gensyms or making them anaphoric in some sense.
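A minimal sketch of jackdaniel's rows/columns example (the names `define-ref`, `row-ref`, and `col-ref` are invented here): a local macro stamps out two definitions that differ only in argument order, without leaking a helper macro into the global namespace.

```lisp
;; The two generated functions differ only in the order of their
;; index arguments; the macro exists only for these two expansions.
(macrolet ((define-ref (name x y)
             `(defun ,name (grid ,x ,y)
                (aref grid row col))))
  (define-ref row-ref row col)   ; takes (grid row col)
  (define-ref col-ref col row))  ; takes (grid col row), same body
```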
7:53:34
dtornabene
i think what took me aback was the original cause for this: finding a top-level definition using macrolet in the sbcl sources
7:54:03
Shinmera
I also use macrolets if I just need to do a trivial expansion like here: https://github.com/Shinmera/crypto-shortcuts/blob/master/digests.lisp#L24
7:55:23
dtornabene
https://github.com/sbcl/sbcl/blob/9ee5e0873f5c2e0cbffef4c701222dc902cbdc3c/src/assembly/x86/arith.lisp#L16
7:56:41
Shinmera
Grepping through my sources I found one of my favourite tricks again, ha ha. https://github.com/Shirakumo/trial/blob/master/toolkit.lisp#L380
7:58:22
Shinmera
dtornabene: I mean, that's just me, but I feel like the define-generic-arith-routine could easily be a standard macro definition too.
7:59:14
dtornabene
hahahaha, sweet. that makes me feel better. even though I had no idea macrolet existed before tonight and am glad to have learned about it
8:00:48
dtornabene
which is weird, because I've frayed my copy of seibels book with use. I must not have read that chapter that close
8:03:07
jackdaniel
truth be told it is not a commonly used operator and one could live happily without it. that said, it comes in handy at times
8:05:29
jackdaniel
you may check out also symbol-macrolet. you could use it for something like (symbol-macrolet ((my-hash (gethash :foo *ht*))) (setf my-hash 10) my-hash)
8:26:09
stacksmith
I think both are quite useful. Without symbol-macrolet you couldn't have with-slots and such. And macrolet becomes very handy for serious macro work.
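To illustrate stacksmith's point, a simplified, hypothetical version of WITH-SLOTS can be built on SYMBOL-MACROLET (the real macro also handles (var slot-name) pairs; `my-with-slots` is an invented name):

```lisp
;; Each slot name becomes a symbol macro expanding to a SLOT-VALUE
;; form, so plain reads and SETFs on the name reach the slot.
(defmacro my-with-slots ((&rest slot-names) instance &body body)
  (let ((obj (gensym "OBJ")))       ; evaluate the instance form once
    `(let ((,obj ,instance))
       (symbol-macrolet
           ,(loop for slot in slot-names
                  collect `(,slot (slot-value ,obj ',slot)))
         ,@body))))
```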
8:56:23
stacksmith
Is name-char only capable of names like "LATIN_CAPITAL_LETTER_A"? My SBCL doesn't like "A" or 'A
9:21:11
solene
hello, I'm currently thinking about packaging lisp libraries for an operating system but I wonder if any software would benefit from this. I only know stumpwm as a lisp software. Does someone know some lisp software that would be interesting to package in an OS ?
9:23:33
solene
i'm looking for lisp software which could benefit from packaged libraries, to become available as packages. I don't know if I'm explaining it well :D
9:24:08
Shinmera-
In my opinion it's wrong to package language libraries with the OS. We already have systems in place that do this, specifically for the language, in a better way than the OS ever could.
9:25:14
Shinmera
Shipping language libraries with the OS only leads to the following: an incomplete, outdated, unmaintained set of libraries that will confuse newcomers.
9:28:50
flip214
Shinmera: I would actually prefer if there was an OS-package-management compatible channel (so, eg. for Debian something like "testing" or "unstable"),
9:29:30
aeth
The only reason to put a library in an OS package manager is if an application that's also in the OS package manager uses those libraries. e.g. If your OS ships with stumpwm, and if stumpwm has any dependencies, then that's where it makes sense.
9:29:31
flip214
and if that kept older versions (like archive.debian.org) as well, it would be much cleaner to install such software.
9:31:11
flip214
ISTR that hunchentoot (or cl-who or ...?) had a function that returns an URI with the current parameters, but with a few overrides?
9:31:17
Shinmera
The problem is Linux has too many distros. It is not feasible to maintain your package in every damn distro out there. And people are gonna run to /you/ for problems with their outdated OS packages.
9:31:47
aeth
I meant that if your distro ships stumpwm in its package manager, then it makes sense for your OS to also ship CLX, a dependency of stumpwm. In that case, the distro's CLX is not for you, the CL programmer. It's for users of the distro's (outdated, but stable) stumpwm.
9:32:17
solene
Shinmera, I understand your opinion, but I think it has pros and cons to ship language libs in the system. At least if the package maintainer does it well, you make sure the programs will run well because you tested them
9:32:53
jackdaniel
actually some distributions maintain clx package, I have a request from time to time to make a release, because they want to update
9:32:59
Shinmera
I for one would be vehemently opposed to having any of my libraries in any of the linux distro package managers.
9:34:19
solene
at least quicklisp runs as a user and is simple. Installing perl cpan modules in userland is a bit cumbersome in comparison
9:34:59
aeth
Oh, Debian. That would frighten me as a library author. People using code I wrote many years ago?!
9:35:41
Shinmera
We already have this problem: debian ships wildly outdated versions of libraries, and every so often someone thinks they should use them and then stumbles in here wondering why nothing works.
9:37:25
Shinmera
Ideally the lisp application would be shipped as a binary anyway, in which case it already has all the libraries in it. So the only thing needing access to libraries is the one making the package
9:37:58
aeth
I don't like it when some distros try to package literally every library on some language's package manager. Libraries from a language package manager (excluding C/C++ if they ever get one) should be strictly for dependencies for applications shipped in the distro package manager. But... there's no real way around it, or else they can't add that application at all.
9:38:35
solene
Shinmera, you can't ship a binary into the package, you have to put the build recipe to create the package, so you need others packages for the dependencies
9:40:27
solene
currently it's what I've done to package stumpwm, the build system downloads the few libraries to compile the stumpwm binary and it gets packaged
9:42:06
Shinmera
Not necessarily. After all, the binary will include a full Lisp, so you can just ask the user to download the sources somewhere and then direct stump to upgrade using those.
9:43:46
Shinmera
In other news, Didier and I are testing the ELS registration in live mode now, so registration should be open today or tomorrow!
9:43:53
aeth
Shinmera: The binary won't necessarily include a full Lisp. Eventually, tree shakers will come.
9:49:22
Shinmera
Plus even with a tree shaker, anything using CLOS will likely pull in the full compiler anyway, so
9:59:26
beach
I personally think we should have a Common Lisp implementation with most of its system code, including the compiler, in a shared library. That way, we have the full functionality of the system, without any complaints about the size of application binaries.
10:03:41
jackdaniel
and all compiled systems (i.e. alexandria.fas) are in fact shared objects by default
10:41:36
jmercouris
jackdaniel: I saw the post you made on /r/lisp about thrift, have you tried it perchance?
10:46:38
jmercouris
jackdaniel: so, if my understanding of the project is correct you can call functions/pass data back and forth between any of the supported languages?
10:47:25
jmercouris
jackdaniel: and the calling of these functions must be two separate processes right?
10:48:15
jackdaniel
you can't call a service method if there is no entity which provides the service
10:49:31
jackdaniel
sure, good luck. when we fix the missing bits for CCL and ECL I will submit it to Quicklisp
10:50:39
jmercouris
it's definitely going to be a much lower-friction approach than me effectively reinventing a high-level version of that in my frontends
10:51:48
jmercouris
jackdaniel: can you explain why someone would choose a different protocol in thrift?
10:52:08
jmercouris
jackdaniel: when I say protocol, I mean this: https://thrift.apache.org/docs/concepts#protocol
10:52:34
jmercouris
jackdaniel: as in, why should the user care what the "transport language" is (xml, json, plain text)?
10:53:05
jackdaniel
transport language is important for things like data transfer performance (if you call services over the network)
10:55:31
jackdaniel
I think you'll be better served by the actual implementation and Thrift documentation than my faulty memory (and not-so-perfect English)
12:42:32
hajovonta
like (multiple-value-bind (a b) (list 1 2) (list a b)) but it doesn't work because list only returns 1 value and the other will be nil.
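For the record, the list case wants DESTRUCTURING-BIND; MULTIPLE-VALUE-BIND is for forms that return multiple values via VALUES (as FLOOR does):

```lisp
;; FLOOR returns two values: quotient and remainder.
(multiple-value-bind (q r) (floor 7 2)
  (list q r))            ; => (3 1)

;; A list is a single value; DESTRUCTURING-BIND takes it apart.
(destructuring-bind (a b) (list 1 2)
  (list a b))            ; => (1 2)
```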
12:48:45
jmercouris
what if the values in the list are like this (list "key1" "value1" "key2" "value2") what's the best way to turn that into an easy to access data structure like a hash-table?
12:49:26
Bicyclidine
(loop with result = (make-hash-table :test #'equal) for (key value) on list by #'cddr do (setf (gethash key result) value) finally (return result))
12:50:14
jmercouris
Bicyclidine: I was hoping for some built in, a library I'm using returns json results in that manner
12:51:02
jmercouris
Bicyclidine: Doesn't matter, just means I'd have to make a "utility" function, I wouldn't like embedding that snippet into my codebase
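Worth noting: Alexandria already ships that utility as `plist-hash-table`, so no hand-rolled snippet is needed (extra arguments are passed through to MAKE-HASH-TABLE):

```lisp
;; Build an EQUAL hash table from a key/value plist.
(alexandria:plist-hash-table
 (list "key1" "value1" "key2" "value2")
 :test #'equal)
```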
12:51:55
flip214
jmercouris: but if there are not that many entries, a simple list might be faster to process.
12:52:10
flip214
jmercouris: or you could tell your json library to return a hash-table in the first place!
12:52:44
jmercouris
What I currently have is like a (position "key" list) and then do nth +1 on the position of the key I'm looking for
12:53:38
jmercouris
flip214: Not too many, so it is okay, I think it'd be smarter to change the json lib return though, you're right
12:53:54
jmercouris
there is (yason:parse stream :object-as :plist), I'm sure there is :hash-table as well or something
12:55:31
jmercouris
e.g. (yason:parse ...) and my cursor is there, can I jump to the docstring for that function somehow?
12:56:55
jmercouris
sigjuice: That is quite useful, that one I did know about though, imagine something more like a help buffer
12:59:08
jmercouris
yeah, I wish it included a list of "keyword arguments" ... which "can be used to override the parser settings"
12:59:41
jmercouris
so maybe it does make more sense to just jump to source instead of opening a help buffer with the docstring
13:00:13
jmercouris
yeah, I see the following: (check-type *parse-object-as* (member :hash-table :alist :plist) as possibilities
13:00:59
sigjuice
I didn't read far enough to see that %parse was a thing. I saw *parse-object-as* and immediately mmmdotted
13:03:16
sigjuice
also a quick experiment: CL-USER> (with-input-from-string (s "{}") (yason:parse s)) => #<HASH-TABLE :TEST EQUAL :COUNT 0 {1008D960A3}>