freenode/#sicl - IRC Chatlog
3:51:18
fiddlerwoaroof
There are a couple areas that are occasionally annoying (floating point numbers not lining up nicely with IEEE-754, iirc)
3:51:31
no-defun-allowed
I have a small todo list of things to do to Common Lisp, but they're all implementation details really.
3:51:57
fiddlerwoaroof
But, the language as specified doesn't really need to be re-specified because the most common issues can be solved as libraries
3:52:23
fiddlerwoaroof
And, my attitude towards standardized features is roughly "standards are where features go to die"
3:53:31
fiddlerwoaroof
The thing I like about SICL, for example, is that it's making standardized features user-extensible (eclector's a good example here)
3:54:15
no-defun-allowed
Specifically, those would be "green threads for async IO without hurting my head on async code" and "vectorising compiler", which are literally supposed to avoid having to modify client code.
3:54:33
fiddlerwoaroof
Anyways, I used to be a python programmer and the Python 2->3 transition has permanently soured my opinion of attempts to change language standards
3:57:23
no-defun-allowed
Oh, that reminds me somehow, Kulukundis's hash table avoids the problem of having excess tombstones from removing mappings...somehow. You only add tombstones if a group is full of mappings, and only empty mappings otherwise.
3:59:39
no-defun-allowed
It's still a bit beyond me, but (assuming an evenly distributed hash function) you'd need a (n-1)/n load for a hash table with n-sized groups to see any tombstones, which is pretty large even at n=8
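[A sketch of the deletion rule being described, for reference. All names here are illustrative, not from any particular library; the rule is the one used by Swiss tables (Kulukundis's design).]

```lisp
;; Sketch of the Swiss-table deletion rule discussed above.
;; A "group" is a fixed-size run of metadata bytes (n = 8 or 16).
;; On deletion, a slot becomes a tombstone (+deleted+) only when its
;; group is completely full; otherwise, a probe for any key would
;; have stopped at an empty slot in this group anyway, so marking
;; the slot +empty+ is safe and leaves no tombstone behind.
(defconstant +empty+   -1)
(defconstant +deleted+ -2)

(defun delete-slot (metadata index group-size)
  "Illustrative only: mark METADATA[INDEX] as deleted or empty."
  (let* ((group-start (* (floor index group-size) group-size))
         (group-full-p (loop for i from group-start
                               below (+ group-start group-size)
                             never (eql (aref metadata i) +empty+))))
    (setf (aref metadata index)
          (if group-full-p +deleted+ +empty+))))
```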
4:00:35
Bike
i think there are some important improvements that would be more appropriate as a standard than a library
4:00:51
Bike
but all these CL 3.0 or whatever things like to focus on... i didn't even read this one. automatic slot names? who cares
4:01:29
fiddlerwoaroof
That's the other thing, people's idea of improving lisp is "making it more like Python/JS/Ruby"
4:02:56
Bike
i mean like, continuations, sure. whether they're desirable is of course arguable, but that's something you'd need actual implementation support for. you can put that in your updated standard
4:03:06
no-defun-allowed
Re-re-reading that document, the features aren't supposed to make sense together, so my comment on stack-allocation and continuations doesn't really hold.
4:04:55
no-defun-allowed
But if CL went that way, I would begin learning Self (at a faster rate than I am now). At least no one makes dumb READMEs about how to update Smalltalk for the "modern" era.
4:05:01
fiddlerwoaroof
Bike: yeah, although I wish people who wanted that sort of thing focused more on writing languages that compile to CL
4:06:03
fiddlerwoaroof
I recall a long-time smalltalker grumbling on Twitter about Pharo for this reason
4:08:22
no-defun-allowed
I've used Squeak more than I've used Pharo, but surely they consider how to implement the stuff they add in efficient and readable ways.
4:12:31
fe[nl]ix
fiddlerwoaroof: the MOP is full of holes, the whole I/O library is a joke by modern standards, the compiler needs support for first-class compilation environments, there's no explicit memory model, etc...
4:15:19
fiddlerwoaroof
It's more that the value of the improvements has to be proportionate to the cost, especially for a relatively small language community
4:16:06
fiddlerwoaroof
(especially when the improvements are backwards-incompatible and break existing code)
4:22:37
Bike
in other news, i think ansi tests for subtypep may be insufficiently strict. i have (subtypep '(cons (satisfies foo)) 'nil) => NIL T and it doesn't complain
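[A sketch of the issue as I read it: with an undecidable SATISFIES type, a definite second value looks too strong, because the type could denote the empty type.]

```lisp
;; If FOO never returns true, (satisfies foo) is the empty type, and
;; so is (cons (satisfies foo)); in that case it *is* a subtype of
;; NIL, so (values nil t) ("definitely not a subtype") is wrong.
;; Since SUBTYPEP cannot decide this in general, the conforming
;; answer for SATISFIES types is (values nil nil), "don't know".
(defun foo (x) (declare (ignore x)) nil)

(subtypep '(cons (satisfies foo)) 'nil) ; conforming: NIL, NIL
```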
4:36:47
Bike
i think ansi tests also didn't cover the bug i reported earlier, though i noticed the problem due to an ansi test i think
4:52:28
beach
fe[nl]ix has some very good points, and they are totally different from the union of arbitrary features that aun has collected on that site.
4:55:03
beach
In fact, I think when a committee designs a language, they are probably using the intersection of what everybody wants rather than the union.
4:56:47
beach
But in general, it's a collection of totally arbitrary stuff that aun found on the net.
4:58:12
fe[nl]ix
but that's because, with 30 years of language advancements at our disposal, CL's warts are all over the place
5:00:02
fe[nl]ix
the lack of extensibility of many parts of CL is what allows the runtime to be relatively lean (by today's standards) and allows compilers to optimize, because behaviour is hard-coded
5:00:31
fe[nl]ix
otherwise you end up with too much late binding everywhere, which tends to be quite slow
5:01:31
fe[nl]ix
I feel CL is a good sweet spot between ML and Smalltalk, i.e. between lots of early binding and lots of late binding
5:02:39
fe[nl]ix
OTOH, I *think* I know how to make a smarter language, but I don't have the funds to sustain myself while I spend 5+ years implementing it
5:17:26
beach
fe[nl]ix: Yes, there are a few people with enough experience and knowledge to create an improved standard. And, as you point out, it would take a lot of money to create such a thing. And the work would be very different from collecting arbitrary suggestions from people on the web who don't know anything about language design, and publishing the union of those suggestions.
5:19:45
Bike
i whipped up a caching system for ctype and i'm realizing it could really use a memory model.
6:26:16
pjb
beach: but wouldn't it be possible to develop such a standard using the resources available nowadays, including human resources? There are a lot more programmers available to participate, to implement and test experimental language features. Perhaps we could have a look at the process used for python and its PEP (Python Enhancement Proposals), similar to the CLHS ISSUES I guess, but more numerous and seemingly with more breadth.
7:29:27
beach
I think the latter point is the most important one. Like I have said several times, the point is to specify what most implementations already do.
7:41:43
no-defun-allowed
I thought Bike had a file on GitHub with the document, but I can't find it.
8:43:42
no-defun-allowed
My assumption about SIMD, 128-bit packed values and Lisp was wrong; both the SIMD libraries I found (cl-simd and sb-simd) are able to load and store octowords consisting of multiple elements (16 in the case of an (unsigned-byte 8) metadata vector).
8:57:42
no-defun-allowed
Rather, instead of having (unsigned-byte 128) elements, we load 16 (unsigned-byte 8) elements at a time. And that's how it also works in C with intrinsics, as far as I can tell.
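[A conceptual sketch of that lanewise view. Every name here (U8.16-AREF, U8.16=, U8.16-MOVEMASK, U8.16-BROADCAST) is hypothetical, written in the style of sb-simd as analogues of the SSE2 intrinsics _mm_loadu_si128, _mm_cmpeq_epi8, _mm_movemask_epi8 and _mm_set1_epi8; the actual APIs may differ.]

```lisp
;; Hypothetical, sb-simd-style names: a 128-bit load does not read
;; one (unsigned-byte 128), it reads 16 (unsigned-byte 8) lanes.
(defun group-matches (metadata group-start byte)
  "Return a 16-bit mask with bit i set when lane i equals BYTE."
  (u8.16-movemask                            ; pack lane tests into bits
   (u8.16= (u8.16-aref metadata group-start) ; load 16 octets at once
           (u8.16-broadcast byte))))         ; splat BYTE into all 16 lanes
```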
9:17:29
heisig
no-defun-allowed: You had questions about SIMD? Or feature requests? (I still haven't managed to finish any of my SIMD libraries, sorry about that)
9:18:06
no-defun-allowed
Not really, thanks, I was just experimenting with cl-simd today (after hacking it and SBCL sufficiently to get them to appear to work).
10:42:16
no-defun-allowed
The ordering and observability of reads and writes between multiple threads, no?
13:23:40
Bike
https://gist.github.com/Bike/a89cbfda64ace273b12eed8675dda632 here's what i wrote before
13:27:50
Bike
i think most of the chapter definition stuff is ok, but i was thinking of rewriting the operators. maybe remove "NN.5.2" too
13:33:03
Bike
pretty much the only comment i got (that i remember) was that atomic places making read-modify-write operations like push work atomically was too magical
13:34:32
beach
I should read the entire thing from the beginning. This is not something I have given enough thought.
13:36:52
Bike
oh, and also i decided that CAS should use EQL as its comparator even if that takes more implementation work. that's important. the existing CAS operators are a little more confusing about it
13:43:29
beach
Is that a standard use of "across"? To me it sounds like there is some communication.
13:46:02
heisig
Bike: If an implementation can make EQL work as a test for CAS, wouldn't it also be possible to make arbitrary test functions work?
13:47:23
Bike
I mean, to make it work when the hardware only gives you EQ, you kind of have to implement CAS as a loop
13:51:04
heisig
It would also be nice to have operations for store-load, load-load, store-store, and load-store barriers.
13:53:39
Bike
(cas place old new :test test) => `(loop (let ((val (atomic-read ,place))) (if (funcall ,test val ,old) (let ((cas (cas-eq ,place val ,new))) (when (eq cas val) (return cas))) (return val))))
13:56:01
heisig
Looks good. Ideally, the compiler would also optimize the case where the test is EQ, or where old is an immediate. That would be really convenient.
13:56:42
Bike
yeah. i'm not sure if it would have to optimize that for the macroexpansion or what though
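[A readable version of the expansion being discussed. ATOMIC-READ and CAS-EQ are hypothetical implementation-provided primitives (an atomic load and a hardware EQ-based compare-and-swap); CAS* is likewise just a sketch name.]

```lisp
;; CAS with an arbitrary :TEST, built from an EQ-only hardware CAS:
;; read the current value, apply the test, and retry the CAS-EQ if
;; another thread wrote between our read and our CAS attempt.
;; (A real version would evaluate OLD, NEW and TEST exactly once.)
(defmacro cas* (place old new &key (test '#'eql))
  (let ((val (gensym "VAL"))
        (result (gensym "RESULT")))
    `(loop (let ((,val (atomic-read ,place)))
             (if (funcall ,test ,val ,old)
                 (let ((,result (cas-eq ,place ,val ,new)))
                   (when (eq ,result ,val) ; CAS-EQ succeeded
                     (return ,result)))
                 (return ,val))))))       ; test failed: return current value
```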
14:21:30
beach
Since I have not studied memory models in the past, there are things that seem ambiguous to me. For example, is SYNCHRONIZES-WITH symmetric or not? I.e., is ACQUIRE-LOCK also a synchronization operation?
14:25:10
Bike
well, no, wait, no. it's not symmetric. acquire operations are synchronization operations because they are synchronized-with
14:29:05
beach
Shouldn't it be specified that the synchronization operation happens-before (or something like that) the operation that is being synchronized with?
14:29:21
Bike
i mean, certainly i don't mind clarifying things. what you're looking at is the product of me staring at java's JSR 133, some C++ standards, and a couple papers over several days, so i'd like people to not have to experience that
14:29:41
Bike
"An evaluation A happens-before another evaluation B provided that either A is sequenced-before B, A synchronizes-with B,"
14:31:04
Bike
by the way, when i write in italics, that would ideally link to a glossary entry like the CLHS does, which could further clarify
14:33:29
Bike
looking at this again i should definitely put in a more intuition-based explanation before launching into the more formal definitions
14:42:37
beach
I don't understand the paragraph that starts with "All synchronization operations in a program take place in a consistent total order"
14:44:55
Bike
yeah, i definitely need to start with a better explanation. let me try to sketch something out quickly
14:45:51
Bike
"operations" here doesn't mean, like, in the code. If you have a (with-lock ...) somewhere, it can be evaluated zero or more times, and each time it's evaluated constitutes one acquire operation.
14:47:29
Bike
Like intuitively, if you're explaining what with-lock does, what you're saying is that the evaluation of any with-lock form with a given lock does NOT take place simultaneously with any other evaluation of a with-lock form with that lock
14:47:53
Bike
or phrased differently, that any with-lock form for that lock definitely takes place either before or after any other such evaluation
14:48:36
Bike
yeah, sorry. this is all very hard to understand and I think I kind of retreated to formalism.
14:50:42
Bike
"or phrased differently, that any _evaluation of a_ with-lock form for that lock definitely takes place either before or after any other such evaluation"
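[The intuition above in code form. A sketch using the bordeaux-threads library (assumed loaded, e.g. via Quicklisp); not taken from the document itself.]

```lisp
;; Each *evaluation* of WITH-LOCK-HELD on the same lock is one
;; acquire operation, and all such evaluations are totally ordered:
;; no two of them take place simultaneously. So the two increment
;; loops below can never interleave mid-INCF, and *COUNTER* reliably
;; ends at 200000.
(defvar *lock* (bt:make-lock))
(defvar *counter* 0)

(defun bump ()
  (dotimes (i 100000)
    (bt:with-lock-held (*lock*)
      (incf *counter*))))

(let ((a (bt:make-thread #'bump))
      (b (bt:make-thread #'bump)))
  (bt:join-thread a)
  (bt:join-thread b)
  *counter*) ; => 200000
```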
14:54:32
Bike
and if i needed to program with concurrency more i'd continue being confused as my programs didn't work, i'm sure
14:55:09
beach
By the way, the reason I keep forgetting to read stuff that I started to read is that I count on the documents staying open, but my computer crashes regularly, and then I can't remember that I had them open.
14:55:53
beach
Bike: I think this document would be a very valuable addition to the standard. Totally in the spirit of WSCL.
14:57:54
beach
All the others basically said "you tell me what components you want", and that would be a lot of work for me.
15:04:50
beach
But if I can get a good parts list, I can give it to any assembler in the neighborhood.
15:07:56
beach
I want to be able to drive my 3 monitors, but I am not doing gaming, so it doesn't have to be high performance.
15:09:48
shka_
and yes, that integrated GPU can handle 3 displays, i ran such a setup at the office for a while
15:13:11
aeth
Well, lots of high-end CPUs don't have integrated GPUs anymore, so plenty of people "need" a GPU even if they're not doing gaming or machine learning.
15:21:30
aeth
Last I checked, AMD's 12-core parts seem to be the sweet spot in terms of good per-core performance and having plenty of cores for things that thread well, while not being too overpriced.
15:22:18
aeth
Depending on your high end, since they do go up to 64 cores, and maybe more in a few years.