freenode/#sicl - IRC Chatlog
5:47:06
no-defun-allowed
That gives me an idea. I wrote an essay arguing that static programming languages and other forms of over-optimisation are probably a waste of time (not news to us Lispers, but I got bored of the static programmers on a website I like reading), but I never figured out how much of a waste it is.
6:02:46
no-defun-allowed
So, in this problem, there are two values I'd like to solve for: how many times does the program have to be run before the users' wasted time exceeds the time the programmer saved by not optimising, and at what point does solving the problem with more hardware become more expensive than more programmer time?
6:04:44
no-defun-allowed
The second is harder to solve, as the ratio between processing power and price usually isn't constant.
6:19:58
beach
Sometimes, figures like that can be reduced to a single comparison that a decision maker must decide upon using experience and "gut feeling".
6:20:48
no-defun-allowed
I had a go at making some estimates. For the first problem, if it takes 100 hours to write the program, then another 100 to "optimise" it, making it 25% faster, and users' sessions are 30 minutes long on average, then it would take 800 sessions for the optimisation to pay off in time.
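The arithmetic behind that estimate, as a minimal Python sketch (the figures are the hypothetical ones from the message, not measurements, and "25% faster" is read as saving 25% of each session):

```python
# Break-even point: how many user sessions must run before the time
# saved by a 25% speed-up repays 100 hours of extra optimisation work.
optimisation_hours = 100        # extra programmer time spent optimising
session_minutes = 30            # average user session length
speedup_fraction = 0.25         # fraction of each session saved

minutes_invested = optimisation_hours * 60
minutes_saved_per_session = session_minutes * speedup_fraction
break_even_sessions = minutes_invested / minutes_saved_per_session
print(break_even_sessions)  # → 800.0
```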
6:22:17
no-defun-allowed
For the second, I arbitrarily chose the difference between the Ryzen 1600 and 1700 processors, which are about 25% apart in performance and $40 apart in price. If the programmer is paid $30 an hour, then you'd need more than 75 users per programmer before more money is "wasted" on hardware than on programmer time.
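The same kind of sketch for the hardware-versus-programmer comparison (the prices and wage are the arbitrary figures from the message):

```python
# Break-even user count: beyond this many users, buying every user
# the faster CPU costs more than paying the programmer to optimise.
programmer_hours = 100      # optimisation effort from the earlier estimate
hourly_rate = 30            # dollars per programmer hour
hardware_premium = 40       # Ryzen 1700 over Ryzen 1600, dollars per user

programmer_cost = programmer_hours * hourly_rate
break_even_users = programmer_cost / hardware_premium
print(break_even_users)  # → 75.0
```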
6:23:58
no-defun-allowed
Though, I suppose that ratio isn't as large as I would have liked for an argument. If such a thing happened with SICL, then you would need fewer than 1,350 users for it to be a waste of time.
6:27:11
beach
I forget the documented ratios of maintenance to initial code, but as I recall, maintenance dominates by an order of magnitude.
6:27:44
beach
This is my endless argument for SICL. I want to avoid special-purpose code as much as possible so as to cut down on maintenance.
8:44:47
no-defun-allowed
Oh, I remember https://xkcd.com/1205/ now, which is a relevant table about how much time you should spend on saving time.
15:59:33
jcowan
beach: In Boston, fixing all leaky pipes would raise the available amount of water by 50%. But who cares? Boston is not short of water, and it is a completely renewable resource there.
16:00:39
jcowan
A possible solution to the idea of removable disks is to use a hybrid 64-bit/128-bit addressing system and give every disk in the world a 64-bit unique id.
16:01:24
jcowan
That way you keep your uniform address space and make it big enough (with a locally reduced version) until we are contacted by the Galactic Empire.
16:02:22
beach
Still, you can't just remove a disk, or you will have pointers pointing to a place that is no longer there.
16:04:04
jcowan
but on disk the object pointers were 32 bits. When an object was needed, it was swapped in and assigned a currently-unused 16-bit id.
16:06:13
beach
Paul Wilson is (or was, not sure what he is doing now) a world expert on memory allocation and automatic memory management.
16:06:15
jcowan
The system tried to guess whether the object pointers in the newly swapped-in object should be chased, or whether they should be replaced by 0 (invalid id), to be figured out later.
16:07:37
drmeister
We've been struggling to debug inlining in Cleavir - I have a suggestion and I'd like others' thoughts on it.
16:08:20
drmeister
I think a text representation of HIR would be extremely useful. Something that linearizes HIR into a textual representation that is as stable under inlining and optimization as possible.
16:08:28
jcowan
Pointer swizzling is a simpler version of this concept. Here's the paper: http://web.cecs.pdx.edu/~black/OOP/papers/Kaehler%20&%20Krasner-LOOM.pdf
16:09:00
jcowan
The invalid-id concept I mentioned is called lambda, for reasons unexplained. Try to ignore the collision in your mental hash tables this will cause.
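A toy sketch of the scheme being described, assuming nothing about LOOM beyond what is said here: wide ids on disk, a small in-core id assigned at swap-in, and 0 ("lambda") marking a reference that has not been resolved yet. All names are illustrative:

```python
LAMBDA = 0  # invalid in-core id: a reference not yet resolved

class ObjectTable:
    """Maps wide on-disk ids to small in-core ids, LOOM-style."""

    def __init__(self):
        self.next_id = 1           # in-core ids are small integers
        self.in_core = {}          # in-core id -> object
        self.disk_to_core = {}     # wide disk id -> in-core id

    def swap_in(self, disk_id, disk_object):
        """Bring an object in core; its references start as LAMBDA,
        to be swizzled to in-core ids lazily, on first use."""
        if disk_id in self.disk_to_core:
            return self.disk_to_core[disk_id]
        core_id = self.next_id
        self.next_id += 1
        self.in_core[core_id] = {
            "data": disk_object["data"],
            "refs": [LAMBDA] * len(disk_object["refs"]),
        }
        self.disk_to_core[disk_id] = core_id
        return core_id
```

Swapping in the same disk id twice hands back the same in-core id, and every outgoing reference starts out as the invalid id until it is chased.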
16:09:29
drmeister
Something where you could (1) generate a text dump of HIR, (2) perform one step of inlining or optimization, (3) generate another text dump, and (4) generate a diff of the (1) and (3) outputs and see only the essential changes made by step (2).
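Once HIR can be dumped one instruction per line, steps (1) through (4) reduce to ordinary text diffing. A sketch using Python's difflib, with made-up dump strings standing in for real HIR output:

```python
import difflib

# Hypothetical text dumps of HIR before and after one inlining step.
before = """block0:
  enter
  jump block1
block1:
  return
"""

after = """block0:
  enter
  add x y        ; inlined instruction
  jump block1
block1:
  return
"""

# unified_diff shows only the lines the inlining step changed.
diff = difflib.unified_diff(before.splitlines(), after.splitlines(),
                            "before-inline", "after-inline", lineterm="")
print("\n".join(diff))
```

With stable labels, only the inserted instruction shows up as a changed line.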
16:13:27
beach
It sounds trivial, but I can't guess whether a normal diff would actually be informative.
16:13:37
drmeister
I would guess we would gather together basic blocks, give them labels that wouldn't change every time a text representation is generated, and order the basic blocks in a systematic top-to-bottom (text) / left-to-right (HIR) manner.
16:14:52
beach
If you think it will be informative, then I think you just have to linearize the HIR code in some systematic manner.
16:15:20
jcowan
Primarily, I'd say, because diff is not hierarchical: it thinks in terms of non-semantic units called "lines".
16:15:31
drmeister
Rather, what would be a better way to see the differences between two HIR graphs?
16:17:44
beach
drmeister: Here is what I think. I think it is trivial to write a HIR-to-text module, so rather than guessing whether diff would be useful on such a thing, it is probably easier just to try it out.
16:18:56
jcowan
For example, when running regressions on my HTML->XML converter, I had it generate a format called PYX (also known as "nsgmls output format") which represents each open-tag, attribute-value pair, text content, close tag, etc. as a separate line so that ordinary diff will work on it.
16:19:27
jcowan
S-expressions could be linearized (in the literal sense of "converted to a sequence of lines") in a similar way.
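A minimal sketch of that kind of linearization, with one line per open-paren, atom, and close-paren so that line-oriented diff applies (the output format here is improvised, not PYX itself):

```python
def linearize(sexp, depth=0):
    """Yield one line per structural event: '(d' opens a list at
    depth d, ')d' closes it, and 'A x' is the atom x."""
    if isinstance(sexp, list):
        yield "(" + str(depth)
        for child in sexp:
            yield from linearize(child, depth + 1)
        yield ")" + str(depth)
    else:
        yield "A " + str(sexp)

print("\n".join(linearize(["defun", "f", ["x"], ["+", "x", 1]])))
```

Because every node lands on its own line, inserting or removing a subexpression perturbs only its own lines in a diff.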
16:21:12
jcowan
It may be that HIR objects are simple enough, apart from their references to other HIR objects, that they can be (in this sense) linearized by a special-purpose routine, I don't know.
16:21:39
drmeister
If we assign basic-block labels based on the depth from the first instruction, then the insertion of a branch will completely change all of the labels.
16:24:57
beach
You want the arc that jumps back to the beginning to be represented as a jump from a position further down the page to further up the page.
16:27:24
beach
Otherwise, if you have cycles in the graph, it can be quite arbitrary which arcs go forward and which ones go backward.
16:28:10
beach
In Common Lisp, most graphs will have "natural loops", so you can compute such a natural ordering.
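One systematic ordering with that property is reverse postorder of a depth-first search: every arc except a loop's back edge then points down the page. A sketch over a toy CFG given as adjacency lists (the node names are made up):

```python
def reverse_postorder(cfg, entry):
    """Order nodes so that, for natural loops, only back edges
    point from later in the listing to earlier."""
    seen, order = set(), []

    def dfs(node):
        seen.add(node)
        for succ in cfg[node]:
            if succ not in seen:
                dfs(succ)
        order.append(node)  # postorder: node after all successors

    dfs(entry)
    return order[::-1]      # reversing gives the forward ordering

# entry -> loop header -> body -> header (back edge); header -> exit
cfg = {"entry": ["header"], "header": ["exit", "body"],
       "body": ["header"], "exit": []}
print(reverse_postorder(cfg, "entry"))  # → ['entry', 'header', 'body', 'exit']
```

The back edge body → header is the only arc that jumps from further down the page to further up, which matches the layout beach describes.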
16:46:22
beach
You could use the "longest path" algorithm that I use in the HIR visualizer. That one is pretty good about getting a consistent layout.