freenode/#lisp - IRC Chatlog
6:53:25
White_Flame
there have been an OO language or two like that, with transparent persistence and distribution. They're pretty slow, though, and can have nonobvious interactions
6:55:39
ralt
That's also an arguable point of view, because kernels need to be upgraded, hardware rots, etc etc. So, sure, if you want, servers that die are buggy, but bugs are part of life :)
6:58:52
no-defun-allowed
And if things crash and burn pretty hard, I don't see how having multiple servers running the same crashing and burning program is going to help the situation.
7:01:31
beach
ralt: It is not important. I don't have a solution for the case when the hardware breaks. But we shouldn't have to restart our computers when the operating system needs to be updated.
7:01:34
aeth
ralt: re "so the ecosystem is mostly lfarm" I'm not sure there's a big demand for this at all so... maybe?
7:02:20
ralt
beach: ah, I see, you're advocating for a lisp-like kernel with live upgradeable objects
7:02:24
jackdaniel
ralt: even the linux kernel experiments with hotswap kernel updates and persistence
7:03:22
White_Flame
ralt: linux is trying to include what other OSes had years ago, in that regard
7:03:40
jackdaniel
all I'm saying is that bringing up "a kernel update requires a restart" is a bogus argument
7:04:48
jackdaniel
if you have an operations center distributed all over the world running a single application, an electromagnetic bomb can take it down by knocking out the electricity
7:05:31
aeth
oh, ouch, that line of products was essentially killed by Itanium, if that Wikipedia article is accurate
7:07:10
aeth
ralt: In general, it's fault-tolerance. https://en.wikipedia.org/wiki/Fault-tolerant_computer_system
7:09:07
aeth
(And then you probably still have to run the program on multiple copies of those computers very far away from each other.)
7:09:40
thijso
"and all the simplicity that comes with it"... I don't think that word means what you think it means, ralt
7:09:57
aeth
ralt: And if you want to know why (almost) no one uses fault tolerant systems, the answer is, of course, money!
7:10:10
guaqua
to date it's always proved prohibitively expensive to have programs running forever (whatever your definition of a program is, but probably something like a memory space or object graph). applications and systems consisting of smaller redundant subsystems do work, however
7:11:53
aeth
guaqua: Yes, you need redundant software and redundant hardware. So I'm guessing if something takes p computing power, then it probably is, idk, maybe consuming 5p or 10p in the end...
7:12:55
aeth
White_Flame: Yes, after all of this, there's always a chance that it doesn't work anyway.
7:13:19
White_Flame
I simply mean that in terms of creating the error handling and robustness code, there's a massive onslaught of failure conditions to consider
7:13:37
guaqua
luckily not every line of business is as sensitive to the odd failure. and in my experience many of these things tend to fail somewhat rarely. but this is just one system at one company with one quality culture
7:15:11
guaqua
for some things you need to do the painful engineering to handle all errors, make it 100% robust. for some others a simple manual retry triggered by hand the next morning might be enough
7:24:33
shrdlu68
ralt: Don't the "container orchestration" tools meet your need for high availability?
7:26:20
ralt
Maybe I'm just looking at the wrong problem, i.e. trying to reconcile "let's use lisp live upgrade facilities" and "real world cloud servers die all the time"
7:28:04
White_Flame
but again, live update reduces the need for scheduled maintenance style downtime but doesn't eliminate downtime
7:31:53
White_Flame
no, but even if it were, it doesn't eliminate downtime due to crashes, bugs, user error, hardware failure, etc
7:32:14
shrdlu68
ralt: I'm not convinced there is any advantage to live upgrades versus, for example, rolling upgrades in something like Kubernetes.
7:33:59
jdz
So basically what is needed is to replace all the code, and then move the data to the new codebase (probably fixing it). Exactly what OTP in Erlang does, as far as I know.
7:34:58
White_Flame
what erlang does is keep calling your old code for calls made inside a module, while external calls to the module get the new version
7:37:46
shrdlu68
Exactly what happens in the stateless "container orchestration" world, only it's a whole "ecosystem" working together rather than a single language's model.
7:53:15
shrdlu68
You still have to deal with the problem at the network layer if you want rolling upgrades and high availability.
8:00:18
shrdlu68
lfarm looks interesting at first blush, but I suspect if one attempted to hack together a "distributed computing" platform based on it one would end up with an ad-hoc, informally-specified...
8:03:10
devon
LOL what's the best way to make a programmatic input stream, e.g., one that produces all the digits of π?
8:04:17
no-defun-allowed
Well, Gray streams are the standard for programmatic streams, but producing the digits of π is still your problem.
8:05:47
White_Flame
or if you want better throughput, fill a buffer with results, and keep a double buffer full
8:09:05
jackdaniel
http://hellsgate.pl/files/af635253-xxx.lisp, but instead of producing all three digits of pi it also returns "." ;)
8:10:54
jackdaniel
but more seriously, sure, I've just hacked something in a repl to illustrate how to make an input-character-stream
8:16:15
jackdaniel
or if you know all three digits beforehand: (with-input-from-string (s #.(write-to-string pi)) (read-char s))
8:18:56
White_Flame
big question is if the stream is infinite, finite but too big to hold, or small enough to represent directly?
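[The Gray-streams approach discussed above can be sketched like this, using the trivial-gray-streams portability library (an assumption; most implementations also ship their own Gray streams package). The digit source is a hypothetical stand-in that walks a literal string rather than actually computing π — as no-defun-allowed said, producing the digits is still your problem:]

```lisp
;; Assumes Quicklisp is available; trivial-gray-streams is the usual
;; portability layer over each implementation's Gray streams.
(ql:quickload "trivial-gray-streams")

(defclass generator-stream
    (trivial-gray-streams:fundamental-character-input-stream)
  ((generator :initarg :generator :reader stream-generator))
  (:documentation "A character input stream fed by a zero-argument
closure that returns the next character, or NIL at end of stream."))

(defmethod trivial-gray-streams:stream-read-char ((stream generator-stream))
  ;; The Gray streams protocol wants a character or :eof here.
  (or (funcall (stream-generator stream)) :eof))

(defun make-pi-stream ()
  ;; NOT a real pi computation -- a stand-in generator over a literal.
  (let ((digits "3.14159265358979")
        (i -1))
    (make-instance 'generator-stream
                   :generator (lambda ()
                                (when (< (incf i) (length digits))
                                  (char digits i))))))
```

An infinite stream just needs a generator that never returns NIL; READ-CHAR on the stream then never reaches :eof.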
8:31:24
devon
Why compute anything, go modern with '(#\3 #\. #\1 #\4 #\1 #\5 #\9 #\2 #\6 #\5 #\3 #\5 #\8 #\9 #\7 #\9 #\3 . #1= (#\2 #\6 #\5 #\0 #\1 #\8 #\9 #\7 #\4 #\3 . #1#))
8:32:31
jackdaniel
there's an infinite number of digits there, but I'm not certain they are *the right ones* ;)
9:14:53
flip214
jackdaniel: if you include all digits from 0 to 9, they'll be the right ones for decimal ... just the order might be wrong.
9:25:15
flip214
jackdaniel: for hexadecimal output there's a really easy way to get _any_ specified digit, without needing to calculate all other ones
9:27:49
pjb
ralt: servers don't necessarily crash. Or more precisely, they may crash, but without losing any running process or data.
10:00:06
p_l
we even have commodity servers running pretty much random software and OSes that survive the server exploding
10:00:28
p_l
you might get some hiccups in connections but you probably won't lose the connections themselves
18:12:14
jasom
does the lisp standard define which combination of 2-ary multiplies (* A B C D) is equivalent to, or is any algebraically equivalent calculation valid?
18:14:32
pjb
jasom: the arguments are evaluated from left to right, but the standard doesn't impose an order of association or commutation for the evaluation of the operation per se.
18:15:04
jasom
pjb: that's what I thought; so there is no guarantee e.g. that (= (* A B C D) (* (* (* A B) C) D))
18:15:11
pjb
jasom: so an implementation could do (* (* (* a b) c) d) or (* a (* b (* c d))) or (* (* a d) (* b c)) or anything else.
18:16:01
jasom
at least with log-scaled numbers, addition and subtraction are so difficult that you avoid them.
18:16:47
pjb
e.g. write (let ((a 42) (b 3.4) (c 5.6) (d 33/2)) (* (* a d) (* b c))) #| --> 13194.721 |#
18:17:20
jasom
interestingly enough, without the IEEE addendum, C permits algebraic rewriting of floating point operations (though most compilers with FP support implement the IEEE addendum).
18:18:08
pjb
That said, I don't know any implementation doing much reordering or sorting the arguments by type. Usually they'll just left associate.
18:18:37
jasom
The nice thing is that good support of ratios allows me to treat FP as an optimization rather than the easiest way to perform fractional calculations.
18:19:18
jasom
hmm, looking in scrollback, devon might be interested in computable-reals which has an arbitrary precision pi.
18:19:51
pjb
Indeed. (let ((a 42) (b 34/10) (c 56/10) (d 33/2)) (* (* a d) (* b c))) #| --> 329868/25 |# (float 329868/25 0.0d0) #| --> 13194.72D0 |#
18:24:55
specbot
Examples of Associativity and Commutativity in Numeric Operations: http://www.lispworks.com/reference/HyperSpec/Body/12_aaaa.htm
18:26:46
jasom
but optimizing across function-call boundaries when the results would differ is disallowed
18:39:15
aeth
pjb, jasom: There is sort of a guarantee... that a quality floating point implementation won't mess with floating point order. I see no reason to use ratios instead of floating point. If an implementation chooses to mess with floating point algorithms, that's on them.
18:40:24
aeth
Bike, jasom: I don't think SBCL does mess with constants for floating point. I remember seeing that it didn't even optimize away multiplication by 0.0f0 in its generated assembly.
18:41:41
aeth
jasom: yes, that's probably why, you can disable floating point traps in most implementations to get the underlying IEEE float with the NANs and INFs
18:41:54
jasom
aeth: the biggest problem with floating point is that addition/subtraction are dangerous. That's why I use ratios by default.
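[jasom's point can be shown in two lines: multiplication and division only round, but adding values of very different magnitude silently absorbs the small one, so association order changes the answer. A sketch with single floats:]

```lisp
;; The small addend survives only if the two big terms cancel first;
;; in the second ordering 3.0 is absorbed by -1e20 (whose ulp is far
;; larger than 3.0), and the sum collapses to 0.0.
(let ((big 1e20) (small 3.0))
  (values (+ (+ big (- big)) small)    ; => 3.0
          (+ big (+ (- big) small))))  ; => 0.0
```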
18:42:59
aeth
jasom: floating point has many issues, I'd restrict its use in general to implementing known numerical algorithms that other people came up with, if you care about accuracy.
18:44:34
jasom
White_Flame: multiplicative operations will lose at most 1ULP of accuracy, additive operations can leave you with complete garbage.
18:45:27
jasom
White_Flame: e.g. there are multiple FP implementations of the quadratic equation depending on the signedness of the b and 4ac parts.
18:47:16
jasom
sorry, the discriminant (and b, depending on your thoughts on having the sgn function in your equation).
18:48:00
jasom
It's the most well known function that involves catastrophic loss of precision, so it's the canonical example.
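[The stable fix jasom alludes to: compute q = -(b + sgn(b)·√disc)/2, so b and the square root are added with matching signs and never cancel, then recover the second root from Vieta's relation x₁·x₂ = c/a. A sketch assuming real roots and a ≠ 0; the function name is mine, not from the log:]

```lisp
(defun stable-quadratic-roots (a b c)
  "Return the two real roots of ax^2 + bx + c = 0, avoiding
catastrophic cancellation. Assumes A is nonzero and the
discriminant is nonnegative."
  (let* ((disc (- (* b b) (* 4 a c)))
         ;; Same-signed addition: no cancellation between B and the root.
         (q (* -1/2 (+ b (* (if (minusp b) -1 1) (sqrt disc))))))
    (values (/ q a)
            ;; If Q is zero then B and C are zero too; both roots are 0.
            (if (zerop q) q (/ c q)))))
```

For example, (stable-quadratic-roots 1 -3 2) yields the roots 2 and 1 (as floats on most implementations, since SQRT of a rational generally returns a float).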
18:48:23
jasom
The pythagorean theorem can show some oddities when you are far from the origin as well, which is yet another argument for not using floats for coordinates.
18:49:11
White_Flame
yeah, and I've done some 3d graphics stuff far away from the origin which starts aliasing badly as well
18:49:25
jasom
There are approximately zero cases in which it makes more sense to use a double float than a 64-bit integer for a coordinate system. (doubles made some sense when it was faster to add 2 doubles than 2 64-bit integers).