freenode/#lisp - IRC Chatlog
10:00:06
p_l
we even have commodity servers running pretty much random software and OSes that survive the server exploding
10:00:28
p_l
you might get some hiccups in connections but you probably won't lose the connections themselves
15:30:49
minion
The URL https://gitlab.common-lisp.net/users/sign_in?secret=852de81e will be valid until 15:45 UTC.
18:12:14
jasom
does the lisp standard define which combination of 2-ary multiplies (* A B C D) is equal to, or is any algebraically equivalent calculation valid?
18:14:32
pjb
jasom: the arguments are evaluated from left to right, but the standard doesn't impose an order of association or commutation for the evaluation of the operation per se.
18:15:04
jasom
pjb: that's what I thought; so there is no guarantee e.g. that (= (* A B C D) (* (* (* A B) C) D))
18:15:11
pjb
jasom: so an implementation could do (* (* (* a b) c) d) or (* a (* b (* c d))) or (* (* a d) (* b c)) or anything else.
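[editor's note: a minimal sketch of why the association pjb describes can matter for floats — left association can overflow where right association does not. Whether overflow signals an error or yields an infinity depends on the implementation's trap settings; SBCL traps by default.]

```lisp
;; With double-floats, the inner product below is ~1.0d0, so the
;; right-associated form stays in range:
(* 1d308 (* 1d308 1d-308))      ; => roughly 1.0d308
;; The left-associated form computes 1d308 * 1d308 first, which
;; exceeds the double-float range:
;; (* (* 1d308 1d308) 1d-308)   ; signals FLOATING-POINT-OVERFLOW
;;                              ; (or yields an infinity if traps are masked)
```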
18:16:01
jasom
at least with log-scaled numbers addition and subtraction are so difficult that you avoid them.
18:16:47
pjb
e.g. write (let ((a 42) (b 3.4) (c 5.6) (d 33/2)) (* (* a d) (* b c))) #| --> 13194.721 |#
18:17:20
jasom
interestingly enough, without the IEEE addendum, C permits algebraic rewriting of floating point operations (though most compilers with FP support implement the IEEE addendum).
18:18:08
pjb
That said, I don't know any implementation doing much reordering or sorting the arguments by type. Usually they'll just left associate.
18:18:37
jasom
The nice thing is that good support of ratios allows me to treat FP as an optimization rather than the easiest way to perform fractional calculations.
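[editor's note: jasom's "ratios first, floats as an optimization" point can be illustrated directly — ratios are exact, so the usual binary-float surprise disappears.]

```lisp
;; Exact rational arithmetic: no representation error at all.
(+ 1/10 2/10)                  ; => 3/10
(= (+ 1/10 2/10) 3/10)        ; => T
;; The double-float equivalent accumulates representation error,
;; because 1/10 and 2/10 have no exact binary representation:
(= (+ 0.1d0 0.2d0) 0.3d0)     ; => NIL
;; Convert once at the end, treating the float as the optimization:
(float 3/10 1.0d0)            ; => 0.3d0
```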
18:19:18
jasom
hmm, looking in scrollback, devon might be interested in computable-reals which has an arbitrary precision pi.
18:19:51
pjb
Indeed. (let ((a 42) (b 34/10) (c 56/10) (d 33/2)) (* (* a d) (* b c))) #| --> 329868/25 |# (float 329868/25 0.0d0) #| --> 13194.72D0 |#
18:24:55
specbot
Examples of Associativity and Commutativity in Numeric Operations: http://www.lispworks.com/reference/HyperSpec/Body/12_aaaa.htm
18:26:46
jasom
but optimizing across function-call boundaries when the results would differ is disallowed
18:39:15
aeth
pjb, jasom: There is sort of a guarantee... that a quality floating point implementation won't mess with floating point order. I see no reason to use ratios instead of floating point. If an implementation chooses to mess with floating point algorithms, that's on them.
18:40:24
aeth
Bike, jasom: I don't think SBCL does mess with constant folding for floating point. I remember seeing that they didn't even optimize away multiplication by 0.0f0 in their generated assembly.
18:41:41
aeth
jasom: yes, that's probably why, you can disable floating point traps in most implementations to get the underlying IEEE float with the NANs and INFs
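[editor's note: a sketch of what aeth describes, using SBCL's extension — the macro name and mask keywords are SBCL-specific; other implementations expose trap control through their own extensions.]

```lisp
;; SBCL-specific: mask the traps to get raw IEEE behavior with
;; infinities and NaNs instead of conditions being signaled.
#+sbcl
(sb-int:with-float-traps-masked (:divide-by-zero :invalid :overflow)
  (values (/ 1.0 0.0)                    ; => single-float positive infinity
          (- (/ 1.0 0.0) (/ 1.0 0.0)))) ; => NaN (inf - inf is invalid)
```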
18:41:54
jasom
aeth: the biggest problem with floating point is that addition/subtraction are dangerous. That's why I use ratios by default.
18:42:59
aeth
jasom: floating point has many issues, I'd restrict its use in general to implementing known numerical algorithms that other people came up with, if you care about accuracy.
18:44:34
jasom
White_Flame: multiplicative operations will lose at most 1ULP of accuracy, additive operations can leave you with complete garbage.
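[editor's note: jasom's contrast between additive and multiplicative error can be seen in two lines — at magnitude 1.0e8 a single-float ulp is 8.0, so an added 1.0 is absorbed entirely, while multiplication stays within one ulp of the exact product.]

```lisp
;; Additive cancellation: the exact answer is 1.0, but the 1.0 is
;; absorbed when added to 1.0e8 (single-float ulp there is 8.0):
(- (+ 1.0e8 1.0) 1.0e8)   ; => 0.0
;; Multiplication, by contrast, is correctly rounded to within 1 ulp
;; (here the product is even exact):
(* 1.0e8 3.0)             ; => 3.0e8
```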
18:45:27
jasom
White_Flame: e.g. there are multiple FP implementations for the quadratic equation depending on the signedness of the b, 4ac parts.
18:47:16
jasom
sorry, the discriminant (and b, depending on your thoughts on having the sgn function in your equation).
18:48:00
jasom
It's the most well known function that involves catastrophic loss of precision, so it's the canonical example.
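[editor's note: a worked sketch of the canonical example jasom names, with hypothetical coefficients chosen so single-float cancellation is total. The "stable" variant is the standard textbook rearrangement: compute the large-magnitude root first, then recover the small one as c/q.]

```lisp
;; Roots of x^2 - 1.0e4*x + 1 = 0; the exact small root is ~1.0e-4.
(let* ((a 1.0) (b -1.0e4) (c 1.0)
       ;; b*b = 1.0e8; subtracting 4.0 is absorbed (ulp there is 8.0),
       ;; so sqrt-disc comes out as exactly 1.0e4.
       (sqrt-disc (sqrt (- (* b b) (* 4.0 a c)))))
  (values
   ;; Naive formula: (-b - sqrt(disc))/2a cancels catastrophically.
   (/ (- (- b) sqrt-disc) (* 2.0 a))                      ; => 0.0 (garbage)
   ;; Stable variant: q = -(b + sgn(b)*sqrt(disc))/2, roots are q/a and c/q.
   (let ((q (/ (- (+ b (* (signum b) sqrt-disc))) 2.0)))
     (/ c q))))                                           ; => 1.0e-4
```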
18:48:23
jasom
The pythagorean theorem can show some oddities when you are far from the origin as well, which is yet another argument for not using floats for coordinates.
18:49:11
White_Flame
yeah, and I've done some 3d graphics stuff far away from the origin which starts aliasing badly as well
18:49:25
jasom
There are approximately zero cases in which it makes more sense to use a double float than a 64-bit integer for a coordinate system. (doubles made some sense when it was faster to add 2 doubles than 2 64-bit integers).
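[editor's note: jasom's integer-vs-double claim can be checked directly — above 2^53 a double-float can no longer distinguish adjacent integers, while a 64-bit integer coordinate stays exact.]

```lisp
;; At 2^53 the double-float ulp is 2.0d0, so adding 1.0d0 is a no-op:
(let ((x (float (expt 2 53) 1.0d0)))
  (= x (+ x 1.0d0)))               ; => T
;; The same coordinate as an integer stays exact:
(= (expt 2 53) (+ (expt 2 53) 1)) ; => NIL
```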
18:51:14
aeth
If you want performance over accuracy and it's for a game and you're going to be converting to float dozens of times a second...
18:51:41
White_Flame
also, you'll be dealing with fixed point multiplication & reducing from 128-bit results
18:52:29
aeth
jasom: the thing is, in games you want performance, not precision, and in scientific calculations, you can just use a fancier algorithm to get your error small enough since it's inexact anyway
18:53:22
Bike
i think most scientists' engagement with numeric algorithms is ignoring matlab warnings that a matrix is near singular
18:53:32
aeth
(and many of the issues with float are the issues with not using hardware decimal float, which is the fault of hardware makers, not float)
18:54:48
jasom
Bike: more seriously the field is split between those super familiar with the limitations of the computational hardware, and the rest of the field, where the best you can hope is that they use libraries written by the former.
18:54:50
aeth
Bike: well, most of this thought should be moved to a library and not thought about at each use, e.g. a CL implementation of BLAS
18:55:24
Bike
i mean, i've generally been impressed by matlab's treatment. documentation with citations is pretty nice
21:06:27
jasom
Hmm, CLISP's long-float pi value is off by more than one ULP for very large precision values