freenode/#lisp - IRC Chatlog
5:38:46
faheem
I remember when I previously tried using SBCL, most of my issues were with GC. And a bit to do with not being able to figure out how to implement/silence
5:39:39
faheem
Any significant changes in the GC or numerical calculations? I know this is vague, so feel free to ignore.
5:42:21
faheem
I don't think this is the one, but it's related -> https://bugs.launchpad.net/sbcl/+bug/936304
5:46:44
aeth
faheem: I know how to write numerical SBCL. Yes, it takes years to master, but that's always the case.
5:51:39
aeth
This is probably why Lisp lost to C in the era when performance mattered a lot more. That and SBCL didn't exist yet. Then again, lots of people write stuff in Java or even JavaScript now where performance matters...
5:52:29
White_Flame
I think python is the most popular language in the "numerical performance scenario with slowest language" category
5:54:03
aeth
White_Flame: I just enjoy when I'm bottlenecked by one core running at 100% on some (probably pure) Python script. /s
5:56:17
aeth
White_Flame: This came way after Lisp lost, though, which suggests it might just have been too early
5:57:36
aeth
faheem: Learning all of the edge cases where you need to pay attention and where you don't
5:57:41
faheem
Slowness isn't a real issue with SBCL. It's fast enough. But GC weirdness can be an issue.
5:59:00
faheem
Has much changed with the GC situation since 2012/2013? I know there has been a lot of work done.
6:00:12
aeth
I'm not sure. #sbcl probably knows. I have a fairly non-idiomatic style for performance-critical numerical code.
6:01:09
faheem
aeth: Would it be fair to say that SBCL is the way to go for performance-critical numerical work?
6:01:47
aeth
I've gotten fairly good at controlling when allocations happen, which requires discipline that's more than normal (but less than C++). And I can't get that same level of insight on other implementations.
6:03:30
aeth
SBCL gives you structs of arrays (defstructs of specialized arrays). I don't think that the struct slot type is as optimized in other implementations
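The struct-of-arrays pattern aeth describes might be sketched like this (the struct and slot names here are illustrative, not from any real code):

```lisp
;; A defstruct whose slots are specialized arrays, with :type
;; declarations so SBCL knows the exact array type at each access.
(defstruct particles
  (xs (make-array 1024 :element-type 'double-float :initial-element 0d0)
      :type (simple-array double-float (1024)))
  (ys (make-array 1024 :element-type 'double-float :initial-element 0d0)
      :type (simple-array double-float (1024))))

;; Because the slot type is declared, SBCL knows
;; (aref (particles-xs p) i) reads an unboxed double-float.
```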
6:04:19
faheem
aeth: I see. You might be aware that Tamas Papp wrote a blog post about how he was switching from CL to Julia, citing mostly performance reasons.
6:04:43
aeth
It's not quite manual (unless you involve the FFI, which I sometimes do), but it does take a lot of thinking
6:05:49
aeth
Numerical is basically the same thing but more double-float and less single-float (so harder to avoid boxing)
6:07:40
aeth
I would never use a JIT like Julia because I prefer what you get from an AOT. You do basically have to effectively write statically typed in SBCL, though.
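The "effectively statically typed" style aeth means might look like this minimal sketch (norm2 is a made-up example function):

```lisp
;; Declare argument and return types so SBCL can compile unboxed
;; double-float arithmetic instead of generic dispatch.
(declaim (ftype (function (double-float double-float) double-float)
                norm2))
(defun norm2 (x y)
  (declare (optimize (speed 3))
           (type double-float x y))
  (sqrt (+ (* x x) (* y y))))
```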
6:08:25
aeth
I guess this means there is necessarily going to be a bit of overhead, because now you're pairing dynamic typing (types with the values) with static type declarations (types with the variables) in one language
6:09:44
aeth
e.g. https://gitlab.com/zombie-raptor/zombie-raptor/blob/master/CONTRIBUTING.md#consing-heap-allocations
6:10:33
aeth
e.g. https://gitlab.com/zombie-raptor/zombie-raptor/blob/061b90122b20dcca292064bca0e7d6d95c76daa3/util/util.lisp#L84
6:12:59
aeth
define-function is almost feature complete. The complementary LET-style macro needs to be finished at some point; that one's a lot more elaborate, actually. It also needs its own indentation rule, since FLET-style macros can't really be auto-indented properly by SLIME: it doesn't know enough about their structure
6:13:51
aeth
define-function isn't really special. There are probably 5+ other macros that do something similar, the most popular being the defstar library. What it is, though, is the fanciest.
6:14:04
faheem
aeth: Sounds like it would be helpful to have in its own library, provided the documentation was there too.
6:14:51
aeth
(And defstar is GPLv3, which sort of made me make define-function as elaborate as it is, out of frustration of being forced to reinvent the wheel for my MIT-licensed game engine. If you force me to reinvent the wheel, I will, but it's going to be a better wheel.)
6:22:58
aeth
On 32-bit, single-floats would be a chore to work with, and I wouldn't be able to do things like use (unsigned-byte 64)s, which I can use in SBCL without boxing them if I'm really careful
6:28:16
aeth
I mostly try to run the GC at moments when I don't care, and in a long-running loop I try to avoid creating garbage.
6:32:24
aeth
faheem: If you avoid potentially creating garbage at certain parts of your program I don't *think* the GC will run at surprise moments, although it might. Debugging/logging/threading could break this assumption even if it's true.
6:34:53
aeth
You can sort of turn it off by preallocating everything you can and declaring the rest dynamic extent.
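The two techniques aeth mentions, preallocation and dynamic extent, might be sketched like this (names are illustrative):

```lisp
;; A preallocated scratch buffer reused across calls.
(defvar *scratch*
  (make-array 256 :element-type 'double-float :initial-element 0d0))

(defun sum-squares ()
  (let ((tmp (make-array 256 :element-type 'double-float
                             :initial-element 0d0)))
    ;; On SBCL this declaration lets TMP live on the stack, so it
    ;; never reaches the heap and the GC never has to trace it.
    (declare (dynamic-extent tmp))
    (loop for i below 256
          do (setf (aref tmp i) (* 1d0 i i)))
    (loop for x across tmp sum x)))
```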
6:35:16
dialectic
Running the garbage collector at well known intervals is practically equivalent to turning it off for bouts of execution.
6:35:59
aeth
I think someone here might have said once that the GC will only potentially run at moments of heap allocation? If that's true, then that's easy; you can detect allocations in SBCL in lots of ways.
6:36:30
aeth
Avoid allocation in a key area, and then if you're really paranoid (gc :full t) afterward
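That suggestion might be sketched as follows, using SBCL's SB-EXT:GET-BYTES-CONSED to detect allocation in a key section and SB-EXT:GC to collect at a moment of your choosing (run-checked is a made-up helper):

```lisp
#+sbcl
(defun run-checked (thunk)
  "Run THUNK, warn if it allocated, then collect at a safe moment."
  (let ((before (sb-ext:get-bytes-consed)))
    (funcall thunk)
    (let ((consed (- (sb-ext:get-bytes-consed) before)))
      (unless (zerop consed)
        (warn "key section consed ~D bytes" consed)))
    ;; Paranoid full collection, run when we don't care about pauses.
    (sb-ext:gc :full t)))
```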
6:37:08
aeth
No; having manually managed memory myself for CFFI stuff (often through the static-vectors library), I can say it has its own pitfalls and style.
6:37:27
aeth
I mean, I guess you could even just use static-vectors for non-CFFI stuff if you really wanted to manually manage those arrays
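With the static-vectors library that might look like this sketch: the vector lives outside the GC'd heap, so you must free it explicitly.

```lisp
;; Allocate a GC-invisible specialized vector, use it, free it.
(let ((v (static-vectors:make-static-vector
          1024 :element-type 'double-float :initial-element 0d0)))
  (unwind-protect
       (setf (aref v 0) 1d0)
    ;; Manual memory management: forgetting this leaks the vector.
    (static-vectors:free-static-vector v)))
```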
6:38:30
aeth
I don't think arrays of single-float/double-float/unsigned-byte/signed-byte/bit/etc. are a big deal for the GC, because afaik the GC would just be looking at the metadata at the front and then skipping the whole thing, unlike a T array, where anything could contain a pointer
6:39:06
aeth
If that's the case, then that would suggest having fewer, bigger arrays of double-floats is a good thing to be easy on the GC. Maybe a range of a 1D array, or a row of a 2D array.
6:43:03
aeth
I'd use the fewer, bigger arrays anyway, though, because that would make it easier to preserve type information. e.g. (aref (aref foo 42) 2) is going to lose the information that that's a double-float and (aref foo 42 2) isn't.
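That point about type information might be illustrated like this sketch (foo is a made-up name):

```lisp
;; One big 2D specialized array: the element type survives.
(let ((foo (make-array '(64 3) :element-type 'double-float
                               :initial-element 0d0)))
  (declare (type (simple-array double-float (64 3)) foo))
  ;; SBCL knows this is an unboxed double-float read. With a vector
  ;; of vectors, the outer array's element type is T, so the inner
  ;; element type would be lost at (aref (aref foo 42) 2).
  (aref foo 42 2))
```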
6:43:41
aeth
If there aren't dozens of them and the data isn't the same sort of thing, structs with :type in slots work, at least in SBCL
6:49:08
aeth
faheem: Anyway, I'd say it's not memory management, it's kind of half of memory management. The other half is freeing, or using a with-foo to free, or using finalizers to free, or using a cleanup method that calls other cleanup methods etc to free. And that's a ton more work.
6:50:20
aeth
Although depending on how you implement your preallocation-in-big-arrays (e.g. object pooling) it could basically be a lot more like manual memory management than that.
6:53:32
aeth
All of the terrifying bugs that waste your entire day involve freeing. The allocation stuff will just slow down your code if you mess it up.
7:21:38
faheem
You should definitely try and write up some of it. Even a rough draft would be helpful. On GitHub or wherever.
7:21:59
faheem
You could ask for comments/feedback. AFAIK there isn't such a thing as a performance guide for CL.
7:32:59
aeth
I think you'd get a lot of people complaining about a performance guide, though, because it's necessarily fairly SBCL-specific. A lot of it would/could also apply to implementations like CCL, but it will probably never apply to implementations like CLISP, and it's basically impossible to apply to JSCL (a JavaScript CL attempt)
7:34:08
aeth
People love portability, although a "performance guide" is really just a bunch of workarounds that might just slightly slow things down on other implementations, rather than e.g. a guide to SBCL's define-vop like this https://pvk.ca/Blog/2014/08/16/how-to-define-new-intrinsics-in-sbcl/
8:03:02
seok
How are sessions managed in caveman2 when stored in memory? I.e., do I need to manage destruction to avoid overloading?
8:30:30
faheem
aeth: I think if you want performance, portability would have to take a back seat. Though of course portability is desirable.
8:31:17
faheem
Documentation is a big problem, across the board. For much of the free software ecosystem.
8:31:49
faheem
It can be quite frustrating, having these powerful tools and not knowing how to use them.
12:17:56
jmercouris
hi guys, I am having a bit of a mental block here. Let's say I have a list like this: (list 0 1 2 (my-fun 10)). How can I get the evaluated list, where (my-fun 10) --> 10?
12:22:03
ck_
maybe you want (mapcar (lambda (elt) (if (and (listp elt) (symbolp (first elt)) (fboundp (first elt))) (apply 'funcall elt) elt)) list) #| FUNCTIONP is true of function objects, not of symbols like MY-FUN, so test FBOUNDP on the head instead |#
12:23:19
jmercouris
however, I want to store the unevaluated, aka quoted, form of the list, and then the evaluated form of the list
12:24:29
jmercouris
I want to store the unevaluated form as well because if I change some of the functions, I would like the new output
12:41:46
pjb
(defun my-fun (x) 10) (let ((have-list '(list 0 1 2 (my-fun 10)))) (eval (first (last have-list)))) #| --> 10 |#
12:43:31
pjb
jmercouris: on the other hand, if you want to save both the unevaluated form and the evaluated form, perhaps eval is what you want. Unless you need access to the lexical environment, in which case a macro might be preferable.
12:45:54
pjb
(defmacro save-form-and-value (form stream) (let ((value (gensym)) (vstream (gensym))) `(let ((,value ,form) (,vstream ,stream)) (print ',form ,vstream) (print ,value ,vstream)))) (save-form-and-value (+ 1 2) *standard-output*) #| (+ 1 2) 3 --> 3 |#
12:46:38
pjb
jmercouris: have a look at DRIBBLE (some implementations write both the form and the values).