freenode/#lisp - IRC Chatlog
2:12:37
madmuppet006
I am trying to write a mandelbrot set program .. I can create an image but the image is incorrect .. I have tried reversing list of numbers but still get the same image .. can anyone have a quick look? thanks https://pastebin.com/r0mEgvVq
7:54:08
schweers
I have a very simple class, of which many instances are created. Can using a struct instead significantly reduce the memory footprint of these instances? How can I find out the difference (apart from creating loads of instances and comparing how much memory usage the OS sees, that is)?
7:58:23
beach
schweers: Plus, a class instance probably needs some indication of the state the class was in when the instance was created, so that instances can be updated when the class changes.
7:58:28
phoe
make a Lisp image with 10e6 class instances, measure the OS memory footprint; do the same with 10e6 structs, and then with 10e6 structs allocated as vectors
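A minimal sketch of phoe's experiment, assuming SBCL (`sb-kernel:dynamic-usage` is the SBCL-internal counter that ROOM uses); the class, struct, and helper names are made up for illustration:

```lisp
(defclass point-c () ((x :initform 0) (y :initform 0)))
(defstruct point-s (x 0) (y 0))

(defun instance-footprint (constructor &optional (n 1000000))
  "Bytes of dynamic-space growth after creating N instances."
  (sb-ext:gc :full t)
  (let ((before (sb-kernel:dynamic-usage))
        (instances (make-array n)))   ; keep everything alive while measuring
    (dotimes (i n)
      (setf (aref instances i) (funcall constructor)))
    (- (sb-kernel:dynamic-usage) before)))

;; Compare, e.g.:
;; (instance-footprint (lambda () (make-instance 'point-c)))
;; (instance-footprint #'make-point-s)
```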
7:59:18
schweers
beach: in case you meant by small that the class has few slots, then yes, that is the case.
8:08:06
schweers
I have another question: has anyone used lparallel? I don’t yet have hard evidence on this, but it seems to me that a huge amount of memory is not freed, despite calling END-KERNEL.
8:11:51
no-defun-allowed
Maybe having multiple threads creates garbage faster. Also, checking htop or your favourite process lister may be misleading, because your implementation might not release the memory to the OS, in order to allocate it faster later.
8:12:03
White_Flame
you can manually run a GC (eg (sb-ext:gc :full t)), but also watch out that repl variables like *, **, *** aren't keeping large things alive
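White_Flame's two suggestions, spelled out (the `sb-ext:gc` call is SBCL-specific):

```lisp
;; Clear the REPL history variables first -- they can pin large results.
(setf * nil ** nil *** nil
      / nil // nil /// nil)
;; Then force a full collection and inspect the result.
(sb-ext:gc :full t)
(room nil)  ; brief summary of space usage after the GC
```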
8:13:09
schweers
After end-kernel is called, the process quickly ended up in the debugger (in a completely unrelated part of the program, which does not use lparallel). Meaning, the lisp process was still running, but not allocating any significant amount of memory. I have a memory consumption graph which shows that memory went up a lot at the time the lparallel-using part of the program started, and did not go down afterwards.
8:14:39
schweers
So this is all not hard evidence, it’s just a hunch. I was asking as it was possible that someone else had a similar issue.
8:16:45
no-defun-allowed
Are you using Lisp functions like ROOM to determine memory usage, or external tools like htop or gnome-system-monitor?
8:18:51
no-defun-allowed
It is fairly commonplace for runtimes to only release memory quite a while after the program is finished with it.
8:20:13
loke
I've seen a worrying trend where people don't understand how memory management works and end up installing Linux without swap
8:20:28
schweers
loke: I’d rather fix the root cause of the problem. I could also start up new processes each time.
8:20:56
schweers
The machine doesn’t have lots of swap, but I didn’t commit that particular crime ;)
8:21:14
no-defun-allowed
You should also avoid setting your implementation's heap size larger than the amount of memory you can afford.
8:27:02
no-defun-allowed
dim: I was thinking of how some runtimes refuse to release pages they allocated earlier, so they don't have to acquire new pages when more memory is required.
8:27:32
dim
https://www.kernel.org/doc/Documentation/vm/overcommit-accounting ; set vm.overcommit_memory to 2 (Don't overcommit)
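dim's sysctl setting, written out as a persistent config fragment (the file path and ratio are only examples):

```
# /etc/sysctl.d/99-strict-commit.conf (example path)
# 2 = strict accounting: allocations fail instead of the OOM killer firing
vm.overcommit_memory = 2
# optional: percentage of physical RAM counted toward the commit limit
vm.overcommit_ratio = 80
```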
8:28:56
dim
overcommit is useful when applications don't know how to handle malloc() returning NULL (out of memory conditions), which used to be mainly the JVM back in the day
8:29:37
dim
I don't know how SBCL/CCL/ECL/... are dealing with out of memory conditions from the kernel, I would suppose they handle that in a good enough way that you can run them in production with vm.overcommit_memory set to 2
8:30:23
loke
In production you _really_ don't want the oom to kick in, and 2 is the correct number (with plenty of swap)
8:31:06
loke
(note that the swap will mostly not actually be _used_. It just needs to be there to be able to guarantee that even if every single process touched every single page of memory, there would still be enough virtual memory)
8:33:32
schweers
Now that I have a test run complete, I can see that usage went from 1185M after loading the code (but before running) to 2326M after running and a full gc. ROOM, on the other hand, reports merely 200M for dynamic space. This looks to me like the lisp process simply has not returned this memory to the OS, but will reuse it for future allocations.
8:35:11
p_l
dim: JVM knew quite well how to handle malloc() returning NULL because the system it was written for didn't do overcommit
8:35:35
p_l
Linux is the outlier with overcommit, and most stupid C code that fails without overcommit is due to that
8:36:09
p_l
also, Linux docs are outright lying in terms of how to handle "allocate just virtual space" so I believe some lisp implementations get it slightly wrong
8:36:11
loke
p_l: especially since Java was first written for Solaris, which has sane memory management.
8:38:33
schweers
I know that, I’m just wondering what the libc can do to influence whether or not overcommitting is performed.
8:38:54
loke
If I remember correctly, glibc calls mmap on /dev/zero when it needs more memory pages. Since Linux happily overcommits, you'll always get more pages. glibc malloc will only return NULL if the underlying mmap() call fails (mmap signals failure with MAP_FAILED rather than NULL)
8:41:39
schweers
What else could a libc do to get more memory which can fail even when overcommit is turned on?
8:42:53
White_Flame
I mean that's a situation that could immediately fail even with overcommit, if that's what you're asking
8:47:38
p_l
then you get SIGSEGV (or equivalent) when your code attempts to use memory that can't be delivered
8:48:18
p_l
having control over address space rather than get buffers from malloc() means you get to do nice things like custom layout for memory
8:48:44
p_l
which then makes it easy to use, for example, a simple bump allocator for nursery generation
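A toy illustration of the bump allocator p_l mentions, sketched in Common Lisp over a preallocated byte vector (real nursery allocators live below the language level; all names here are invented):

```lisp
(defstruct nursery
  (bytes (make-array 0 :element-type '(unsigned-byte 8)))
  (next 0))  ; index of the first free byte

(defun make-bump-nursery (size)
  (make-nursery :bytes (make-array size :element-type '(unsigned-byte 8))))

(defun bump-allocate (nursery n)
  "Reserve N bytes by just bumping an index. Returns the start
offset of the reservation, or NIL when the nursery is exhausted
(i.e. time to collect)."
  (let ((start (nursery-next nursery)))
    (when (<= (+ start n) (length (nursery-bytes nursery)))
      (setf (nursery-next nursery) (+ start n))
      start)))
```

Successive calls return consecutive offsets; resetting `next` to 0 after evacuating live objects is what makes nursery collection cheap.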
10:30:56
hjudt
does anyone know of a library that can parse the "cookie:" header line and return a cookie object the way hunchentoot does?
10:35:13
hjudt
i mean something that parses the Cookie: header when i send it to the server with e.g. wget or curl...
11:33:59
hjudt
actually i am working on the radiance i-woo interface, and woo doesn't provide any cookies. for now, i've implemented it similar to how hunchentoot does (only treating key=value;... in the appropriate header as sent by wget or curl, but without any attributes). i guess it will be fine for now.
11:36:57
hjudt
if i use curl or wget, the appropriate http header is "Cookie:". Not "Set-Cookie". Don't know the difference, but the latter is set by the server iirc.
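A rough sketch of the kind of parsing hjudt describes: split the `Cookie:` header value on `;` and `=` into an alist, ignoring attributes. This is not a full RFC 6265 parser, and the function name is invented:

```lisp
(defun parse-cookie-header (header)
  "Parse a Cookie: header value like \"a=1; b=2\" into an alist."
  (loop for start = 0 then (1+ semi)
        for semi = (position #\; header :start start)
        for pair = (string-trim " " (subseq header start semi))
        for eq = (position #\= pair)
        when eq
          collect (cons (subseq pair 0 eq) (subseq pair (1+ eq)))
        while semi))

;; (parse-cookie-header "foo=bar; session=abc123")
;; => (("foo" . "bar") ("session" . "abc123"))
```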
11:45:01
selwyn
schweers: i have used lparallel a lot over the last year on HPC. i can't remember encountering memory issues
11:48:19
schweers
selwyn: it seems to not be an lparallel issue, but more of general sbcl behaviour. I’m still not entirely sure what to make of it.
11:54:48
selwyn
i know little about memory management, but it is apparently rare to return memory to the OS, as claimed by the authors of a GC that does: https://www.ravenbrook.com/project/mps/
12:00:50
White_Flame
p_l: and of interest to me and maybe you, my first execution of Ivory instructions, dynamically recompiled to CL: https://pastebin.com/sjAVepSP
12:03:21
p_l
White_Flame has a project that has gone the furthest regarding reimplementing some of that :)
12:05:33
p_l
VLM is technically the official Ivory implementation, just like recent ClearPath mainframes
12:23:44
moldybits
is there some way of treating a vector of vectors so that i can do something to this effect: (aref #(#(a b c)) 0 0) => A
13:26:27
pjb
(reduce (lambda (a b) (print (list a b)) (svref a b)) '(0 1) :initial-value #(#(1 2) #(3 4))) #| => 2 |#
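pjb's reduce trick generalizes to a small helper that gives moldybits the requested calling convention (AREF* is a made-up name, and it assumes simple vectors at every level):

```lisp
(defun aref* (vector &rest indices)
  "Index into nested simple vectors: (aref* #(#(a b c)) 0 0) => A."
  (reduce #'svref indices :initial-value vector))
```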
13:28:23
pjb
oh, well, there are reasons why you would choose a vector of vectors. For example, the subvectors may be of different sizes.
13:29:51
jdz
pjb: Sure, I can come up with different reasons myself. That's why I asked moldybits, not you.