freenode/#lisp - IRC Chatlog
Search
8:13:09
schweers
After end-kernel is called, the process quickly ended up in the debugger (in a completely unrelated part of the program, which does not use lparallel). Meaning, the lisp process was still running, but not allocating any significant amount of memory. I have a memory consumption graph which shows that memory went up a lot at the time the lparallel-using part of the program started, and did not go down afterwards.
8:14:39
schweers
So this is all not hard evidence, it’s just a hunch. I was asking as it was possible that someone else had a similar issue.
8:16:45
no-defun-allowed
Are you using Lisp functions like ROOM to determine memory usage, or external tools like htop or gnome-system-monitor?
8:18:51
no-defun-allowed
It is fairly commonplace for runtimes to only release memory quite a while after the program is finished with it.
8:20:13
loke
I've seen a worrying trend where people don't understand how memory management works and end up installing Linux without swap
8:20:28
schweers
loke: I’d rather fix the root cause of the problem. I could also start up new processes each time.
8:20:56
schweers
The machine doesn’t have lots of swap, but I didn’t commit that particular crime ;)
8:21:14
no-defun-allowed
You should also avoid setting your implementation's heap size larger than the amount of memory you can afford.
8:27:02
no-defun-allowed
dim: I was thinking of how some runtimes refuse to release pages they allocated before, so they don't have to acquire new pages when more memory is required.
8:27:32
dim
https://www.kernel.org/doc/Documentation/vm/overcommit-accounting ; set vm.overcommit_memory to 2 (Don't overcommit)
8:28:56
dim
overcommit is useful when applications don't know how to handle malloc() returning NULL (out of memory conditions), which used to be mainly the JVM back in the day
8:29:37
dim
I don't know how SBCL/CCL/ECL/... are dealing with out of memory conditions from the kernel, I would suppose they handle that in a good enough way that you can run them in production with vm.overcommit_memory set to 2
8:30:23
loke
In production you _really_ don't want the oom to kick in, and 2 is the correct number (with plenty of swap)
8:31:06
loke
(note that the swap will mostly not actually be _used_. It just needs to be there to be able to guarantee that even if every single process touched every single page of memory, there would still be enough virtual memory)
8:33:32
schweers
Now that I have a test run complete, I can see that the usage went from 1185M after loading the code but before running, to 2326M after running and a full gc. ROOM on the other hand reports merely 200M for dynamic space. This looks to me like the lisp process simply has not returned this memory to the OS, but will use it for future allocations.
8:35:11
p_l
dim: JVM knew quite well how to handle malloc() returning NULL because the system it was written for didn't do overcommit
8:35:35
p_l
Linux is the outlier with overcommit, and most stupid C code that fails without overcommit is due to that
8:36:09
p_l
also, Linux docs are outright lying in terms of how to handle "allocate just virtual space" so I believe some lisp implementations get it slightly wrong
8:36:11
loke
p_l: especially since Java was first written for Solaris, which has sane memory management.
8:38:33
schweers
I know that, I’m just wondering what the libc can do to influence whether or not overcommitting is performed.
8:38:54
loke
If I remember correctly, glibc calls mmap on /dev/zero when it needs more memory pages. Since Linux happily overcommits, you'll always get more pages. glibc will only return NULL from malloc() if the underlying mmap() call fails
8:41:39
schweers
What else could a libc do to get more memory which can fail even when overcommit is turned on?
8:42:53
White_Flame
I mean that's a situation that could immediately fail even with overcommit, if that's what you're asking
8:47:38
p_l
then you get SIGSEGV (or equivalent) when your code attempts to use memory that can't be delivered
8:48:18
p_l
having control over address space rather than get buffers from malloc() means you get to do nice things like custom layout for memory
8:48:44
p_l
which then makes it easy to use, for example, a simple bump allocator for nursery generation
10:30:56
hjudt
does anyone know of a library that can parse the "cookie:" header line and return a cookie like hunchentoot?
10:35:13
hjudt
i mean something that parses the Cookie: header when i send it to the server with e.g. wget or curl...
11:33:59
hjudt
actually i am working on the radiance i-woo interface, and woo doesn't provide any cookies. for now, i've implemented it similarly to how hunchentoot does (only treating key=value;... in the appropriate header as sent by wget or curl, but without any attributes). i guess it will be fine for now.
11:36:57
hjudt
if i use curl or wget, the appropriate http header is "Cookie:", not "Set-Cookie:". Don't know the difference, but the latter is set by the server iirc.
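A minimal sketch of the kind of parsing hjudt describes (this is illustrative, not hunchentoot's or woo's actual implementation): split a request-side "Cookie:" header value of the form "k1=v1; k2=v2" into an alist, ignoring attributes.

```lisp
;; Hypothetical helper: parse a "Cookie:" header value into an alist.
;; Splits on #\; and then on the first #\= of each piece.
(defun parse-cookie-header (header-value)
  (loop for start = 0 then (1+ end)
        for end = (position #\; header-value :start start)
        for pair = (string-trim " " (subseq header-value start end))
        for eq = (position #\= pair)
        when (and eq (plusp eq))
          collect (cons (subseq pair 0 eq) (subseq pair (1+ eq)))
        while end))

;; (parse-cookie-header "session=abc123; theme=dark")
;; => (("session" . "abc123") ("theme" . "dark"))
```

Per RFC 6265 the request header carries only name=value pairs; attributes like Path or Expires appear only on the server's Set-Cookie response header, which is why skipping attributes here is fine.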
11:45:01
selwyn
schweers: i have used lparallel a lot over the last year on HPC. i can't remember encountering memory issues
11:48:19
schweers
selwyn: it seems to not be an lparallel issue, but more of general sbcl behaviour. I’m still not entirely sure what to make of it.
11:54:48
selwyn
i know little about memory management, but it is apparently rare to return memory to the OS, as claimed by the authors of a GC that does: https://www.ravenbrook.com/project/mps/
12:00:50
White_Flame
p_l: and of interest to me and maybe you, my first execution of Ivory instructions, dynamically recompiled to CL: https://pastebin.com/sjAVepSP
12:03:21
p_l
White_Flame has the project that has gone the furthest in reimplementing some of that :)
12:05:33
p_l
VLM is technically official Ivory implementation, just like recent ClearPath mainframes
12:23:44
moldybits
is there some way of treating a vector of vectors so that i can do something to this effect: (aref #(#(a b c)) 0 0) => A
13:26:27
pjb
(reduce (lambda (a b) (print (list a b)) (svref a b)) '(0 1) :initial-value #(#(1 2) #(3 4))) ; => 2
13:28:23
pjb
oh, well, there are reasons why you would choose a vector of vectors. For example, the subvectors may be of different sizes.
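pjb's reduce idiom generalizes to a small helper (the name `vref` is illustrative, not a standard function) that drills through nested vectors with a chain of indices, mimicking the multi-subscript AREF that moldybits asked about; with a genuine 2D array, AREF does this directly.

```lisp
;; Hypothetical helper: multi-index access into nested vectors,
;; so (vref #(#(a b c)) 0 0) behaves like (aref array 0 0).
(defun vref (vector &rest indices)
  (reduce #'aref indices :initial-value vector))

;; (vref #(#(a b c)) 0 0) => A
;; With a real 2D array, AREF already takes multiple subscripts:
;; (aref (make-array '(1 3) :initial-contents '((a b c))) 0 0) => A
```

The nested-vector form keeps the flexibility pjb mentions (rows of different lengths), at the cost of one extra indirection per index.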
13:29:51
jdz
pjb: Sure, I can come up with different reasons myself. That's why I asked moldybits, not you.
14:05:12
Bike
is that a compile lexenv or a runtime lexenv? like, can it have variable bindings in it, or just macros and stuff?
14:06:42
pfdietz
This was in a response in reddit: https://www.reddit.com/r/Common_Lisp/comments/bgd354/dtrace_an_alternative_to_cls_trace/
14:07:27
Bike
if it can't have variable bindings then it's in cltl2... though support for it is unsurprisingly kind of spotty across implementations
14:10:07
pfdietz
The response was in the context of discussing how to do it and include variable bindings.
14:11:26
Bike
"defsubst defines a function just like defun but also defines a compiler-macro that captures any surrounding non-null lexical environment and inlines the function body in its original lexical environment." wow, what
14:12:30
Bike
based on its use in ensure-compiled-body, i think this must work with whole lexical environments. how about that.
14:26:29
Bike
i've found the idea of a system where you could do (enclose [some function] closure-values...) interesting, and i guess you could use that to implement this, provided you were allowed to deal with that kind of incomplete function
14:29:17
phoe
Bike: you'd need to have three kinds of functions then: functions that aren't closures at all, as in, they don't depend on their lexenv; "open closures" that require an environment but don't have it yet, and "closed closures" that have been supplied a lexical environment
14:31:41
Bike
of course for just this compiler thing, the second kind only exists within the system, so it's no big deal
14:33:06
phoe
yep - just, if you want to make them first-class objects, you'll want to elevate them into actual user-interactable objects
14:34:50
jackdaniel
beach: in the sense that I can't take the function's lexenv at runtime and inquire "what is X"
14:35:14
Bike
compile time lexenvs are first class, depending on what you mean by "first class", i guess
14:35:25
phoe
they usually only exist at macroexpansion- and compile-time, they are of dynamic extent, and pretty damn hard to reach and interact with
14:35:50
phoe
but they fulfill the notion of first-classness because they can be returned as values, taken as arguments, and operated upon via cltl2 functionality
14:35:58
Bike
letting code deal with runtime lexenvs arbitrarily puts you in perl world and makes compilation impossible, so i'm okay with that not happening
14:36:49
phoe
jackdaniel: that's another reason why I said kind-of-already-are; cltl2 isn't ansi cl, but implementations provide that functionality anyway, so we may as well live with it
14:37:00
jackdaniel
sure, I'm not saying it is not rational, just suggesting that when you need to interact with them at runtime it is problematic
14:37:28
phoe
jackdaniel: yes, it *is* problematic - these objects aren't meant to be interacted with at runtime, unless you enjoy poking your fingers into compiler stack frames
14:38:26
phoe
jackdaniel: sure - it's possible that I'm just ignorant about the topic, my phrasing is based on what I've seen so far
14:40:22
Bike
https://franz.com/support/documentation/current/doc/operators/excl/compile-lambda-expr-in-env.htm
14:40:45
jackdaniel
phoe: I think that only cmu, sbcl and ccl (from free implementations) have cltl2 interface implemented
14:41:42
jackdaniel
(and the last time I tested it they had some problems with declarations; I have it noted somewhere to report when I'm done with the cltl2 interface implementation for ecl - postponed until after 16.2.0)
14:44:17
jackdaniel
it is also a question of expectations, i.e. you may optimize out some variables and have a very surprised programmer whose lexenv doesn't have x
14:44:20
pfdietz
It just means keeping track of how you stored lexical variables (and their names), so you can hook new code into that. There's not even a performance penalty if the runtime lexenv is never manifested in the code.
14:46:54
pfdietz
If I am understanding it properly, the CLtL2 lexenv interface does miss one thing I'd want: getting a list of all the lexical variables visible in the lexenv (and lexical functions).
14:48:45
jackdaniel
I think that's true, I've just seen an email on cdr mailing list with mraskin proposal (based on cltl2): https://gitlab.common-lisp.net/mraskin/cdr-walkability/blob/master/walkability.txt
14:53:45
jackdaniel
I got interrupted at some point, but it revealed some issues in cltl2
15:15:42
selwyn
moldybits: the library array-operations at https://github.com/bendudson/array-operations has some nice methods to combine and split arrays of arrays which may give the behaviour you would like
16:54:40
drmeister
How often do people (set-funcallable-instance-function funcinst (let ((x ...)) (lambda ...)))
16:55:45
drmeister
Setting the funcallable instance function to a closure? I may need to slow that down for thread safety
17:32:30
pjb
drmeister: so I would say, not often. It's used with call-next-method in the closure in compute-discriminating-function.
18:21:03
moldybits
jdz: i'm representing a 2d dungeon. i want to be able to pick out rows, as well as rectangle slices easily. it's probably just as easy with arrays, but i lack experience with them
18:24:01
mfiano
Oh it's a procedural dungeon generation library I created and recreated a few times over the years. Sadly the images seem to be down: https://www.michaelfiano.com/projects/gamebox-dgen/
18:28:24
mfiano
The function `TEST` should draw it out using unicode line drawing characters in the REPL.
18:30:59
moldybits
cool. i'm patching this through to decent-username. he's doing the map generation thing :)
18:31:24
mfiano
I use it to create all my dungeons, like https://files.michaelfiano.com/images/screenshots/img-20190205113554.png
18:37:49
moldybits
i've had a miserable time getting started, but this is good practice. next jam i'll be more prepared :)
18:41:32
mfiano
That's the idea, and why it's encouraged to submit something even if you don't finish. It sets the bar for next year, and allows others to use it as a base to build upon
18:45:53
mfiano
Ah, I found a visualization of the data structure it produces: https://files.michaelfiano.com/images/screenshots/img-20190203163111.png
18:46:50
mfiano
dark gray = solid, light gray = corridors, blue = rooms, pink = doorways, and you can just ignore the orange circles
18:48:19
mfiano
and you can tweak it pretty well, such as controlling how windy the corridors between rooms are, and the number of cycles added to the MST
18:50:22
mfiano
The orange dots don't mean much after the dungeon is built. They define the graph for when the MST is computed. Basically, an orange dot represents where a doorway could have been.
18:51:37
moldybits
is there an obvious way to simulate an infinitely sized array (for the purpose of not having to worry about bounds-checking, and defaulting to some value)
18:51:59
mfiano
There are some constraints, such as a connector (orange dot) can't be next to a door (pink), and they must be orthogonally and never diagonally adjacent to grey or blue
18:59:21
mfiano
moldybits: either use a vector with vector-push-extend, or if you require a multiple dimension array, you'll have to do the work with adjust-array manually
19:01:22
moldybits
hm. okay, i'll look into that if there's time. for now i'll just make the dungeon extra large :)
19:01:59
aeth
Depends on what you mean by infinite. You have to be fancier if you're infinite in *both* directions, probably by having two actual adjustable arrays, one starting at 0 and the other starting at -1, and then converting the minusp indices to (abs (1+ i)) on the second array
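aeth's two-array idea can be sketched like this (all names here are illustrative): one adjustable vector for indices 0, 1, 2, ... and a second for -1, -2, ..., with negative indices mapped via (abs (1+ i)), and reads past either end returning a default.

```lisp
;; Sketch of a vector "infinite" in both directions, per aeth's description.
(defstruct biarray
  (pos (make-array 0 :adjustable t :fill-pointer t))  ; holds indices 0, 1, 2, ...
  (neg (make-array 0 :adjustable t :fill-pointer t))  ; holds indices -1, -2, ...
  (default nil))

(defun biref (ba i)
  "Read index I, returning the default for never-written indices."
  (let ((v (if (minusp i) (biarray-neg ba) (biarray-pos ba)))
        (j (if (minusp i) (abs (1+ i)) i)))   ; -1 -> 0, -2 -> 1, ...
    (if (< j (fill-pointer v)) (aref v j) (biarray-default ba))))

(defun (setf biref) (value ba i)
  (let ((v (if (minusp i) (biarray-neg ba) (biarray-pos ba)))
        (j (if (minusp i) (abs (1+ i)) i)))
    (loop while (<= (fill-pointer v) j)       ; grow with defaults up to index J
          do (vector-push-extend (biarray-default ba) v))
    (setf (aref v j) value)))
```

Reads never need bounds checks from the caller's side, which is exactly the property moldybits wanted for the dungeon grid; a 2D version would hold a biarray of biarrays or map (x, y) onto this scheme per axis.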
19:06:52
aeth
For people who use both structs and standard objects (some people never touch structs), holding thinly wrapped array data structures is a good use for structs imo.
19:27:05
sjl
I've used complexes instead of 2-element vectors for representing 2d coords in games/art/etc before. You get arithmetic functions, eql-ity, reader syntax, etc all for free.
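A small illustration of sjl's point: `#C(x y)` literals give you reader syntax, and the standard arithmetic functions act as 2D vector operations with no extra code.

```lisp
;; Complex numbers as 2D points: + is vector addition, ABS is magnitude.
(let* ((pos #C(3 4))         ; point at x=3, y=4
       (vel #C(1 -2))        ; per-tick velocity
       (next (+ pos vel)))   ; move one tick: vector addition for free
  (list (realpart next)      ; x component => 4
        (imagpart next)      ; y component => 2
        (abs pos)))          ; distance from origin (here 5)
```

One caveat worth knowing: a complex with rational parts and a zero imaginary part collapses to a rational, so `#C(3 0)` reads as the integer 3; code treating complexes as coordinates has to tolerate plain reals on the x-axis.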
19:41:49
jasom
fun fact, 2 64-bit integers are sufficient to represent any position on the surface of the earth to a precision of 1pm (a picometre)