freenode/#lisp - IRC Chatlog
Search
9:10:17
ogamita
minion: memo for jmercouris: sorry, I made a mistake in the commands for sedit. I'll talk to you this evening.
10:35:50
engblom
Do you know of any tiling window manager (dynamic, not manual) written in any lisp dialect (CL, Scheme, Clojure, whatever)?
10:38:20
beach
"StumpWM is a tiling, keyboard driven X11 Window Manager written entirely in Common Lisp."
10:38:20
engblom
beach: It is a manual tiling window manager. When you create a new window, it does not automatically shrink the other windows to use all of the available screen space.
10:38:54
engblom
In the same way, when you close one window, the rest do not grow to use all the space.
10:39:19
engblom
Most tiling window managers are dynamic, which means this all happens automatically.
10:45:05
engblom
ogamita: https://github.com/ch11ng/exwm/wiki#issues-caused-by-the-single-threaded-nature-of-emacs
10:46:00
engblom
I was using emacs for IRC for a long time, but I got tired of the complete lockout while emacs was waiting for something to happen.
10:51:39
engblom
ebrasca: To add that feature I would have to put more work into it than it is worth. Xmonad already has all the features I want. I was just thinking that a wm written in a lisp dialect would have a somewhat smaller footprint: Xmonad requires GHC (the Haskell compiler), and the whole system takes more than a GB of space.
10:53:06
engblom
beach: It is what I am using and probably I will stay with it if I do not find a smaller alternative.
10:56:18
beach
engblom: A GB of disk space only costs a few cents. Might not be worth worrying about.
11:01:33
no-defun-allowed
SSDs are closer to 30c/GB, and slightly more still if they use PCIe interconnects.
11:02:39
no-defun-allowed
I only have 120GB of flash storage, of which 40GB aren't used by games, the OS, and LaTeX, roughly in order of storage size.
11:02:59
no-defun-allowed
SSDs are almost an order of magnitude faster, and faster still at random reads.
11:04:12
no-defun-allowed
My dad got 16GB of memory a few years back for $100. The single 8GB stick I have cost $80.
11:05:53
no-defun-allowed
The "10GB heap" you suggest in the SICL readme is more memory than most people at my school have.
11:05:54
beach
But many #lisp people are professionals and work for good companies, so I imagine many will have plenty more.
11:06:31
no-defun-allowed
"Whoops, I accidentally ran my parallel Lisp universe simulator on our work servers."
11:07:48
no-defun-allowed
That said, I do think SSDs are often a better performance boost, unless you have to fit your entire dataset in memory for some reason.
11:11:09
no-defun-allowed
For perspective, a good 250GB SSD goes for about US$50, which is as much as a physical copy of SICP, and a decent 1TB SSD is only a bit more than (the book) ANSI Common Lisp. I don't think it's a waste of money.
11:23:39
beach
So for file systems, it seems like you should come up with a bunch of operations that are common to all kinds of file systems, and then for each special version, figure out what is special about it and define additional operations for that.
11:25:04
ebrasca
beach: I have added this: https://github.com/ebrasca/Mezzano/blob/master/file/cache.lisp
11:26:46
ebrasca
I have problems with remote.lisp ( https://github.com/ebrasca/Mezzano/blob/master/file/remote.lisp )
11:32:23
beach
The only protocol I see in that code is the Gray streams protocol, which you didn't design.
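For reference, the Gray streams protocol mentioned here is small: subclass one of the fundamental stream classes and specialize a handful of generic functions. A minimal sketch using the trivial-gray-streams portability layer (the class names and STREAM-READ-CHAR are that library's real API; the upcasing stream itself is just a made-up example):

```lisp
;; Sketch of a Gray stream: an input stream that upcases everything it
;; reads from an underlying SOURCE stream.
(defclass upcasing-input-stream
    (trivial-gray-streams:fundamental-character-input-stream)
  ((source :initarg :source :reader source)))

;; Specializing STREAM-READ-CHAR is enough for READ-CHAR, READ-LINE,
;; etc. to work on the new stream class.  The protocol expects a
;; character or :EOF as the return value.
(defmethod trivial-gray-streams:stream-read-char
    ((stream upcasing-input-stream))
  (let ((char (read-char (source stream) nil :eof)))
    (if (characterp char) (char-upcase char) char)))
```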
11:42:00
shka__
ebrasca: sure, so what beach would recommend is to ask yourself: what is the common set of operations that MUST be implemented by each of those file systems?
11:43:46
shka__
ebrasca: it may be a good idea to make notes and list all the functions that let one "use" a file system
11:44:29
shka__
then, extract only those that can't be implemented as combinations of the others
11:44:59
beach
ebrasca: I take "API design" to mean coming up with a bunch of names and signatures of operations that operate on a set of types. The problems you describe have more to do with implementation techniques, it seems.
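In the sense beach describes here, names and signatures of operations over a set of types, a file-system protocol might be sketched like this. Every name below is hypothetical, not taken from Mezzano:

```lisp
;; Hypothetical protocol sketch: an abstract class plus a small set of
;; primitive generic functions.  Each concrete file system (FAT32,
;; ext2, remote, ...) subclasses FILE-SYSTEM and specializes only the
;; primitives.
(defclass file-system () ())

(defgeneric open-node (file-system path direction)
  (:documentation "Return a stream for PATH on FILE-SYSTEM."))

(defgeneric list-directory (file-system path)
  (:documentation "Return the entries in the directory PATH."))

(defgeneric delete-node (file-system path))

;; Derived operations are written once, in terms of the primitives,
;; and need no per-file-system code:
(defun copy-node (fs from to)
  (with-open-stream (in (open-node fs from :input))
    (with-open-stream (out (open-node fs to :output))
      (loop for byte = (read-byte in nil nil)
            while byte do (write-byte byte out)))))
```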
11:49:12
scymtym
ACTION has the impression that understanding the domain and the requirements should be the first step here. i.e. which concepts are essential and which are accidental (filesystem, file, directory vs. superblock, inode, etc.)? what should a user of the filesystem module be able to do with it?
11:50:40
ebrasca
beach: But if I have a different structure for each version, do I need to duplicate the other functions?
11:51:59
beach
ebrasca: I agree with scymtym, and it looks like you are just asking for help implementing this thing. Which is fine, but doing that would require everything that scymtym says, which is a lot of work. So I don't think I can give you any hints in just a few minutes of looking at your code. I personally don't have time for such an investment, but others might.
11:54:47
ebrasca
beach: I am thinking of trying classes instead of structures and seeing what comes of it.
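On classes vs. structures: one reason the switch can cut duplication is that with DEFCLASS shared slots and methods live on a common superclass, while DEFSTRUCT's :INCLUDE gives only single inheritance and a fixed, non-redefinable layout. A hypothetical sketch (the FAT classes and slot names are invented):

```lisp
;; Shared state and behavior sit on a superclass, so each file-system
;; version only adds what actually differs.
(defclass file-system () ())

(defclass fat-file-system (file-system)
  ((sector-size :initarg :sector-size :reader sector-size)
   (sectors-per-cluster :initarg :sectors-per-cluster
                        :reader sectors-per-cluster)))

(defclass fat32-file-system (fat-file-system)
  ((fat-begin-lba :initarg :fat-begin-lba :reader fat-begin-lba)))

;; One method covers FAT12/16/32 alike; no per-version duplication.
(defmethod cluster-size ((fs fat-file-system))
  (* (sector-size fs) (sectors-per-cluster fs)))
```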
11:55:43
scymtym
ebrasca: sorry if i confused you. my observation was that you seem to be viewing the problem with the technical realizations of different filesystems as the starting point. my suggestion is, for the API design part, to focus on what somebody using your filesystem module would need from it
11:57:54
schweers
I have a more recent version on my development machine, but there I only use heap sizes of 20000 or so.
11:59:11
schweers
On the other hand, I could try a newer version of SBCL. As far as I know, SBCL can be used without being installed, so I don’t have to mess around in the system.
12:06:18
schweers
shka__: does your problem sometimes come up for no apparent reason? I.e., how do you deal with it?
12:10:15
shka__
well, it happens during garbage collection, and i think i have a reliable way to trigger it (allocate lots of standard instances (and retain some) while asking the GC to perform a full sweep multiple times in the meantime), but i don't know exactly what is causing it
12:10:47
shka__
it seems to work fine on 8GB heap, but when i crank it up to 80GB things start to break
12:13:14
shka__
i am kinda surprised that you never experienced this error yourself; googling "sbcl gc invariant lost" yields plenty of results
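shka__'s trigger, spelled out as code. The PAYLOAD class and batch sizes are made up for illustration; SB-EXT:GC is SBCL's real entry point for forcing a collection:

```lisp
;; Sketch of the reproduction described above: allocate lots of
;; standard instances, retain some, and force full GCs in between.
(defclass payload ()
  ((data :initform (make-array 64 :initial-element 0))))

(defun stress-gc (&key (batches 100) (per-batch 10000))
  (let ((retained '()))
    (dotimes (i batches retained)
      ;; Allocate a batch; most of it becomes garbage immediately.
      (push (loop repeat per-batch collect (make-instance 'payload))
            retained)
      ;; Keep only the most recent few batches alive.
      (setf retained (subseq retained 0 (min 5 (length retained))))
      ;; Demand a full sweep each iteration.
      (sb-ext:gc :full t))))
```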
12:21:36
schweers
Ah, one problem I do have, which may or may not be my own fault: I have several asdf systems which implement some task or other, which represent different stages of processing. I have a small wrapper application which depends on said systems and runs those stages given on the command line. Sometimes when processing a large dataset, GC fails to reclaim memory after one or more steps have run. When I then simply restart
12:21:36
schweers
the job, leaving out the stages which were already complete, it works. As far as I can tell, I don’t hold onto anything which might be large, but as I am not perfect, this may be my own wrongdoing. At any rate, have you experienced something similar?
12:24:47
schweers
I have such a job running, I’ll check if it does indeed report something like your problem
12:54:37
beach
Maybe you don't have to. If you can make it easy to reproduce, that might be good enough.
13:15:13
ogamita
engblom: indeed. I use 3 emacsen: one for gnus, one for erc, and one for development.
13:21:16
ogamita
ebrasca: I would say the most important feature of an API is for it to be "introspective", i.e. that you can use it to query and get all the meta-information you need to use it, even if you don't know it a priori. This includes that no API call may crash, whatever arguments are passed or not. Basically, the opposite of FFI.
13:23:52
ogamita
in the case of a "file system" API, one should be able to use the API equally easily to access any and all file systems, whatever their quirks and idiosyncrasies. (e.g. use logical pathnames, not physical pathnames ;-) )
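Logical pathnames, as suggested, are standard Common Lisp; client code names files under a logical host, and only the translation table knows the physical layout. The "DATA" host and directories here are invented:

```lisp
;; Hypothetical logical host: client code says DATA:..., and only this
;; table maps it onto the physical file system.
(setf (logical-pathname-translations "DATA")
      '(("SOURCES;**;*.*" "/home/user/project/src/**/*.*")))

;; Client code never mentions /home/user/... directly:
(translate-logical-pathname "DATA:SOURCES;MAIN.LISP")
;; returns a physical pathname under /home/user/project/src/
```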
13:26:03
ogamita
schweers: I would assume that when you have a lot of memory the GC would feel much less pressured to collect garbage… So garbage would accumulate.
13:27:06
schweers
That may be the case, but why does SBCL then not run the GC when memory becomes tight? If I remember correctly, SBCL reports that it could not allocate any memory.
13:28:48
ogamita
Ok, the point is that some GC algorithms need temporary memory to work. Probably this temp space is proportional to the garbage (or perhaps to the live set). In any case, its size may be hard-wired for expected heap sizes, not for 10 times that amount. Perhaps you can play with SBCL's command-line options to leave more space available for this temp space?
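One concrete knob here, assuming SBCL: the heap size is set at startup with the documented --dynamic-space-size runtime option. A config sketch (the 16GB figure and job.lisp are just examples):

```shell
# Start SBCL with a larger dynamic space; the argument is in
# megabytes by default, or takes a size suffix in recent SBCLs.
sbcl --dynamic-space-size 16GB --load job.lisp
```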