freenode/#lisp - IRC Chatlog
Search
1:40:52
aeth
Fortunately, most of the probable nickname collisions are libraries that do the exact same thing, probably wrapping the exact same C library.
1:46:19
pfdietz
At this point, I mostly care "does loading this system ruin my lisp session". Purely internal quality is less of a concern.
1:47:46
aeth
Without care, a modest sized application could be using dozens, which also probably means duplicated functionality (e.g. 3 JSON libraries or something)
1:52:16
aeth
pfdietz: It looks like I use "::" twice, to declare a type that's not exported and to fix a performance bug in ECL, i.e. (setf cffi::*cffi-ecl-method* :c/c++)
6:26:51
phoe
I want to write a paper extending CLIM2's idea of protocols and extending your extensions of this idea.
6:27:27
phoe
But with >9000 other papers on software development, engineering, software modularity and software interfaces, I don't think it would be any kind of significant contribution.
6:30:44
beach
Most work in software engineering etc. assumes a Java-style object-oriented model, with single dispatch and methods inside classes.
6:31:07
beach
Therefore, some work related to generic functions etc. could very well be unique.
7:55:35
red-dot
Has anyone seen a good book or reference for format directives? I've read CLHS, PCL and CLtL but still do not really have a good grasp on it. I need to do some character based formatting.
7:57:49
red-dot
Weitz has a few examples too, but I think the real answer is going to be: 'go experiment'.
7:58:36
Shinmera
Okay, well, for that you first have to figure out the width of each column. Format won't be able to do that for you. Once you have that, you just print each row using the width arguments of the format directives for each column.
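A minimal sketch of what Shinmera describes (function and argument names are made up): first compute the maximum width of each column, then feed those widths to the `~vA` directive, which pads its argument to a runtime-supplied width.

```lisp
;; Sketch: column-aligned output with FORMAT's ~vA directive.
;; ROWS is a list of rows, each row a list of strings.
(defun print-table (rows &optional (stream *standard-output*))
  ;; Transpose the rows into columns and take the longest cell
  ;; in each column as that column's width.
  (let ((widths (apply #'mapcar
                       (lambda (&rest cells)
                         (reduce #'max cells :key #'length))
                       rows)))
    ;; ~{...~} iterates; ~vA consumes a width and a value,
    ;; padding the value on the right to that width.
    (dolist (row rows)
      (format stream "~{~vA ~}~%"
              (mapcan #'list widths row)))))
```

Calling `(print-table '(("name" "value") ("x" "42")))` prints both rows with each column padded to its widest entry.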
8:01:50
red-dot
That's what I thought, thanks. Am hoping there are some good examples of this, or a Guide to Format Directives somewhere. At some point this must have been more commonly used, but I suspect such information did not make it over to the Internet, as it probably pre-dates it.
8:02:59
Shinmera
This might not be the best way to do things, but here's an example: https://github.com/Shinmera/trivial-benchmark/blob/master/toolkit.lisp#L9
8:04:29
red-dot
Weitz has a pretty-printer example that might also do the trick. Good thing it is the weekend.
8:11:06
jackdaniel
I have mixed feelings about format, which partially overlap with the points brought up here http://www.cs.yale.edu/homes/dvm/format-stinks.html
8:22:19
red-dot
Well, I will not argue that there should not be a better way. Just like LOOP vs. iterate. 'out' might be worth looking at.
8:24:06
jack_rabbit
When load-testing, I'm getting "Can't handle a new request, too many request threads already"
8:24:37
jack_rabbit
Which makes sense, but I'd prefer those requests to go into a pending queue rather than being dropped altogether.
8:33:58
jack_rabbit
hmm. appears the acceptor has a listen-backlog initarg. But it doesn't appear to eliminate the error message.
8:46:51
Shinmera
Typically when you run out of threads your system is so badly congested that keeping requests idle is not going to help; it will just load your system even worse by taking up FDs
8:48:38
jack_rabbit
Keeping a large queue may be unwise, but a queue is totally reasonable under many circumstances.
8:49:12
jack_rabbit
anyway, it appears that if I want to increase the queue, I need to alter the taskmaster.
8:49:30
jackdaniel
what you want (setting aside how it's implemented in hunchentoot - you may subclass the taskmaster) is an lparallel channel with n workers
8:57:37
jack_rabbit
anyway, I manually instantiated a taskmaster with a higher max-accept-count, and everything works as expected now.
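For reference, a sketch of the fix jack_rabbit describes, assuming Hunchentoot's documented initargs (the port number and counts here are arbitrary): `one-thread-per-connection-taskmaster` takes `:max-thread-count` and `:max-accept-count`; connections accepted beyond the thread limit wait in a queue up to the accept count.

```lisp
;; Assumes (ql:quickload "hunchentoot") has been run.
;; max-accept-count must exceed max-thread-count; the difference
;; is the size of the pending-request queue.
(defvar *acceptor*
  (make-instance 'hunchentoot:easy-acceptor
                 :port 8080
                 :taskmaster (make-instance
                              'hunchentoot:one-thread-per-connection-taskmaster
                              :max-thread-count 100
                              :max-accept-count 120)))
;; (hunchentoot:start *acceptor*)
```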
8:58:45
jack_rabbit
I wonder, though, if it might be a good patch for the acceptor thread to check (if it can) with the taskmaster to see if the accepted queue is full before accepting, and to wait if it is full.
8:59:09
jack_rabbit
Because I increased the listener queue, but the listener was able to happily keep up with the request rate.
8:59:44
jack_rabbit
It was the accepted queue that was rejecting the connections, and even though it was full, the listener was happy to accept connections, causing them to return 503.
9:18:00
hjudt
is there a standard function that does (cdr (assoc ...)) or does one have to write a macro?
9:21:10
hjudt
but only if you :import-from it rather than using alexandria:assoc-value directly, because that length sucks a bit
9:21:45
hjudt
of course for this simple function i could also ignore alexandria and write it on my own...
9:22:29
jackdaniel
I personally find multiple-value-bind and destructuring-bind not very pleasant to write
10:20:42
Shinmera
I really wish SETF didn't have the restriction that the value being set has to be returned. It's not unusual that I want to coerce the value being set to something else, which would be more useful to return instead.
10:27:50
scymtym_
phoe: it can make sense to have FOO and (SETF FOO) accept the same set of keyword arguments even if some are ignored in one of the functions: consider (incf (value "counter" :default 5))
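A hedged sketch of scymtym_'s point, with made-up names: the reader and the setf function share a lambda list, the writer simply ignores `:default`, and `(incf (value ...))` works because the default setf expansion passes the same arguments to both.

```lisp
;; A toy counter table keyed by string names.
(defvar *counters* (make-hash-table :test #'equal))

;; Reader: :default is meaningful here.
(defun value (name &key (default 0))
  (gethash name *counters* default))

;; Writer: same keyword accepted, but ignored.
(defun (setf value) (new-value name &key default)
  (declare (ignore default))
  (setf (gethash name *counters*) new-value))

;; (incf (value "counter" :default 5)) reads 5, stores 6.
```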
10:28:48
phoe
When I setf the decompressed version, this means that the compressed version no longer refers to the decompressed version - they might be different.
10:29:04
phoe
This is why I want to do something like (setf (decompressed-data foo :invalidate-compressed-data-p t) new-data)
10:30:27
Shinmera
Automatically decompress and update the decompressed slot when the compressed slot is set.
10:31:08
phoe
This binary file consists of multiple (up to thousands) of smaller archives concatenated together.
10:31:51
phoe
For efficiency, if I edit one of them, I want only this single one to be recompressed - the other ones can use their original compressed form, the one I originally read from the source file.
10:32:41
Shinmera
Sounds to me more like you should have a data slot and a flag that says whether it's compressed. Or two subclasses, one being compressed and one being not. Then use change-class (and update-instance-for-changed-class) to handle the compression.
10:33:34
Shinmera
Yeah but you synthesise one from the other. When you read or when you save respectively.
10:34:51
phoe
I have an object that holds both compressed and uncompressed variants of the same data.
10:35:44
phoe
So, when I actually decide to save and compress, my code can compress this thing, set the compressed-data slot, and save it to disk.
10:36:13
Shinmera
From my understanding I would do the following: when reading the file read to a list of compressed-data instances. Then when you want to read/write the value, you change-class to decompressed-data first. Finally when you want to save to disk, you change-class them all to compressed-data and write out.
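A minimal sketch of the change-class approach Shinmera and scymtym_ describe. All class and slot names are hypothetical, and the actual (de)compression is faked as identity just to show where it would go; `update-instance-for-different-class` carries the data across because the two classes deliberately share no slot names.

```lisp
(defclass compressed-entry ()
  ((compressed-data :initarg :compressed-data
                    :accessor compressed-data)))

(defclass decompressed-entry ()
  ((data :accessor data)))

;; compressed -> decompressed: "decompress" on the way in.
(defmethod update-instance-for-different-class :after
    ((old compressed-entry) (new decompressed-entry) &key)
  (setf (data new) (compressed-data old)))   ; real code: decompress here

;; decompressed -> compressed: "compress" on the way out.
(defmethod update-instance-for-different-class :after
    ((old decompressed-entry) (new compressed-entry) &key)
  (setf (compressed-data new) (data old)))   ; real code: compress here
```

With this, only the entries you actually `change-class` to `decompressed-entry` ever pay the compression cost on save, which matches phoe's 1-of-300 requirement.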
10:36:46
phoe
If I have 300 files and only change one of them, this means I compress 300 files and not 1.
10:37:14
scymtym_
sounds like a state chart, maybe via subclasses, is in order: loaded --decompress--> clean --(setf data)--> dirty
10:39:30
phoe
When I load the file, I first set compressed-data. Then I decompress, and set uncompressed-data.
10:40:42
scymtym_
or, if you really don't like SLOT-VALUE for this, make an internal :accessor %decompressed-data
10:43:21
phoe
And so my custom writer can be reduced to (defmethod (setf data) :after (...) (invalidate-something))
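A sketch of that invalidation idea, with hypothetical names: the slot accessor already defines a `(setf data)` generic function, so an `:after` method on it can clear the cached compressed form whenever the data is written.

```lisp
(defclass entry ()
  ((data :initform nil :accessor data)
   (compressed-data :initform nil :accessor compressed-data)))

;; Writing DATA invalidates the cached compressed form.
(defmethod (setf data) :after (new-value (entry entry))
  (declare (ignore new-value))
  (setf (compressed-data entry) nil))
```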
11:33:02
jack_rabbit
can someone describe the mechanism by which setf is allowed to be a function with a list name?
11:34:48
jack_rabbit
It's a special case, and implementations are free to define other setf-like functions.
11:36:27
jack_rabbit
Or I guess, given that description, implementations can even define weirder name structures.
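To make the mechanism concrete (names here are invented for illustration): the standard's only list-shaped function names are of the form `(setf symbol)`, and once such a function is defined, `setf` on the corresponding reader form expands into a call to it.

```lisp
(defvar *box* nil)

(defun contents () *box*)

;; A function whose name is the list (SETF CONTENTS).
;; By convention the new value is the first argument.
(defun (setf contents) (new-value)
  (setf *box* new-value))

;; (setf (contents) 42) expands into a call to #'(setf contents).
```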