freenode/#lisp - IRC Chatlog
Search
12:06:05
beach
It's debatable though, whether the parameters in the lambda list are considered to be bound as with LET or as with LET*.
12:07:08
beach
But the initialization forms of the others are evaluated in an environment where all previous parameters are in scope.
12:07:47
jackdaniel
according to this: http://www.lispworks.com/documentation/HyperSpec/Body/03_dae.htm it is LET* indeed
12:12:02
beach
But I can't find the place in the Common Lisp HyperSpec now where it says that the init-form of a parameter is in a scope including all previous parameters.
12:14:10
beach
It seems to me that if (defun f (x &key (y (f x))) ...) is allowed, then (defun f (x &key (x (f x))) ...) ought to be allowed as well.
12:15:42
beach
So (defun f (sequence &key (start 0) (end (length sequence))) ...) is certainly allowed.
12:16:49
flip214
but with two 'x's, e.g. SBCL says: The variable X occurs more than once in the lambda list.
12:18:20
beach
flip214: All I am saying is that, since the init-forms can refer to previous parameters, it smells like a LET* to me.
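(A minimal sketch of the LET*-like scoping under discussion; each init-form is evaluated with all parameters to its left already bound. SUBSEQ-LENGTH is a hypothetical name.)

```lisp
;; END's init-form refers to SEQUENCE, a parameter to its left,
;; exactly as a later binding in LET* may refer to an earlier one.
(defun subseq-length (sequence &key (start 0) (end (length sequence)))
  (- end start))

(subseq-length '(a b c d))          ; => 4
(subseq-length '(a b c d) :start 1) ; => 3
```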
12:21:28
jackdaniel
you may find this passage relevant: https://common-lisp.net/project/ecl/static/manual/ch05.html#ansi.let-behavior
12:23:17
flip214
perhaps multiple colliding parameter names are so obviously wrong that it isn't written down
12:25:07
beach
flip214: It is obviously wrong for required parameters. Not so obvious for &optional or &key parameters.
12:26:31
jackdaniel
beach: well, not necessarily wrong for required parameters *if* they are ignored
12:26:56
jackdaniel
that came up with code which was passing functions to other functions which were expected to have a certain lambda list
12:27:14
beach
flip214: And since it doesn't seem obvious to me, I started this discussion with "It's debatable though".
12:27:58
jackdaniel
but that was not a strong enough reason to refrain from signalling an error on such occasions (it helps to catch more problematic code)
12:28:40
beach
flip214: Of course, if you find the passage in the Common Lisp HyperSpec supporting your position, it won't be debatable anymore.
12:32:08
flip214
beach: I just wouldn't understand when it would _ever_ be useful to have two input parameters with the same name - one would hide the other!
12:41:40
splittist
flip214: it might make certain types of generated code easier. Or harder, of course (:
12:46:09
ecraven
well, given that actually *writing* anything to disk is much slower than CPU, maybe compressing your data is faster? write less -> write faster
12:47:14
beach
flip214: I don't have a use case for it either, but that could just mean that I am not smart enough to see it. You know, just like the people who don't see a use case for nested functions, multiple dispatch, or first-class packages.
13:56:23
hjudt
performance-wise, does it make a difference if i loop over a hash-table with (loop for v being the hash-values of...) or (loop for v in (list-values hash-table)...)?
13:57:29
dlowe
it will be considerably slower and the allocation will cause more garbage collection (though how much is hard to say - probably not much)
13:58:15
dlowe
The loop syntax isn't great for hash tables. I think the ITERATE library is better in this regard.
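(For comparison, a small sketch of the two iteration styles hjudt asks about, using an inline LOOP collect in place of the user-defined list-values helper:)

```lisp
(let ((ht (make-hash-table)))
  (setf (gethash :a ht) 1
        (gethash :b ht) 2)
  ;; Direct iteration: no intermediate list is allocated.
  (loop for v being the hash-values of ht
        sum v)
  ;; Building a list first allocates it, then iterates over it.
  (loop for v in (loop for v being the hash-values of ht
                       collect v)
        sum v))
```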
13:58:45
hjudt
ok, i guessed so. i want to pass either a hash-table or a list as parameter to a function, so it is probably better to use typecase to distinguish between the two cases.
13:59:26
dlowe
yeah any time you want to use typecase is a good indicator that you actually might want a generic function.
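(A sketch of dlowe's suggestion: dispatch on the argument's class with a generic function instead of TYPECASE. MAP-VALUES is a hypothetical name.)

```lisp
(defgeneric map-values (function collection)
  (:documentation "Call FUNCTION on each value in COLLECTION."))

;; For a list, the values are the elements themselves.
(defmethod map-values (function (collection list))
  (mapc function collection)
  nil)

;; For a hash table, iterate over the values and ignore the keys.
(defmethod map-values (function (collection hash-table))
  (maphash (lambda (key value)
             (declare (ignore key))
             (funcall function value))
           collection))
```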
14:00:54
hjudt
probably in this case it is better to simply use defun, i've already thought about methods.
14:01:15
dlowe
If performance isn't critical, it is probably best to convert your hashtable to a list, just to avoid duplicating logic.
14:02:06
hjudt
i've put the logic in a local function, so duplicating is only with the two cases of loops (hash-table vs list).
14:05:38
hjudt
actually i could also adapt my list-values function so it returns a list which is only regenerated on changes to the hash-table, which only occur at a few predictable places. this might also be a nice improvement i will reconsider...
14:06:54
beach
hjudt: That won't be easy. If you delete an item from the hash table, you may have to traverse the list which takes linear time.
14:07:52
hjudt
beach: that won't be necessary in my use case. actually the hash table is always cleared and rebuilt from scratch, nothing will be deleted.
14:09:31
hjudt
dlowe: at the moment, 27854 items. lookup is fast the way i use it, but sometimes i have to search the full ht and that shouldn't be too slow either, or it will take a few minutes instead of a few seconds.
14:11:52
hjudt
an item can have several properties which need to match, so the ht is useless for this except for the lookup.
14:13:07
hjudt
only when i have enough info about an item to identify it exactly can i do a lookup.
14:17:55
trittweiler
hjudt: that sounds like a "trie" data structure might be more desirable instead. You would traverse the trie property by property
14:20:10
hjudt
trittweiler: thanks, i will look into this, maybe it is easier to handle. for now, the current solution is fast enough and the number of items only increases slowly.
14:55:20
beach
Can someone please read this http://g.oswego.edu/dl/html/malloc.html and help me out with a question? In section "Algorithms", right after the first figure, he writes: "More recent versions omit trailer fields on chunks that are in use by the program". I don't see how that could work. If a block is freed, how can we tell whether the previous block is free as well then?
15:06:51
shka
beach: from what i understand, the author says that instead of storing the size two times, it can be saved in just one place, at the end of the chunk
15:08:14
beach
shka: OK, so you come in with a chunk that has just been freed, and you want to check whether the previous chunk is free too, so that you can coalesce the two.
15:10:18
beach
Normally, the way to do that is to check the size field of the previous chunk, subtract that value to find the beginning and then check whether the status is FREE.
15:10:25
shka
so my understanding is that you KNOW what the size of the next chunk is, and therefore you can find the size
15:10:48
beach
So you come in with a chunk that has just been freed, and you want to check whether the PREVIOUS chunk is free too, so that you can coalesce the two.
15:11:19
beach
Normally, the way to do that is to check the size field at the END of the PREVIOUS chunk, subtract that value to find the beginning and then check whether the status is FREE.
15:21:17
beach
Anyway, it is not terribly important. I'll just include the trailing size field always.
15:25:29
minion
jmercouris, memo from pjb: We may need some programmer knowing Cuda (nVidia) and computer vision in a couple of months. I'd prefer a lisper with cl-cuda, rather than a pythonista; there's already enough of them :-) Do you have a linkedIn ?
15:26:25
jmercouris
so, I'm looking at the file, and it looks like this: https://imgur.com/a/araXKef
15:26:42
jmercouris
sometimes where there should be " or other characters, there is a strange symbol, but it is always the same
15:26:45
flip214
beach: in case that's a reassurance, the SBCL developers don't see a usecase for duplicate function parameter names either ;)
15:30:14
jmercouris
the thing is, every time something like: https://imgur.com/a/araXKef comes up, it is always where quotes should be, or brackets, or some symbol
15:30:52
jmercouris
I think the mistake is on the end of the person providing the file, because regardless of how I open the file, with whatever program, it looks strange
15:41:12
beach
See whether there are any control characters or something like that. Near the place that looks funny.
15:41:50
jmercouris
the problem is the file is around 38gb, so it is quite hard to find the problematic spots
15:42:52
jmercouris
I think this is the closest I've felt to programming in the matrix as a developer
15:45:20
beach
You can try to split the file in two and see whether one of the halves has the same problem. Then you can continue this process until you have a smaller file.
15:46:28
jmercouris
so I ended up running 'od -cX' so I could see hex and the characters at the same time
15:47:20
jmercouris
which leads me to believe that someone who encoded this file did some translation somewhere somehow with some tool that messed everything up
15:47:54
jmercouris
so I'll have to probably make some regex or some manipulation to clean up these strings
15:52:04
fmsbeekmans
Hi, I'm thinking about starting a little book club with my colleagues for books that I have been interested in for some time, namely The Little Schemer and SICP. Would the former be interesting to experienced functional developers?
15:53:00
beach
fmsbeekmans: Well, you are in a Common Lisp only channel, so we would recommend Common Lisp books instead.
16:21:43
jackdaniel
fmsbeekmans: there is also #scheme if you are interested in that particular dialect
16:28:11
fmsbeekmans
jackdaniel: Thanks, doesn't really matter which dialect. I'll have a try there.
16:38:52
jasom
beach: after getting some sleep, you're using a sliding window for the GC? My intuition is that this will make a smaller nursery more useful, as the problem with a classical nursery is that short-lived objects get promoted just because they were recently allocated before a GC, and with a small nursery the fraction of false-positives goes up...
16:42:17
beach
A sliding collector makes it unlikely that a recently allocated object will be promoted if a GC happens right after that allocation.
16:43:18
beach
Another way of putting it is that the sliding collector has a very precise idea of the relative age of objects, so it can promote the oldest and leave the youngest in the nursery.
16:43:44
beach
shka: I understood how it works. I'll draw a nice picture and show it to you at some point.
16:45:07
beach
jasom: The thing is that the sliding collector is seen as too costly in the literature (like the one I just cited), because of the screwy way the break table is built.
16:46:00
jasom
With a pointer-increment nursery, it would seem like it could just be a fraction of the nursery, particularly when you have a separate large-object pool.
17:42:50
beach
Right now, I am working on trying to understand (and explain in the spec) Doug Lea's memory allocator and how I will adapt it for use in SICL.
17:44:57
beach
jasom: But I think you are right. The old problem of references from an old to a young generation will likely pop up if you try that.
17:45:29
jeosol
jmercouris: really? hmmm. Not sure what the competition is doing. Or may be few references so far.
17:46:48
beach
jasom: I am off to spend time with my (admittedly small) family. I'll be back tomorrow morning (UTC+2).
18:08:55
drmeister
The optimization where calls within a compilation unit can be called directly without going through symbol-function - does anyone know where that is described in the CLHS?
18:22:09
jasom
the "inline" declaration implicitly has 3 states, with the default state not being one that you can declare
18:42:31
scymtym__
there is also (declaim (inline …)) (defun …) (declaim (notinline …)) which can mean "don't inline by default but respect local inline declarations"
18:45:18
Inline
with a sequential inline then notinline, i would have thought that would be just temporary inlining
18:46:58
Inline
i.e. jump to its place and return to where you left off instead of copying and expanding its code
18:49:09
scymtym__
the initial inline declaration prepares the function for inlining *and* makes inlining the default behavior. the subsequent notinline declaration leaves the function prepared for inlining but changes the default behavior back to not inlining. after that, the function is still prepared for inlining which can be requested, for example, via (locally (declare (inline …)))
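(The idiom scymtym__ describes, sketched out; FAST-PATH and CALLER are hypothetical names.)

```lisp
;; Record the definition so the compiler can inline it ...
(declaim (inline fast-path))
(defun fast-path (x) (* x x))
;; ... but make NOTINLINE the default at call sites.
(declaim (notinline fast-path))

;; A caller that wants the inline expansion requests it locally:
(defun caller (x)
  (locally (declare (inline fast-path))
    (fast-path x)))
```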
19:04:29
pjb
scymtym__: this is wrong. notinline has nothing to do with inlining, and everything to do with notinlining.
19:07:49
pjb
Notably, it does nothing to the function; those declarations only concern the calls to the function, not the function itself.
19:09:48
scymtym__
the sentence that starts with "To define a function f that is not inline by default …" seems to be about the definition
19:09:58
pjb
The idiom in question has the effect of letting the compiler compile some recursive calls as inline calls (which can be done, surprisingly). If you had the notinline before, then this optimization wouldn't be available to the compiler. But that's the only effect of this order.
19:11:26
pjb
The only thing is that having the inline declaration before the function helps the compiler to book-keep information for inlining.
19:11:50
pjb
notice that the compiler is allowed to inline function calls to functions defined inside the same compilation-unit anyways!
19:12:16
pjb
Therefore this book-keeping stuff is bullshit: the compilers are already allowed to keep the information whatever declaration you put.
19:15:50
pjb
Also, a compiler can compile a function call as being both inline and notinline. Because semantically, the effect of notinline is actually that the function call (foo …) is AS IF (funcall (symbol-function 'foo) …). So the compiler may call the function inline (copy the source of the function in place of the function call), AND keep track of the dependency of the caller to the callee, so when the callee is redefined (setf
19:44:34
p_l
hmm, would taking a tag bit for "this is a forwarding pointer" and having the possibility of reading it instead of the header work?.... hmmm
20:57:36
jmercouris
is it enough to have a top-level statement that invokes a function to load this data from the database and then dump the image?
21:01:29
rpg
jmercouris: probably.... depends a lot on what you want to do with the data. Also, you probably want to tear down the database connection before you dump the image.
21:02:24
jmercouris
which leads me to another question, will having a large hash-table persisted in my image slow down my application start-up time?
21:02:32
rpg
jmercouris: That should be fine then, but notice that you don't want an image with a half-open database connection!
21:03:44
jmercouris
I'm talking about a very large hash table, perhaps one that has around 100,000 entries
21:03:50
rpg
jmercouris: I believe the answer is "no," but it's pretty dependent on implementation, memory, etc. The basic notion is that you are pulling the full image in and that's faster than taking a fasl and doing all the things involved in loading it.
21:04:16
jmercouris
I need my program to start almost instantaneously, and then load this information in memory
21:06:34
rpg
If your users are definitely going to need this table, then there's no reason to load it lazily, and I believe this is the fastest way to load it.
21:06:57
jmercouris
I just would like to give them an opportunity to begin typing before the table is completely loaded into memory
21:08:12
rpg
Then this should be the fastest way to get it loaded. I don't know if there's any way to optimize the memory layout (e.g., to keep the hash table in contiguous memory) before dumping. I'd definitely suggest creating the hash table with a known-big-enough :size argument -- that might improve the initial memory layout.
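(For example, a sketch of rpg's suggestion; the effect of :SIZE on memory layout is implementation-dependent:)

```lisp
;; Pre-size the table so it never has to rehash while being filled,
;; which may also keep its storage more contiguous before dumping.
(defparameter *big-table*
  (make-hash-table :test #'equal :size 100000))
```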
21:08:41
rpg
I wonder if you can somehow shove it into old memory, since you are never going to want it garbage-collected.
21:17:06
rpg
jmercouris: You don't want the garbage collector to bother looking at this table -- you know it's going to stay around. It might help make your program more efficient to find a way (this would definitely be SBCL-specific) to tell the lisp environment that this is permanent memory so it can be hidden from the garbage collector.
21:27:17
whartung
Yea, mmap. Store it “off heap”, GC never even knows it exists, but you get to handle all the access and marshalling of stuff — which is a pain.
21:28:31
whartung
(and then you think “why not store it in a db and let it cache it anyway, isn’t that what they’re for?”)
22:56:50
didi
Decisions, decisions... should I start using ENDP, instead of NULL, to test for the end of a list?
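(For reference, the difference between the two: ENDP is specified for proper-list endings and, in safe code, should signal a type-error on a non-list, whereas NULL simply tests for NIL.)

```lisp
(endp '())          ; => T
(endp '(1 2 3))     ; => NIL
(null '())          ; => T
(null 'not-a-list)  ; => NIL
;; (endp 'not-a-list) should signal a TYPE-ERROR in safe code,
;; whereas NULL returns NIL for any non-NIL object.
```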
23:04:40
pjb
whartung: if you use mmap, you may use com.informatimago.common-lisp.heap.heap to store lisp objects in it.
23:07:14
didi
In another news, I recently discovered LOOP's keyword NCONC. It took me long enough. :-P
23:07:23
pjb
Now, it's not native lisp objects, so that may be a drawback, but it allows sharing lisp objects (using shared memory) with other implementations, so that may be an advantage.
23:18:54
didi
And a nitpick: LOOP's keywords for hash-tables are confusing. Using [each, the, in, of] makes no difference, but I was trained that using [in, on] with lists does.
23:36:58
White_Flame
ugh, yet another (vector (unsigned-byte 8)) vs (simple-array (unsigned-byte 8)) mismatch, this time with cl-sqlite