libera/#commonlisp - IRC Chatlog
14:57:27
phantomics
flip214: I'm counting the number of cores that are currently busy, the number of cores present is simple to find
15:39:54
beach
Is SBCL still emitting style warnings for slots with the same SYMBOL-NAME but different SYMBOL-PACKAGE, or is it just that I haven't updated my SBCL for some time?
15:58:13
mfiano
beach: In response to my issue, this was implemented in 2.1.9: "finalizing classes with slots with duplicate symbol-names will only emit a warning if either slot name is an exported symbol."
16:03:00
mfiano
Yes, but beach and I explicitly annotate slot names to not coincide with exported symbols.
16:29:28
flip214
phantomics: cores busy _globally_, or with your CL image? Counting active _threads_ is easy... but _cores_ needs scheduler support, eg. reading /proc/self/sched or /proc/self/status every second and building differences
16:32:26
phantomics
flip214: the goal is counting threads busy in the CL image. (lparallel:task-categories-running) returns this information but it builds a vector, and my goal would be to count active threads very often, anytime I might want to put a task in the lparallel channel, so I'd rather not use a method with that much overhead
16:56:26
flip214
phantomics: so, if you have 8 cores but 20 threads running, you'd want a value of 20?
16:57:41
flip214
the scheduler already has a few counters: https://github.com/lmj/lparallel/blob/master/src/kernel/classes.lisp#L68
17:02:22
flip214
phantomics: how about this? https://github.com/lmj/lparallel/blob/master/src/kernel/stealing-scheduler.lisp#L101
17:02:24
phantomics
The issue is that I want to split complex tasks across threads but the efficiency of doing so depends on the available workers
17:03:11
flip214
phantomics: well, if you create your own kernel, you _know_ how many workers are available. Of course, the number of available _cores_ is a harder question....
17:05:22
phantomics
Basically, the issue is that I'm running tasks that can be divided into independent parts, thus can be multithreaded. However, some parts of a task may be more complicated than others, and an individual sub-task may itself contain threadable parts
17:05:48
phantomics
So if one of the sub-tasks is threadable, I would want to subdivide it and then check whether there are idle workers
17:06:43
phantomics
If there are idle workers, they will get parts of the sub-task to do. If not, I will run the parts of the sub-task synchronously, -but- after each one completes, I will check again whether there are idle workers, and if so, I will start assigning them parts of the sub-task
17:10:06
phantomics
Would the maybe-wake-a-worker function be usable to check whether a worker is idle?
17:12:37
White_Flame
you also need to average multiple samples of this. a single snapshot could happen to be running a bunch of 1 nanosecond jobs simultaneously
17:14:45
phantomics
The consequence of that would simply be that one task segment would get run synchronously, but when the subsequent segments are run the system would see idle workers and assign them the following segments
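The dispatch strategy phantomics describes can be sketched outside of lparallel. Below is a minimal Python transposition (illustrative only; `idle_workers`, `_tracked`, and the busy counter are my own names, not lparallel's API): track how many pool workers are busy, and before each part of a sub-task either submit it to an idle worker or run it inline, rechecking after every part.

```python
import threading
from concurrent.futures import ThreadPoolExecutor

# Sketch of the check-idle-then-submit-or-run-inline loop described above.
# All names here are illustrative; lparallel keeps its own counters
# (see classes.lisp linked earlier in the discussion).

N_WORKERS = 4
_busy = 0
_lock = threading.Lock()

def _tracked(fn):
    """Run fn on a pool worker while counting it as busy."""
    global _busy
    with _lock:
        _busy += 1
    try:
        return fn()
    finally:
        with _lock:
            _busy -= 1

def idle_workers():
    with _lock:
        return N_WORKERS - _busy

def run_parts(pool, parts):
    """Submit each part to an idle worker if one exists, else run it
    synchronously; idleness is rechecked before every part."""
    futures, results = [], []
    for part in parts:
        if idle_workers() > 0:
            futures.append(pool.submit(_tracked, part))
        else:
            results.append(part())          # no idle worker: run inline
    results.extend(f.result() for f in futures)
    return results

with ThreadPoolExecutor(max_workers=N_WORKERS) as pool:
    out = run_parts(pool, [lambda i=i: i * i for i in range(8)])
print(sorted(out))  # → [0, 1, 4, 9, 16, 25, 36, 49]
```

Note the check-then-submit here is itself a racy snapshot, which is exactly White_Flame's caveat: a single observation of the busy count can be stale by the time the part is dispatched, so the worst case is just one part running synchronously that could have been offloaded.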
17:17:26
rendar
the lisp parser uses a stack right? e.g. (+ (+ 2 3) (+ 5 6)) it reads first +, and pushes that to a stack, it evals 2+3 and 5+6 and then eval the first + bringing those results from the stack, right?
17:20:32
_death
read and eval are separate.. an interpreter would first read the whole form, and then evaluate it
17:25:15
_death
it depends on the lisp dialect.. in Common Lisp the evaluation order is left to right, and the + is not evaluated at all
17:26:16
jackdaniel
(+ (+ 2 3) (+ 5 6)) is equivalent to (funcall #'+ (funcall #'+ 2 3) (funcall #'+ 5 6)) ; so in some metasense it may be considered evaluated :)
17:27:01
rendar
i mean, you must pass whatever (b) and (c) return to a, and in doing that, you must get (b) and (c)
17:27:33
jackdaniel
application - take a function and arguments and call the function with said arguments
17:29:43
jackdaniel
2022 websites are a pile of dynamically changed content with cookie popups, in other words - garbage
17:30:30
rendar
jackdaniel, at least, you don't have to do left and right with the head, like you're watching a tennis match
17:33:13
jackdaniel
most notably it embraces techniques from both scheme and common lisp, so in some sense it is better than studying a single language
17:34:07
rendar
i'm writing a little lisp interpreter in python for learning purposes, and to compute (+ (+ 2 3) (+ 5 6)) i need a stack or something where to put the first + operator, compute 2+3 and 5+6, then compute the a+b with those results..
17:37:11
_death
there's also structure and interpretation of computer programs (SICP).. there you can even see it in the logo (the mutual recursion of eval and apply)
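The point _death makes about read and eval being separate steps, and jackdaniel's point about application, can be shown in a few lines of Python (a toy sketch, not a full Lisp reader: only integers and a `+` operator). No explicit operator stack is needed; `read` builds the whole nested form first, and the host language's call stack does the bookkeeping while `evaluate` recurses.

```python
# read builds the complete nested-list form before any evaluation;
# evaluate then recurses: look up the operator, evaluate the arguments
# left to right, and apply the function to the results.

def tokenize(src):
    return src.replace("(", " ( ").replace(")", " ) ").split()

def read(tokens):
    tok = tokens.pop(0)
    if tok == "(":
        form = []
        while tokens[0] != ")":
            form.append(read(tokens))
        tokens.pop(0)                 # drop the closing ")"
        return form
    return int(tok) if tok.lstrip("+-").isdigit() else tok

ENV = {"+": lambda *args: sum(args)}

def evaluate(form):
    if isinstance(form, list):        # (op arg1 arg2 ...)
        fn = ENV[form[0]]             # the operator names a function
        args = [evaluate(a) for a in form[1:]]  # args, left to right
        return fn(*args)              # apply: call fn with the results
    return form                       # self-evaluating atom

print(evaluate(read(tokenize("(+ (+ 2 3) (+ 5 6))"))))  # → 16
```

This also mirrors the eval/apply mutual recursion on the SICP cover: `evaluate` calls itself on each argument and then "applies" the looked-up function.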
21:55:23
resttime
In SBCL am I correct in thinking that the optimize speed safety etc. are things that create a 'cost' score used to prioritise a VOP (depending on VOP cost) during compile time?
21:57:47
resttime
Also wondering: when defining a vop with DEFINE-VOP, the generator is supplied a number as a cost too, and I dunno how this is calculated since the docstring says that it's estimated
22:05:29
kakuhen
I think the scale is ill-defined. Usually when defining your own VOPs you will set the cost to something like number of assembly instructions you're running, then refine it later if needed
22:11:07
Bike
i don't think the optimize qualities affect the cost? they just change what transformations are run. that was my impression, anyway, i'm not that deep into sbcl
22:12:11
White_Flame
resttime: many of the decisions are relative to each other like (if (> speed size) ...)
22:12:56
Bike
i thought the vop costs were just numbers associated with them, which are used by the generator to determine what to do, but that's after any of the optimize qualities stuff happens
22:13:36
White_Flame
and I just mean comparing the speed/size declarations to each other, the vop costs aren't used in that determination
22:13:52
resttime
Ahhh, so first it's the policies and then generator costs, from the docstring they seem to be :POLICY {:SMALL | :SMALL-SAFE | :FAST | :SAFE | :FAST-SAFE}
22:15:21
resttime
Guess I wonder where those decisions are happening, it's a blackbox to me how the optimize decisions translate to the policy
22:18:46
kakuhen
well, (declare (optimize speed)) will probably use the vop with a :fast-safe policy, if available
22:22:07
resttime
Yeah it actually is exactly that, found the function node-ltn-policy in sbcl/src/compiler/ltn.lisp
0:31:25
asarch
So, if you had a programming language 'x', with basic support for OOP and then you use that basic support to create an "API" to give full OOP feature to that language, that "API" would be a Meta-Object Protocol, right?
0:44:33
reb`
You have a programming language. The MOP defines a set of classes and protocols that represent methods, classes, inheritance ... and you let a developer substitute different classes in order to redefine what a method or class is or how inheritance works.
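A limited analogue of what reb` describes exists in Python: a metaclass is a class whose instances are classes, and substituting a different one changes what "defining a class" means. This is only a narrow slice of a real MOP like CLOS's (which also exposes methods, generic functions, slot access, and inheritance as protocols), and the names below are my own illustration.

```python
# A metaclass is the "class of classes"; replacing it redefines
# class creation itself, which is the MOP idea in miniature.

class LoggingMeta(type):
    created = []                      # record every class defined with it

    def __new__(mcls, name, bases, ns):
        mcls.created.append(name)     # hook into class creation
        return super().__new__(mcls, name, bases, ns)

class Point(metaclass=LoggingMeta):   # developer opts into the new behavior
    pass

class Circle(Point):                  # subclasses inherit the metaclass
    pass

print(LoggingMeta.created)            # → ['Point', 'Circle']
```

The key parallel: the developer substitutes a different class-of-classes rather than patching the language, which is exactly the "substitute different classes to redefine what a class is" move reb` mentions.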
1:20:03
resttime
Could someone explain this behaviour I'm seeing when trying to define a new VOP in SBCL? https://plaster.tymoon.eu/view/3345#3345
1:21:51
resttime
Whether it's a lambda or a function, it doesn't seem to use the VOP I'm expecting, but a funcall of a lambda does
1:23:35
resttime
I'm just trying to add two fixnums together, and don't understand why the addition isn't happening in all three cases
1:43:25
reb`
asarch: No, it's to provide developers with a way to modify the behavior of the language.
1:54:17
Bike
resttime: i am not an expert on sbcl internals, but i'm reasonably confident that you can't just refer to lisp lexical variables in a vop generator?
1:59:40
Bike
well, add-vop seems to be returning the first argument in the first two cases. maybe you have to do something else to ensure that the result of add actually gets into the result register.