libera/#commonlisp - IRC Chatlog
18:35:12
Bike
well if each indexed task can be handled independently you shouldn't really need to lock anything
18:35:29
jeosol
shka: I think I should do the same and move to lparallel. I debugged yesterday and was able to isolate the issue to the parallelization, as the serial model gives the correct results, and I noticed the jumbled input files
18:36:05
jeosol
Bike: There are some shared resources (CLOS objects) - this may be where the issues are
18:36:50
_death
jeosol: there's no need for locking if the inputs and outputs are written in separate files, and magic only depends on the inputs.. a naive approach would be to start a thread for each task and just join them all.. lparallel allows you to limit the number of tasks running at once
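The naive pattern described above might look roughly like this with bordeaux-threads (a hedged sketch, not anyone's actual code; RUN-TASK is a hypothetical per-task function and the task count is arbitrary):

```lisp
;; Sketch of the naive approach: one thread per independent task,
;; then join them all. Assumes Quicklisp is available to load
;; bordeaux-threads; RUN-TASK is a hypothetical per-task function.
(ql:quickload :bordeaux-threads)

(defun run-all-tasks (task-count run-task)
  (let ((threads
          (loop for i from 0 below task-count
                collect (let ((index i))      ; capture a fresh binding per thread
                          (bt:make-thread
                           (lambda () (funcall run-task index)))))))
    ;; Block until every task has finished.
    (mapc #'bt:join-thread threads)
    (length threads)))
```

As long as each task writes only to its own index or file, no locking is needed.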
18:37:15
jeosol
shka: good point, unfortunately not all the way through; for instance, I am taking some parameters from a higher-level object and saving them in a lower object (iteration and solution indices) - I noticed an issue there
18:38:50
jeosol
_death: ok I see, I am doing something similar but have a loop with bt-threads functionality to track the join, specify a batch-job parameter and all. But I should probably just be using a library if it will save me all the headache
18:39:33
jeosol
ultimately, my goal is to do distributed computing, but I have only used swank-crew to run on another box for one kind of computation, and it's ok so far
18:41:29
_death
there is also an lparallel for multiple machines, called lfarm.. I've no experience with it though
18:43:19
jeosol
oh really. that'd be interesting - I checked aws but they were expensive. Someone here referred me to some European option. The plan will be to have some machines with the executable for task B installed, and SBCL running, route the jobs, and get the results back
18:46:54
jeosol
_death, Bike, shka, Josh_2, others: Thanks guys, I appreciate the help. I don't want to take over the channel, but I have gotten good pointers to follow up on - I should probably stay away from managing thread creation, joining, etc. and just use the lparallel APIs
18:49:19
shka
jeosol: sometimes you have to get your hands dirty with BT directly, plus at least sometimes you need to use locks, but yeah, most frequently problems can be solved in lparallel in a few lines of code
18:50:17
jeosol
shka: haha, I think they are dirty enough, lol, I used to use pthreads with C++ code - not an easy experience
18:51:13
jeosol
shka: I handle the thread batching via loops and joining, but I should probably just offload that task and not have to worry much. Yeah, the locks have helped, but I should probably redesign to avoid setting variables in thread calls
18:51:48
jeosol
shka: My C/C++ days were back in graduate school, I have not coded C/C++ since leaving, around 2010
18:52:30
jeosol
shka: thanks. I have a deadline for a conference and need this parallelization part to work to be able to submit the runs
18:54:35
shka
the fun part with lparallel is that you can write (funcall (if parallelp #'lparallel:pmap #'cl:map) nil function input-sequence)
18:55:28
shka
which allows you to use the exact same source code for both parallel and serial execution
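That one-liner could be wrapped up like this (a sketch, assuming lparallel is loadable via Quicklisp; the kernel size of 4 is arbitrary):

```lisp
;; Same source code for serial and parallel execution: dispatch on a flag.
(ql:quickload :lparallel)
(setf lparallel:*kernel* (lparallel:make-kernel 4))  ; worker count is arbitrary

(defun run-tasks (function input-sequence &key parallelp)
  ;; FUNCALL rather than APPLY, since INPUT-SEQUENCE is a single
  ;; sequence argument, not a spreadable argument list.
  (funcall (if parallelp #'lparallel:pmap #'cl:map)
           'vector function input-sequence))
```

Both (run-tasks #'1+ #(1 2 3)) and (run-tasks #'1+ #(1 2 3) :parallelp t) return #(2 3 4), so the same call site serves both modes.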
19:00:33
jeosol
shka: good point. The ability to use the same code for serial and parallel is definitely important. For now, I have a use-parallel-p variable that I use to test both parts. The serial part is trivial because I have a loop that calls a function which chains all the steps I need
19:01:33
jeosol
shka: I definitely need to make my life easier when it comes to executing parallel jobs, so I should consider alternatives
19:03:39
jeosol
shka: by "removing synchronization points ..." you mean avoid writing to shared resource?
19:04:33
jeosol
shka: I agree. I think I will spend some time to redesign so as not to worry about this again
19:05:11
jeosol
I have a part where I save some variables in an object; they get jumbled, some pointing to previous indices.
19:06:01
shka
and if that does not work, you can use lparallel:future and chain which allows you to link execution steps
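A minimal sketch of linking execution steps with lparallel:future and force (the stages here are illustrative stand-ins, not anyone's actual pipeline):

```lisp
;; Chain execution steps: each FUTURE runs on the kernel's worker pool,
;; and FORCE blocks until its value is ready.
(ql:quickload :lparallel)
(setf lparallel:*kernel* (lparallel:make-kernel 2))

(defun staged-f (x)
  (let* ((a (lparallel:future (* x 2)))                    ; stage A in a worker
         (b (lparallel:future (+ 1 (lparallel:force a))))) ; stage B waits on A
    (lparallel:force b)))
```

Here (staged-f 3) computes (* 3 2) in one worker, then (+ 1 6) in another, yielding 7.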
19:09:16
jeosol
shka: the call to F(X) is from an upstream higher level algorithm (optimizer) with F(X_i) doing F(A(X_i)), F(B(X_i)), and F(C(X_i)) - so having the main F(X_i) execute without the issues is better. The higher-level algorithm doesn't know about the CLOS object used to compute F(X_i)
19:09:17
shka
jeosol: real world example https://github.com/sirherrbatka/clusters/blob/a0c565a95d66ba025277fca41ec3b3e4c05b1226/source/k-means/utils.lisp#L67
19:09:54
shka
clusters.utils:pmap calls either cl:map or lparallel:pmap depending on the first argument
19:11:47
shka
https://github.com/sirherrbatka/clusters/blob/a0c565a95d66ba025277fca41ec3b3e4c05b1226/source/common/methods.lisp#L85 perhaps something more interesting
19:12:22
jeosol
noticed compiler type declarations ("the") and other declare statements -- what is the speedup with and without? I know it probably depends
19:13:57
shka
well, i guess perhaps for arrays as well, since maybe compiler will be able to inline memory access
19:14:26
jeosol
my matrix-vector operations are not optimized and use loops, etc. - it was just to check my understanding
19:16:46
jeosol
I normally do computational tasks, but for my task B (the numerical modeling part) I just use a 3rd-party executable. It would take a lot of effort to write a 3D numerical solver and then worry about matrix conditioning, optimizing computations, etc. The 3rd-party exe was written in Fortran
19:21:46
shka
jeosol: check the following (disassemble (lambda (a b) (declare (type double-float a b) (optimize (speed 3) (safety 0))) (* a b)))
19:23:12
Inline
so what does it get you to write the last one as (* (the double-float a) (the double-float b)) ?
19:23:32
shka
jeosol: check the following (disassemble (lambda (a b) (declare (type single-float a b) (optimize (speed 3) (safety 0))) (* a b)))
19:28:03
jeosol
Some neural network training on my desktop didn't finish after several days; another guy said he ran it with CUDA arrays on a GPU in 24 hours - that's a massive speedup
19:31:54
jeosol
I occasionally work with Python for the DL libraries, but it can be a pain in notebooks having to go up and down to re-evaluate some cells, come back down, only to realize some other computation is stale, etc.
19:32:21
Inline
i see no threading for single-floats but when using double floats there is some threading going on
19:34:34
Bike
a double float needs 64 bits to represent, so there's no room for a type tag. so they're boxed if they need to be used in a type sensitive context. ergo, consing.
19:34:52
Bike
a single float is only 32 bits so it fits with a type tag into a 64 bit word just fine.
19:35:08
shka
yet i vividly remember when i stumbled on something float-related in SBCL that worked faster with double-floats than with single floats
19:36:05
Inline
welp, the fastness claim i also skimmed from books recommending bigger types; the machine architectures are then faster because of bus design or so....
19:36:25
jeosol
Bike: I was recently following a talk on Julia where the guy was saying it's faster than Python and many other languages (I don't recall if he mentioned C). But one thing he kept saying is that if you have to box and unbox, you are "dead"
19:38:25
shka
well, you may work fine with double-floats in sbcl, but you are gonna need type declarations for arrays and to use inline quite a bit
19:40:24
jeosol
Btw, on my threading issues mentioned earlier, is passing data from an upstream to a downstream object a bad design or code smell? This is where I tell the lower object what its index is, which is used to create a directory later on
19:42:16
jeosol
as long as I can map the X_i to some folder and just pick up the results, I probably don't need to pass an object (and do a setf on a slot) - this is where things are getting messed up
19:44:34
shka
jeosol: as for index, if that's just iota, well, you can simply do something like (lparallel:pmap nil (lambda (argument index) ...) arguments (alexandria:iota (length arguments)))
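Expanding that snippet into a runnable sketch (HANDLE-ROW is hypothetical; the point is that each task receives its index as an argument instead of having it setf'd onto a shared CLOS object):

```lisp
;; Pair each row with its index by mapping over two sequences at once.
(ql:quickload '(:lparallel :alexandria))
(setf lparallel:*kernel* (lparallel:make-kernel 4))

(defun process-rows (rows handle-row)
  ;; HANDLE-ROW receives (row index); it can build a per-index output
  ;; directory, say, without mutating any shared state.
  (lparallel:pmap 'vector handle-row
                  rows
                  (alexandria:iota (length rows))))
```

Because the index arrives as a plain argument, no thread ever writes it into a shared slot, which sidesteps the jumbled-indices problem.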
19:45:46
Shinmera
shka: Thanks a lot! My finances will stay troublesome as long as I have to fund a team with no income, heh.
19:46:31
shka
Shinmera: btw, did you paint this calendar using your drawing app whose name i can't remember right now?
19:47:11
jeosol
shka: technically, the index corresponds to rows of a larger matrix (num_solutions, num_dimensions), so I slice off each row (X_i) to a separate thread
19:49:27
shka
you may prefer to return a vector which gets stacked into a matrix as an after step, but at this point this is really just a stylistic choice
19:51:24
jeosol
shka: thanks buddy. I really appreciate the pointers and suggestions. I will try them
20:14:16
jmercouris
Shinmera: let's say I invoke it on the document root, and I want to avoid getting script tags
20:43:57
Guest82
does anyone know how to print to the repl while using a controller (read: web.lisp) in caveman2? phantomics
20:45:29
Guest82
I am too much of a noob to even understand what inferior lisp means... I'm using sublime text with a package called sly
20:49:17
Guest82
jmercouris I'm getting errors, package doesn't exist; tried doing (ql:quickload "swank") but it still didn't work
20:51:04
Guest82
oh, meaning doing something like this (format SWANK::*CURRENT-STANDARD-OUTPUT* "hello")
20:53:34
Guest82
pve not sure what that means..... I started the server in the repl... but the repl is probably running as a bash process and being called from the editor sublime text
20:55:49
Guest82
@pve Is there a way to see all the variables in the current script? I tried (inspect *readtable*) but didn't understand the output so much
20:56:49
Guest82
I feel like there's a bit of a jump between starting to program in lisp and understanding how to deal with all these things...
20:57:01
pve
Guest82: Sorry I'm not familiar with caveman2, but you could at least print to a file if you can't see standard-output
20:58:54
pve
Guest82: (with-open-file (*out* "debug.log" :direction :output :if-exists :append :if-does-not-exist :create) (print "Hello!" *out*))
21:02:31
lisp123_
Guest82: "I feel like there's a bit of a jump between starting to program in lisp and understanding how to deal with all these things..." --> I won't lie, it will take a bit of time, but the benefits will be great down the track
21:04:45
pve
Guest82: it may be that running the thing from within sublime is complicating things too much if you are unable to see any output
21:05:14
Guest82
lisp123_ I find macros are the solution to my problem of fighting languages to be able to abstract things, but I feel like there should be a smaller barrier to entry than learning lisp and emacs at the same time
21:05:44
pve
Guest82: You could try starting it manually from the shell instead. Then you should see the debugging output at least.
21:05:56
Guest82
pve I hear, I understand everything is easier with emacs, but I don't know how to use emacs and I feel it's too much to learn both in one shot
21:07:11
pve
Guest82: remember to familiarize yourself with the various command line parameters like --eval and --load to get maximum convenience
21:07:29
lisp123_
Guest82: Try Portacle. I don't disagree with you, the combo of learning lisp and emacs makes it a bit harder, but both tools have a lot of benefits too - so you will get a lot of success down the track :)
21:08:07
lisp123_
Guest82: Sorry to beat a dead horse, but I _highly_ recommend emacs for any sort of lisp programming
21:08:22
Guest82
lisp123_ yeah, from what I've read it seems it's a great combination. I tried portacle but they didn't support the latest macOS
21:08:54
pve
Guest82: and if working from the shell, do define convenience functions or symbol-macros to reload your project quickly
21:11:35
pve
Guest82: silly example: (define-symbol-macro rr (asdf:load-system "myapp")) will make "rr" reload your stuff after you've edited it in sublime
21:24:25
Guest82
lisp123_ what's the easiest tutorial about dealing with lisp on emacs? I feel like the tutorial here is just text editing, and then getting to lisp modes and other things is another huge jump
21:25:35
pjb
Guest82: But for CL, it's http://cliki.net/Getting+Started and http://cliki.net/Online+Tutorial
21:43:21
pve
Guest82: I like Marco Baringer's SLIME video (https://www.youtube.com/watch?v=NUpAvqa5hQw), you can skip to around the 10:00 mark.
21:46:24
Guest82
Warning (initialization): An error occurred while loading ‘/Users/danielnussenbaum/.emacs.d/init.el’:
21:46:25
Guest82
File is missing: Opening input file, No such file or directory, /Users/danielnussenbaum/zsh:1: command not found: rosconfig
21:50:04
pve
Guest82: I'm rewatching the slime video now, and it's incredible how useful it still is, despite being a little old
22:03:27
kakuhen
the nice part about quicklisp is not only that it's widely used nowadays, but also that it lets you quickly test your projects on other cl implementations, assuming you added it to your implementation's init file (i.e. your .sbclrc, .eclrc, and so on)
4:39:49
hayley
So, The Art of the Metaobject Protocol turns 30...today or perhaps yesterday depending on time zone.