freenode/#clasp - IRC Chatlog
17:19:47
Bike
i assume this is supposed to be an actual if, but actually it'll always do the make-pathname thing
17:24:16
drmeister
My problem with the backtrace is the following. Generic function dispatch is handled by (1) a special dispatch function interpreter and (2) a JIT compiler. Neither of them provides good backtrace information. So I need to get clasp running well enough after the stack gets overrun to generate a backtrace.
17:31:54
yitzi
compute-applicable-methods appears to return expected result for both :prefix and :root
17:41:43
Bike
okay, pushed what i think is a fix. since i don't have everything in jupyter up i only tested with a toy example i made.
19:40:51
drmeister
I've been trying to implement a stack guard that would allow us to continue running the debugger if we blow up the stack.
19:41:36
drmeister
I'm starting to think it's a bad idea. It's very complicated and may not give us a robust way of debugging stack overflows.
19:42:22
drmeister
I'm going to write an lldb Python extension to dump backtraces from lldb and from core dumps.
19:43:47
karlosz
i'm going to send a pull request to Concrete-Syntax-Tree which will require some slight changes to clasp code, because clasp uses some of the internals of CST
19:44:16
karlosz
Concrete-Syntax-Tree is handled the same way as SICL with respect to how it integrates into clasp, right?
19:45:01
Bike
not our own fork, but that shouldn't be relevant for "oh no, clasp is broken" purposes
19:46:30
drmeister
I started writing this lldb Python extension and I'll continue it. We know the layout of every exposed object - so the lldb extension can interrogate memory.
19:48:03
karlosz
Bike: it's just a fix to reduce computing and consing the exact same 30ish grammar objects on every lambda-list, with some precomputed pruning. not a full parser generator yet.
20:16:08
karlosz
could we merge https://github.com/clasp-developers/clasp/pull/1000 ? i find myself waiting on that big file at the end of every build
22:30:48
karlosz
asdf takes about 2 minutes to compile - flame-graphing the middle of compilation for 30s is informative about what the compiler is doing outside of bitcode writing: ocf.io/~karlos/asdf-30s.svg
22:41:07
drmeister
Bike may be afk - the interpret-ast is an AST interpreter that Bike wrote to interpret some forms rather than compiling and executing them.
22:54:03
yitzi
I am pushing the docker image now. Might be a bit though. I'm on a cable modem so upload is slow.
22:56:02
yitzi
Once that is done I'll generate another version with the tag :nglview for the nglview PR.
23:10:07
karlosz
when *use-ast-interpreter* and *use-cst-eval* are both T, asdf takes 100s to 120s to compile. with *use-ast-interpreter* = NIL and *use-cst-eval* = T, it drops to 70s, matching the flame graph. with both NIL, it takes 85s
23:17:49
Bike
which means you have all the time from using the compiler plus a bunch of pointless analysis time
23:18:42
karlosz
the flame graphs don't point to interpret-ast calling out to the compiler proper though
23:19:19
karlosz
another suspicion i had: the ast-interpreter was enabled around may 2019, back when i was fixing the inlining stuff to be much faster
23:19:43
Bike
the ast interpreter doesn't call the compiler, it signals out and then cst-eval calls the compiler
23:20:58
karlosz
can you tell if that's what's happening here? to my unfamiliar eye it does look like actually interpreter processing here: https://www.ocf.berkeley.edu/~karlos/asdf-30s.svg
23:26:55
yitzi
::notify drmeister yitzchak/cando-clj:latest now has latest common-lisp-jupyter on JupyterLab
23:27:17
karlosz
the time to interpret is actually much greater than the time to just compile and execute now
23:27:36
Bike
a problem here is that the cannot-interpret check is done with map-ast-depth-first-preorder so that it works with closures, but mapping the ast is expensive. you can see all the time it spends in the GC expanding hash tables
23:28:47
Colleen
drmeister: yitzi said 1 minute, 52 seconds ago: yitzchak/cando-clj:latest now has latest common-lisp-jupyter on JupyterLab
23:29:59
karlosz
but since the full compiler isn't grossly inflated by inlining anymore it's faster than ast-interpreting
23:30:27
Bike
the ast interpreter is basically intended for simple things like some let bindings and function calls, like you'd often get in an eval-when
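For readers following along, here is a hedged sketch of the kind of simple top-level form the AST interpreter is aimed at - a few let bindings and function calls inside an eval-when. The body here is purely illustrative, not from clasp:

```lisp
;; The sort of form the AST interpreter targets: simple bindings
;; and calls, evaluated once during compilation, rather than a
;; full function body worth compiling. (Names are made up.)
(eval-when (:compile-toplevel :load-toplevel :execute)
  (let ((pkg (find-package "ASDF")))
    (when pkg
      (pushnew :asdf *features*))))
```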
23:30:35
karlosz
so when you do (setq clasp-cleavir::*use-ast-interpreter* nil) and compile asdf, you'll see a big drop in compile time
23:32:14
Bike
i mean, stupid thing number one is that it does cst->ast, and then if it fails the compiler does cst->ast again
23:33:23
Bike
that could be changed on cleavir's end but maybe clasp's hash tables are especially bad, i don't know
23:33:56
karlosz
so that's why im kinda confused how the compiler is so much faster than the interpreter
23:34:54
karlosz
i mean, the interpreter should always be faster than a compiler to deal with once evaled forms
23:36:51
Bike
because it maps when it hits a function-ast, and with cst->ast you have a shitload of those
23:37:04
Bike
and it's probably redundant, like it'll map through the inner function-asts, even though it doesn't have to
23:39:33
Bike
or the whole ast could just be mapped once at the beginning, but then you lose the ability to interpret some but compile an inner function
23:42:55
karlosz
i think the big problem is as you said with LETs and all these interpolable functions
23:44:13
Bike
ok, i wrote map-local-ast-depth-first-preorder cos it's a one line change, let me try using it in the interpreter
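A minimal sketch of the idea behind a "local" preorder map, assuming hypothetical accessors ast-children and function-ast-p (the real Cleavir function may differ in detail): visit every node but stop at inner FUNCTION-ASTs, so the cannot-interpret check isn't repeated over nested function bodies.

```lisp
;; Preorder walk that does not descend into inner FUNCTION-ASTs.
;; ast-children / function-ast-p are illustrative names standing in
;; for the real AST accessors.
(defun map-local-ast-preorder (fn ast)
  (funcall fn ast)
  (dolist (child (ast-children ast))
    (if (function-ast-p child)
        (funcall fn child)              ; visit the boundary node,
        (map-local-ast-preorder fn child)))) ; but skip its body
```

The full walk revisits every node under every nested function; stopping at function boundaries makes the per-form check roughly proportional to the local code size instead.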
23:44:47
karlosz
yeah, you should be able to just test it in a running clasp by using compile-file-serial on asdf
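The test described above might look like this at a clasp REPL - the variable name follows the earlier messages, and the asdf pathname is a placeholder, not the actual build path:

```lisp
;; Turn the AST interpreter off, then recompile asdf serially and
;; time the difference. The file path is illustrative.
(setq clasp-cleavir::*use-ast-interpreter* nil)
(time (compile-file-serial "asdf.lisp"))
```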
0:45:36
yitzi
::notify drmeister cl-nglview on Jupyter Notebook available at dockerhub tag yitzchak/cando-clj:nglview
0:45:51
Colleen
drmeister: yitzi said 15 seconds ago: cl-nglview on Jupyter Notebook available at dockerhub tag yitzchak/cando-clj:nglview
0:46:42
yitzi
Not all the tabs on nglview work yet. The representation tab doesn't. The rest work *mostly*
0:50:46
yitzi
yitzchak/cando-clj:nglview will track https://github.com/clasp-developers/cl-nglview/pull/2
0:53:41
drmeister
I'd like to build a docker that has exactly what you have in yitzchak/cando-clj:nglview
0:54:32
drmeister
Then I could set up a development environment to test development of widgets to build a user interface.
0:56:31
drmeister
Or I could take your Dockerfile and add the extra stuff to it and give it back to you.
0:58:10
drmeister
Or, create a github repo for the docker file and I can fork it and submit a pull request.
1:35:39
drmeister
If I build it and then add an extra step near the end of the Dockerfile then it picks up building from the change - right?
1:44:18
drmeister
Right - cando is building right now. Then I'll layer on running cando once - to build all the quicklisp stuff and to fetch and build slime once.
1:44:38
drmeister
Then I can run the docker container and connect in to the jupyter server and a slime server.
1:45:17
drmeister
From within cando you can use (start-swank [port=4005]) and start up a swank server.
2:30:27
kpoeck
Looking at the flamegraph, I see only C++ frames - no Lisp - and there only cc_unwind and __cxa_throw
2:36:06
drmeister
What we want is for flamegraphs to work correctly on linux - we still struggle with that.
2:43:17
kpoeck
If you are on a mac, how long does (time (compile-file "sys:kernel;lsp;generated-encodings.lsp")) take?
2:45:31
drmeister
But I just did (time (compile-file "sys:kernel;lsp;generated-encodings.lsp")) on my linux machine and I get...
2:45:50
drmeister
Time real(3.901 secs) run(3.901 secs) consed(445313840 bytes) interps(249) unwinds(69718)
2:51:36
drmeister
kpoeck: cracauer is offline right now - but I'm updating him in a google hangouts chat.
2:55:40
kpoeck
Now I dumped the information to a file. I read that file at run time and put all the info in a variable (hash tables of hash tables)
2:56:33
drmeister
Wait - on the buildbot it will be doing compile-file-serial. I'm using compile-file right now.
2:57:23
kpoeck
The variable is then dumped by the compiler with (setq *encoding-cache* #.*encoding-cache*)
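The trick here is read-time evaluation: #. evaluates the expression while the file is being read, so the compiler sees the finished table as a literal object and dumps it into the compiled file. A minimal sketch, where load-encoding-info is a hypothetical helper and we assume the implementation can dump hash tables as compile-file literals (as clasp apparently can here):

```lisp
;; Build the cache while the file is being compiled...
(eval-when (:compile-toplevel :execute)
  (defvar *encoding-cache* (load-encoding-info "encodings.dat")))

;; ...then #. splices the already-built table in as a literal, so
;; loading the compiled file restores the table directly instead of
;; re-running the code that builds it.
(setq *encoding-cache* #.*encoding-cache*)
```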
4:00:39
karlosz
kpoeck: yeah, i asked Bike to merge your change because i was tired of seeing that file take so much time
4:01:26
karlosz
it also started hanging for me - i simply kill the process and rebuild and it seems to go through for some reason