freenode/#clasp - IRC Chatlog
8:58:53
scymtym
I think they misrepresent "stackful coroutines" (in their terminology) by saying you either have to copy the entire stack or you need a spaghetti stack. Direct-style implementations of delimited continuations can copy as little as a single frame.
13:36:04
drmeister
DISASSEMBLE works - but I'm having trouble figuring out how to relocate symbols from the cclasp-boehm executable.
13:38:52
drmeister
This means the difference between where 'nm' says a symbol starts in the cclasp-boehm executable and where it actually starts in memory is +0xF4F4000 bytes
13:41:48
drmeister
I think it's every symbol - so I could look for one symbol, calculate the delta and apply it to the rest.
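A minimal sketch of the relocation being described, assuming one anchor symbol whose nm address is known; the symbol name and the nm address below are hypothetical placeholders, not Clasp internals:

    #include <cstdint>
    #include <cstdio>

    extern "C" void anchor_symbol() {}   // hypothetical known symbol

    int main() {
      // Address `nm` reports for anchor_symbol in the on-disk executable
      // (placeholder value).
      const uintptr_t nm_address = 0x100001000;
      // Where the same symbol actually landed in this process.
      const uintptr_t runtime_address =
          reinterpret_cast<uintptr_t>(&anchor_symbol);
      // One delta (slide) applies to every symbol in the image.
      const intptr_t slide = runtime_address - nm_address;
      printf("slide = %+ld bytes\n", (long)slide);
      // Any other nm address relocates as: runtime = nm_addr + slide.
      return 0;
    }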
13:58:58
drmeister
There are five different scenarios that need special handling to resolve symbols: macOS/Linux crossed with executable/library, plus JITted code.
14:01:52
drmeister
I was able to add annotations to the DISASSEMBLE output for anything that references the literal table - I stick the value in the output. So NIL, T, symbols, numbers, conses, etc.
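A toy sketch of that kind of annotation, with a stand-in literal table (the names and layout are illustrative, not Clasp's real structures): if a decoded operand address falls inside the table, splice the stored value into the output line.

    #include <cstdint>
    #include <cstdio>

    // Stand-in literal table; the real one holds Lisp objects.
    static const char* literal_table[] = {"NIL", "T", "FOO", "42"};
    static const uintptr_t table_base =
        reinterpret_cast<uintptr_t>(&literal_table[0]);
    static const uintptr_t table_end =
        reinterpret_cast<uintptr_t>(&literal_table[4]);

    // If an operand address points into the table, return the value to
    // append to the disassembly line, else nullptr.
    static const char* annotate(uintptr_t operand) {
      if (operand >= table_base && operand < table_end)
        return literal_table[(operand - table_base) / sizeof(const char*)];
      return nullptr;
    }

    int main() {
      uintptr_t operand = reinterpret_cast<uintptr_t>(&literal_table[1]);
      if (const char* value = annotate(operand))
        printf("movq 0x%lx, %%rax  ;; literal: %s\n",
               (unsigned long)operand, value);
      return 0;
    }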
14:09:12
drmeister
There is an annoyance - symbols for intrinsics aren't resolving because they all go through a jump table. Maybe it's because I'm using (useless) PIC code?
14:09:57
drmeister
0x10eaaf6fc <# 66+268> callq "0x111561cd6{_CLASP-CTOR+534}" ## symbol stub for: 0x111561cd6{_CLASP-CTOR+534}
14:11:20
drmeister
disassemble -s 0x111561cd6 --> cclasp-boehm-image.fasl`__stack_chk_fail: 0x111561cd6 <+0>: jmpq *0x39ae94(%rip) ; (void *)0x00007fff67b46a2b: __stack_chk_fail
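For reference, the stub shown above is the standard x86-64 indirect-jump pattern, and following it by hand is what the debugger is doing when it prints the real target. A hedged sketch of that decoding (not Clasp code):

    #include <cstdint>
    #include <cstring>

    // If `addr` starts with `ff 25 <disp32>` (jmpq *disp32(%rip)), the
    // real target sits in the pointer slot the stub loads; return it.
    // Otherwise return `addr` unchanged.
    uintptr_t resolve_stub(uintptr_t addr) {
      const uint8_t* code = reinterpret_cast<const uint8_t*>(addr);
      if (code[0] == 0xFF && code[1] == 0x25) {
        int32_t disp;
        std::memcpy(&disp, code + 2, sizeof disp);
        // RIP-relative displacement is taken from the *next* instruction,
        // i.e. addr + 6 for this 6-byte encoding.
        const uintptr_t slot = addr + 6 + disp;
        uintptr_t target;
        std::memcpy(&target, reinterpret_cast<const void*>(slot),
                    sizeof target);
        return target;
      }
      return addr;
    }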
16:51:27
usha
drmeister, I have been trying to run cracauer/cando image that I pulled today - When I run load-pdb, the cando kernel crashes - it turns black and gets stuck
16:52:59
drmeister
usha: Hi - you pulled the one from today and load-pdb crashes - hang on - I'll give it a try.
16:58:20
drmeister
The fork server complicates our lives but speeds up startup. It's best to shut the docker image down and start it up again.
17:05:53
drmeister
usha: I think there is still a problem with backtraces. Simple errors will cause a backtrace to be generated, and if the backtrace generation fails, everything goes down. I've been working on backtraces for a week now.
17:06:19
drmeister
I have to keep going back and forth between linux and macOS to get things to work properly.
17:07:14
drmeister
Give me a couple of minutes and I'll run some tests. This is starting to drive me nuts.
17:16:15
drmeister
usha: Yeah - when trying to generate a backtrace in the docker image within jupyterlab it locks up.
17:16:30
drmeister
Give me a bit of time to crawl in there and try and figure out why that particular case is failing.
17:17:20
usha
I tried another example this time and it went through the load-pdb but seems to get stuck randomly
17:17:55
drmeister
I don't think it's random - I think if you do anything that causes an error it tries to generate a backtrace and locks up.
17:18:49
drmeister
I'm doing some extremely low level and tricky coding to get backtraces with arguments in a compiled language while still maintaining good performance.
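For contrast, the portable baseline is easy; recovering the *arguments* is the hard part. A minimal sketch using the standard backtrace()/backtrace_symbols() API (not Clasp's mechanism), which yields return addresses and best-effort names only:

    #include <execinfo.h>
    #include <cstdio>
    #include <cstdlib>

    void print_backtrace() {
      void* frames[64];
      int n = backtrace(frames, 64);                // raw return addresses
      char** names = backtrace_symbols(frames, n);  // best-effort symbols
      for (int i = 0; i < n; ++i)
        printf("%2d: %s\n", i, names ? names[i] : "?");
      free(names);                                  // malloc'd by the call
      // Note: no argument values here - that is the part that requires
      // the low-level frame walking described above.
    }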
17:19:20
drmeister
I had five different situations that I needed to handle, and now we are seeing that it fails within jupyterlab.
17:20:17
beach
I suspect that the tricky part is due to things other than the fact that the code is compiled.
17:37:18
drmeister
I think I have problems with how I'm switching off/on the garbage collector. This is going to take a few hours.
17:39:09
drmeister
I understand. But if it locks up when you make the slightest error - like typing the wrong command, passing an incorrect argument to a command, or trying to read a file with the wrong path - then we need to fix that first before we can diagnose any other problems.
17:39:52
usha
oh ok... to start with, I am running notebooks that I already have output for... so I'm doing nothing new
17:40:39
drmeister
For instance: We haven't touched load-pdb in weeks. There is no reason that should lock up. It worked for me when I loaded a pdb a few minutes ago. You said load-pdb locked up once and not another time. This is all consistent with the problem being the backtrace that is generated when one invokes load-pdb incorrectly.
17:42:16
drmeister
It's also that when there is such a huge flaw in the runtime, I have to fix it first before I touch anything else.
17:43:03
drmeister
It's like with a car where there is smoke pouring out of the engine - I need to look at that first before I try to fix the left signal light that is out.
17:47:12
drmeister
I don't want to overstate the problem. I've been working on backtraces for the past week on linux, macOS, jitted/not-jitted, executables/libraries. I thought they were working - but the jupyterlab/docker case (the hardest to debug) is obviously not working (sigh)
17:52:22
usha
you were right drmeister... one of the pdb files was missing. I did not realize it since I was just running Shiho
18:32:20
drmeister
Yeah - I'm working on the backtraces again - I suspect it was something involving the garbage collector. I need to rearrange a few things.
18:37:24
drmeister
I dunno - but it's hanging when trying to print arguments. My spidey sense started tingling because I broke my rule #0 of not putting GC pointers in C++ space.
18:38:09
drmeister
So I'm removing the T_sp pointers and I'm going to recover the arguments from the stack once I'm safely back in Common Lisp land.
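An illustrative sketch of the hazard behind rule #0 and the fix being described; every name here is hypothetical, not Clasp's actual types:

    #include <cstdint>

    struct LispObject;                 // hypothetical GC-managed type

    // BAD: a raw GC-managed pointer cached in plain C++ storage. The
    // collector cannot see or update it, so after a collection it may
    // point at a moved or reclaimed object.
    struct BadEntry {
      LispObject* cached_argument;
    };

    // Safer shape, matching the plan above: keep only a frame address in
    // C++ land and recover the arguments from the stack later, once back
    // in Common Lisp where the GC can see everything.
    struct GoodEntry {
      uintptr_t frame_pointer;
    };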
18:39:41
drmeister
We've reached the point where the C++ compilation time is on par with the Common Lisp compilation time.
18:40:12
drmeister
I just want to stop working on backtraces. Please let me stop working on backtraces.