freenode/#lisp - IRC Chatlog
Search
6:54:13
shrdlu68
"But CL implementations! What big binaries you produce", said the Little Red Riding Hood.
7:00:27
Jachy
beach: was there much overlap in speakers between ELS and <Programming> the previous time they were co-located? I assume lots of attendees of ELS would go to <Programming> while they're there, did the reverse happen too?
7:05:26
jackdaniel
Jachy: unfortunately (imho) things were happening at the same time, so nobody would want to miss ELS to attend programming
7:18:39
aeth
no-defun-allowed: iirc it depends on the format and the direction because it's just a wrapper over other things.
7:22:50
|3b|
opticl has some options for fast, but like most pixel manipulation libs is easy to use slowly :)
7:23:54
|3b|
a large part of png reading time was zlib last i checked, so need to rewrite another lib to improve it much :(
7:25:30
|3b|
ACTION wants a version that can do streaming reads, so i can just start a bunch of threads and ignore some of the speed problems :p
7:26:13
|3b|
while limiting memory usage if i happen to want to load a bunch of 4kx4k images at once or something
7:28:36
jackdaniel
lucky for us it is an opensource library which we can improve to read all of them! :)
7:28:37
no-defun-allowed
it feels slightly faster with a custom ppm writer but i'm Totally Not Biased
7:38:29
no-defun-allowed
i've tried some type definitions and i've got sbcl's (speed 3) down to 3 complaints
7:39:42
no-defun-allowed
i haven't updated cl-vep recently with new stuff but it lives [here](https://gitlab.com/Theemacsshibe/cl-vep)
7:43:23
aeth
asarch: Either you have a sequence of length 9 and by convention treat it as a 3x3 matrix or you have a (3 3) array
7:47:14
aeth
asarch: If you're using SBCL for a lot of 2D array code, SBCL 1.4.10 and later has fixes for bounds checking on 2D arrays
7:48:40
|3b|
use :report :graph (or :type :graph if you call report directly instead of just with-profiling)
7:49:31
|3b|
if you end up using it much, probably also want to install and use the slime slime-sprof contrib, easier to navigate, but harder to paste for other people to look at :)
7:50:31
|3b|
(and you probably want other people to look at it when you start out, not obvious what the various internal things indicate about your code)
7:51:20
no-defun-allowed
i'm fairly sure it's gonna be the image read/write though, i've tuned the effects very carefully
7:53:41
|3b|
also, if you are starting threads inside with-profiling, make sure you wait for them to finish before with-profiling exits
7:55:18
|3b|
ACTION usually tries to profile 5-10sec of work, so process a few images, or do one in a loop or whatever
8:00:11
no-defun-allowed
by the looks of things there are still some array references that didn't get inlined
8:09:01
|3b|
in each block, the line with #s under 'total.' (and name shifted to the left a bit) is the function being measured in that block, above it shows what called it with how much to the left
8:10:12
|3b|
and in first block you can see same info, and that mask-blend is also making full calls to +,-,*
8:12:34
no-defun-allowed
every time i run a parallel-mapvideo, it complains the video being worked on is invalid and crashes
8:14:14
no-defun-allowed
also it seems i should have taken a background picture for every clip i masked
8:16:19
shrdlu68
dim: Getting "Failed to connect to pgsql...Can't resolve foreign symbol "SSL_load_error_strings""
8:21:21
no-defun-allowed
also i have to do a beach and set the heap size relatively high to get the worker->writer cache working well enough
8:25:12
|3b|
hmm, actually i guess i've been building with 16gb heap out of 32gb ram lately, i should raise that
8:34:35
shrdlu68
dim: Running the dimitri/pgloader container fails with "The alien function "CRYPTO_num_locks" is undefined".
8:34:37
no-defun-allowed
i do need to work on the cache though, and make it use a "maximum pixel count" instead of "maximum frame count"
8:35:27
no-defun-allowed
i also need to work on cl-decentralise but that's not doing any school projects for me
8:37:12
makomo
shrdlu68: i remember having to install an older version of SSL (1.1.0, as opposed to 1.1.1, i think)
8:37:34
makomo
and then there was the problem of cl-plus-ssl not having the correct name of the shared lib in its CFFI defs
8:38:02
shrdlu68
makomo: ..this is in the docker image, shouldn't that bundle up everything it needs?
8:40:02
svillemot
shrdlu68: you may be interested in the patch I wrote to make cl+ssl work with OpenSSL 1.1 https://sources.debian.org/src/cl-plus-ssl/20180328-3/debian/patches/openssl-1.1.patch/
8:42:32
shrdlu68
I'll maybe look at it later, right now I need a pgloader-like thing to complete some task.
8:48:29
Xof
flip214: this? <http://christophe.rhodes.io/notes/blog/posts/2018/algorithms_and_data_structures_term2/> (not complete, haven't actually done the exam analysis, argh)
8:48:30
minion
Xof, memo from flip214: please send me the link with the full results (whole year) of your 'Using Lisp-based pseudocode to probe student understanding' Moodle experiments; ISTR that you posted that in June or so. I'd like to forward that. Thanks a lot!
10:37:10
xificurC
I have an unknown number of lines I'm reading. I want to work through the entries on-the-fly, not accumulating everything first, because the volume will not fit in RAM. So there's no way to tell the final amount. A line needs to be parsed (json) and processed. The processing is fast, the parsing is slow. I want to parallelize this so I picked up lparallel.
10:37:10
xificurC
I cannot find a solution that would ensure the tasks pushing onto the queue will really all be processed. This is what I've come up with but it still doesn't ensure all messages get processed. http://sprunge.us/1dK2Yw
10:40:07
|3b|
ACTION would probably have the worker loop waiting on the queue, and exit loop when it gets an :exit message (or anything distinguishable from input)
10:40:56
xificurC
|3b|: I thought about that, but who should send that message? How can I make sure that message gets on the queue last?
10:42:47
xificurC
does lparallel.queue guarantee that (submit-task channel (lambda () (push-queue :first))) (push-queue :second) (eq :first (pop-queue)) (eq :second (pop-queue)) ?
10:43:06
|3b|
ok, with submit-task, i think reader needs to count tasks and loop on (Receive-result) until it gets that many
10:43:43
phoe
if you have eight processors, then you'll need to (finally (loop repeat 8 do (push-queue :exit)))
10:44:43
|3b|
with submit-task you don't need a specific :exit task, since each line is a separate task, you just need to wait for all of them to finish
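A minimal sketch of the count-and-drain pattern |3b| describes; STREAM and PARSE-LINE are hypothetical stand-ins for the actual input and parser:

```lisp
;; Sketch of the submit/receive-count pattern discussed above.
;; Note: this submits every line before draining, so it does not
;; bound memory on huge inputs.
(defun process-stream (stream)
  (let ((channel (lparallel:make-channel))
        (count 0))
    ;; Submit one task per line, counting as we go.
    (loop for line = (read-line stream nil)
          while line
          do (let ((line line))          ; fresh binding per iteration
               (lparallel:submit-task channel
                                      (lambda () (parse-line line)))
               (incf count)))
    ;; Drain exactly COUNT results so no task is lost.
    (loop repeat count
          collect (lparallel:receive-result channel))))
```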
10:45:40
|3b|
in either case, if lines are independent and you want more than 1 parser task, you can just add a serial # and sort results at end or something
10:48:08
phoe
in that case program your code to handle the error somehow and return a result anyway.
10:48:30
phoe
and then when you count the results, you can catch the "error happened" results and filter them out.
10:49:33
xificurC
(let ((channel (make-channel)) (x 0)) (loop repeat 1000 do (submit-task channel (lambda () (incf x) :done))) (loop repeat 1000 do (receive-result channel)) x) --> anything between 960 and 1000
10:49:37
|3b|
return success/fail and count in the receive results and you don't have to use atomics
10:50:30
|3b|
it has to read, add and write. another thread could read before the write, and then overwrite the first write
10:50:57
|3b|
most threaded lisps have an atomic incf that only works on specific types of data (since otherwise it would need locks)
10:53:43
xificurC
yeah, so to sum up the only way to make sure is to count the lines that were processed and receive that many results. Counting needs to be done either atomically or with some locking mechanism. So I need to look at sbcl if it provides an atomic increment or go for a lock
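On SBCL specifically, the atomic increment mentioned above might look like this; COUNTER is a hypothetical struct, and sb-ext:atomic-incf requires a slot of type sb-ext:word:

```lisp
;; SBCL-specific sketch of a lock-free shared counter.
(defstruct (counter (:constructor make-counter ()))
  (n 0 :type sb-ext:word))   ; atomic-incf needs a word-typed slot

(defvar *lines-submitted* (make-counter))

;; Any thread can bump the count without a lock:
(sb-ext:atomic-incf (counter-n *lines-submitted*))
```

A portable alternative is a plain integer guarded by a bordeaux-threads lock, at the cost of lock overhead per increment.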
10:57:27
xificurC
I'm reading lines from a stream. I can't wait for the stream to be read completely, so I have to fire up the receiver loop before there's anything to receive
10:57:29
|3b|
actually, reading closer, you were using both strategies... you submit a task to add to the queue
10:58:45
xificurC
|3b|: I was using a queue inside because I need to start the receiver loop before reading all lines
10:59:30
shka_
ok, so if it was a file, i would recommend cl-ds line-by-line -> process -> in-parallel pipe
10:59:43
|3b|
(unless you want stats while reading or something, but even then just counting results is probably fast enough to do between lines)
11:00:57
shka_
xificurC: you will need thread to read stream, lparallel:future that will do processing, and lparallel.queue with fixed size to get the result
11:01:02
xificurC
I can't have the 1 loop that is reading the stream both submit and receive because then I'm blocking and processing line by line again
11:01:11
|3b|
(like printing progress report, or writing output if parsed data won't fit in ram or whatever)
11:02:28
shka_
so basically do the following -> on thread A: read line, create future, push it into fixed size queue
11:04:12
shka_
basically two independent loops, in two threads, processing on the lparallel worker, and threads are talking to each other with queue
11:04:14
xificurC
shka_: I'm ok with the processing part, not ok how to ensure all lines get processed
11:05:24
shka_
or you can handle error in the worker, and return tuple of results, where first value is processing result, and the second is the processing status
11:06:07
shka_
don't have time right now to demonstrate how this works, but maybe in the evening if you want to
11:06:12
|3b|
looking at code some more, you are probably losing results due to lack of atomics/locks too
11:07:05
shka_
anyway, this approach maintains an upper bound on memory usage, does not require explicit locks on your side and usually works well enough
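A rough sketch of shka_'s two-loop design, assuming hypothetical PARSE-LINE and PROCESS-RESULT functions, bordeaux-threads for the producer thread, and a fixed-capacity queue for backpressure:

```lisp
;; Thread A: read lines, wrap parsing in futures, push into a
;; fixed-capacity queue (push-queue blocks when full, which is
;; what bounds memory use).
;; The calling thread consumes: pop futures and force them.
(defun pipeline (stream)
  (let ((queue (lparallel.queue:make-queue :fixed-capacity 100))
        (done  (gensym "DONE")))       ; sentinel marking end of input
    (bt:make-thread
     (lambda ()
       (loop for line = (read-line stream nil)
             while line
             do (let ((line line))     ; fresh binding per iteration
                  (lparallel.queue:push-queue
                   (lparallel:future (parse-line line)) queue)))
       (lparallel.queue:push-queue done queue)))
    ;; Consumer loop: force each future until the sentinel arrives.
    (loop for item = (lparallel.queue:pop-queue queue)
          until (eq item done)
          do (process-result (lparallel:force item)))))
```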
11:13:11
xificurC
shka_: I understand your solution but don't see how it solves my problem of ensuring all input gets processed. The 2 facts that it's coming from a stream and is of unknown length make the problem hard
11:14:19
|3b|
untested, and assuming add-*log doesn't actually do anything beyond PUSH, adjust as needed
11:19:51
xificurC
if I'm reading correctly this will read all lines and submit them as tasks first, eating up a lot of memory
11:20:00
|3b|
updated again, don't close over loop variables unless you checked spec to know it is safe (and know why it wouldn't be)
11:20:43
|3b|
you can add rate limiting if needed, though you need to merge the 2 loops in that case
11:24:37
xificurC
|3b|: I could move your receiving loop before the submitting one and receive with try-receive-result and a timeout perhaps?
11:25:24
xificurC
so the setfing would take place on the main thread, and when the try-receive-result succeeds it will increment/decrement some counter
11:27:38
xificurC
the result is small, it's just a small report. What takes a lot of memory are the lines and the jsons parsed from them
11:29:11
|3b|
so in that case, need rate limiting, and we either need to merge the read and process loops or run them at same time
11:29:59
|3b|
so i assume your add-slowlog and add-groklog just extract some part of the parsed json to save?
11:30:41
|3b|
ok, need to think about what primitives are easily available (do you care about running on anything besides sbcl?) to decide which
11:31:10
shka_
xificurC: my conclusion is that every line of stream will be read, sent to process, and result of it will be available on the consuming end of the queue
11:31:35
xificurC
I'm not sure I made this clear but the main issue encountered was OOM. When the pipeline is processed as-we-go the GC can clean up some space
11:32:10
|3b|
looks like bordeaux-threads only gives us locks and conditions, so need to either find an atomic portability lib, use sbcl specific, or use locks for rate limiting
11:33:15
xificurC
shka_: what is the behavior of the fixed size queue when it gets full and there's a push?
11:33:15
|3b|
reader loop will fill queue with entire input text if parsing is much slower than reading
11:34:04
|3b|
for first we need to rate-limit submitting tasks, for latter we need to check results regularly
11:37:09
xificurC
it only took me 1 hour here. 1 hungry hour. gotta eat something. Thank you very much for your help
11:45:53
|3b|
if not, i think it needs separate threads to submit and receive, which means need to synchronize count between them :/
11:50:48
shka_
as i said, one producer thread, one consumer thread, lparallel workers and queue between producer and consumer
11:51:20
|3b|
shka_: you mean running your own worker loops instead of the kernel api (submit-task+receive-result)?
11:52:13
shka_
i mean literally this: thread A reads stream, pushes lparallel:futures into the queue
11:53:59
|3b|
yeah, though not quite what i wanted last time i did threaded stuff, so probably need to implement my own thing anyway :(
11:54:36
|3b|
ACTION needs to run some things in specific threads, and make sure some tasks don't block
11:57:25
|3b|
if i'm just using it to start threads that each run a single task indefinitely and i manually schedule small work between them, might as well start my own threads :)
12:07:56
shka_
i want to create pathname based on it, such that result will contain n deepest elements of the original pathname
12:10:35
|3b|
pathname object not a namestring (or a normal string containing a posix path or whatever)?
12:12:03
gendl
(setq pathname "/usr/lib/foo/bar") (make-pathname :directory (append (list :absolute) (subseq (pathname-directory pathname) 2)))
12:15:08
jackdaniel
my take would be: (make-pathname :defaults *pathname* :directory (let ((d (pathname-directory *pathname*))) (list* (car d) (cddr d))))
12:15:24
|3b|
hmm, 'valid pathname directory' definition doesn't allow for keywords (aside from 'implementation defined' case)
12:15:45
pfdietz
You can create a new pathname defaulting to the old one, but with the :directory set to a different value. However, I'm not sure what "deepest n" would mean in the presence of :wild-inferiors.
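A sketch of that defaulting approach; PATHNAME-TAIL is a hypothetical name, and it deliberately ignores :wild/:wild-inferiors components and the directory-vs-file distinction:

```lisp
;; Hypothetical helper: keep the N deepest directory components
;; of PATHNAME as a relative pathname, defaulting name/type/etc.
;; from the original.  Not robust against :wild-inferiors.
(defun pathname-tail (pathname n)
  (let ((dir (rest (pathname-directory pathname)))) ; drop :absolute/:relative
    (make-pathname :directory (cons :relative (last dir n))
                   :defaults pathname)))

;; (pathname-tail #p"/usr/lib/foo/bar" 2) => #P"lib/foo/bar"
```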
12:16:13
specbot
Restrictions on Examining a Pathname Directory Component: http://www.lispworks.com/reference/HyperSpec/Body/19_bbdc.htm
12:18:26
|3b|
also have to handle (or specify away) the case of directory only pathnames vs file pathnames since example counted the filename as 1 element
12:20:24
|3b|
and specified in ways that doesn't always map well to the systems that ended up popular :/
12:23:50
|3b|
ACTION votes :wild, :wild-inferiors, :unspecific should run DIRECTORY on it then return a list of results shortened to last N components :)
12:30:05
gendl
_death: it looks like enough-namestring is kind of the inverse of what is desired. But maybe it can be used somehow to achieve it.
12:35:42
gendl
anyway so much for my attempts at political correctness. Anything we can do to encourage more females in here is welcome. ELS needs more females on its review committees etc. Let's start a slogan "Lisp is for girls."
13:14:35
gendl
shka_: also check uiop. One of uiop's stated goals is to obsolete little utility libraries like cl-fad.
13:26:00
Demosthenex
so i'm trying to make my restful api downloader multithreaded with lparallel, i've got most of it ok except the rate limiter. each rest request return how many requests against the limit in the headers, and i was calculating a delay based on the returned value which is local for threads, but now i'm hitting the rate limit constantly.
13:27:46
Demosthenex
though maybe there isn't much point, given i can hit the rate limit without threads, i spend most of my time waiting.
13:28:03
|3b|
if just a shared number is enough (and you can find a portability lib or limit implementations), atomics, otherwise a mutex and shared variable
13:28:41
|3b|
yeah, if your rate limit is less than processing rate, threads just give you a bit higher burst rate
13:32:04
Demosthenex
given that i was calculating a sleep from the response header, i thought i was safe :P
13:33:08
xificurC
getting number of CPUs, do I really need an external library like - https://github.com/muyinliu/cl-cpus/
13:37:04
|3b|
ouch, ircbrowse hasn't updated since april, guess that just leaves http://log.irc.tymoon.eu/freenode/lisp
13:41:48
AeroNotix
xificurC: in the SBCL source they use the exact same syscall on Linux to achieve this but there are no public interfaces that expose this value directly.
13:42:51
AeroNotix
xificurC: https://gist.github.com/AeroNotix/7179e3715320aafa6034164748e87fa5 here is a file from sbcl that uses this syscall
13:47:14
AeroNotix
xificurC: if you're dead against using a library for this (don't know why you would be realistically, we have lots of RAM these days) then you can c+p the code which sbcl uses?
13:51:11
AeroNotix
xificurC: It's not a bad thing. If sbcl itself provided a function for this your interface to it would be exactly the same
13:53:34
AeroNotix
xificurC: the main "issue" is that simply the standard didn't mandate a function for exposing the number of CPUs on the host OS. So it doesn't exist in implementations. That library provides that function.
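For illustration, a Linux-only fallback along those lines, assuming /proc is mounted; cl-cpus wraps the per-platform equivalents of this:

```lisp
;; Linux-only sketch: count "processor" entries in /proc/cpuinfo.
;; Falls back to 1 if the file is unavailable.
(defun cpu-count ()
  (with-open-file (s #p"/proc/cpuinfo" :if-does-not-exist nil)
    (if s
        (loop for line = (read-line s nil)
              while line
              count (and (>= (length line) 9)
                         (string= "processor" line :end2 9)))
        1)))
```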
13:53:44
AeroNotix
I don't know what else that library should do aside from what it says on the tin :)
13:57:10
AeroNotix
Can someone run this for me: (progn (ql:quickload :stmx) (and (stmx:hw-transaction-supported?) (= stmx.asm:+transaction-started+ (stmx.asm:transaction-begin)))) ?
14:05:40
xificurC
AeroNotix: doing, on 1.4.5. btw that progn doesn't work, the reader barks package does not exist
14:06:44
AeroNotix
xificurC: derp, yeah I already had the package loaded in my repl. Inside the progn that won't work you're right
14:10:55
xificurC
|3b| shka_, I managed to rewrite my code to indeed run in parallel, all cores are running on 100% after the changes. The only problem is that it's giving wrong results :) http://sprunge.us/lk0LeM
14:13:40
shka_
also (lparallel:future (lparallel.queue:push-queue (ignore-errors (parse-line line)) queue))
14:14:32
|3b|
it fills queue with futures, then remaining futures just sit there doing nothing, and nothing adds to queue
14:14:47
Shinmera
shka_: I don't think pathname-utils already has what you wanted, but it has very similar functions.
14:14:53
|3b|
actually i guess you get 100 results (and any more that happen to fit in before read finishes)
14:19:30
xificurC
just remember that add-slowlog and add-groklog modify slowlogs/groklogs by setfing on it
14:20:22
|3b|
because it might not be a new binding each iteration, so could get overwritten by next iteration before it is processed
14:22:35
|3b|
create a new binding every time yourself, so the closure captures that instead of the (possibly) reused loop variable
14:24:31
xificurC
where can I read about this issue? I'd need a more detailed explanation because I didn't understand it from the short description you gave |3b|
14:26:52
xificurC
|3b|: I "know" C. I didn't write much of it but can read it to some extent. I know some assembly too if that helps
14:28:07
|3b|
imagine if your loop allocates a buffer once, then stores a pointer to that buffer each time it sends work to other thread, then next iteration overwrites the buffer, so all the threads are looking at same data
14:28:21
xificurC
shka_: you mean the part about "dynamic variables and worker context"? Is the loop variable dynamic?
14:29:34
|3b|
that's effect you get when loop only creates 1 binding and just assigns to it every iteration. closure saves that binding, so all threads are looking at same thing (though you are more likely to move it into a new binding within the thread than passing around pointers in c, so less likely to see partial updates)
14:29:47
djeis[m]
The loop variable is lexical, it just might be allocated once at the start of the loop and then shared by each future.
14:30:32
|3b|
other option (and only correct one in C) is to allocate a new buffer (and new pointer) every iteration
14:31:11
|3b|
if LOOP is implemented to create a new binding every iteration, then each closure would only see the binding active when the closure was created, giving same effect
14:31:39
|3b|
CL doesn't specify which way LOOP is implemented, so you have to manually create a new binding yourself to make sure
14:32:13
|3b|
(equivalent to manually making a copy of the pointer to pass to other threads in the single-buffer C example)
14:33:37
xificurC
shka_: your implementation keeps order of lines. Forcing each future in this fashion might block the main thread for quite some time, no? Depending on how lparallel handles the threads there might be more stalls. Although the main thread doesn't do much work so there shouldn't be much time lost
14:34:16
|3b|
(if in CL you reused the same array every iteration, that would actually be closest to the single-buffer C example, and still wouldn't work even with new bindings for each closure, since each binding would still store same array object)
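The fix |3b| and djeis[m] describe can be sketched like this; CHANNEL, STREAM, and PARSE-LINE are hypothetical stand-ins for whatever the loop actually does:

```lisp
;; Risky: LOOP is allowed to reuse one binding for LINE, so every
;; closure could end up seeing the last line read.
(loop for line = (read-line stream nil)
      while line
      do (lparallel:submit-task channel (lambda () (parse-line line))))

;; Safe: LET creates a fresh binding each iteration, and the closure
;; captures that instead of the (possibly) reused loop variable.
(loop for line = (read-line stream nil)
      while line
      do (let ((line line))
           (lparallel:submit-task channel (lambda () (parse-line line)))))
```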
14:34:55
|3b|
xificurC: forcing the future will either parse it in that thread, or get results calculated by a worker thread
14:36:03
|3b|
so it will sometimes duplicate some of the work done by workers if it happens to catch up with them
14:38:54
makomo
hm, does DEFINE-MODIFY-MACRO accept symbols which name macros as its FUNCTION argument? http://clhs.lisp.se/Body/m_defi_2.htm
14:40:04
makomo
it says "(...) /function/ is applied (...)" but i don't know whether that means functions only, or macros too
14:40:08
specbot
Constraints on the COMMON-LISP Package for Conforming Programs: http://www.lispworks.com/reference/HyperSpec/Body/11_abab.htm
14:40:50
djeis[m]
ACTION sent a long message: < https://matrix.org/_matrix/media/v1/download/matrix.org/ugKnxdRvPzmLYEuRZwNhMDnb >
14:41:06
makomo
jackdaniel: i mean, what part of that page hints that /function/ could name a macro as well?
14:41:50
makomo
jackdaniel: i guess one could infer it from "the expansion of a define-modify-macro is equivalent to the following: (...)" but i always end up wondering if they thought about all of the implications of the example
14:42:16
xificurC
|3b|: "A future is a promise which is fulfilled in parallel. When a future is created, a parallel task is made from the code passed."
14:42:30
jackdaniel
examples per se are not part of the standard, but this is not an example but part of the description
14:43:01
xificurC
based on that I would assume a future will be fulfilled by BODY, and forcing means blocking waiting for it. In this case I'm blocking waiting for it's result of pushing on the queue
14:43:02
|3b|
makomo: i don't see anything in the spec requiring it to name anything, function or otherwise :)
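The question comes down to the expansion shape the spec sketches; since the name lands in operator position there, a macro name works on implementations that follow that sample expansion, though it's not guaranteed portable. A quick illustration:

```lisp
;; The classic use with a function:
(define-modify-macro maxf (&rest others) max)
;; (maxf x 3 7)  expands roughly to  (setf x (max x 3 7))

;; Because MAX sits in operator position in the expansion, swapping
;; in a macro name would also expand correctly under the spec's
;; sample expansion -- but the standard doesn't promise this.
```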
14:46:57
|3b|
and maybe print a dot or something to see if it gets stuck, since large enough sleep might make it take too long overall if it doesn't
14:48:02
|3b|
problem i see is that if queue fills up and you try to push onto it, you will block waiting for something to pop the queue, which happens on the blocked thread
16:59:21
Spaceman77
But where should i look if i want to analyze an image pixel-by-pixel and apply filters
17:01:18
Spaceman77
I intend to create a `simultaneous localization and mapping` system that uses only 1 or 2 cameras. I try to create 3D maps from these images and determine the position of the robot
17:04:18
dim
you might like clasp, an experimental CL compiler that uses LLVM so that you can build apps in both C++ and CL
17:06:15
Spaceman77
I have some experience in programming, but on the other hand, Lisp truly looks like it's `the way aliens program`
17:06:48
dim
maybe the cando docker image is usable (https://hub.docker.com/r/drmeister/cando/), do you know Shinmera?
17:07:52
Shinmera
dim: anyway the problem is primarily that a lot of lisp libraries don't run on clasp yet
17:07:54
dim
Spaceman77: it took me about a week of everyday programming to get past the parens/syntax and the basic stuff, including the *whole* syntax and a first approach at the standard api
17:08:29
djeis[m]
Spaceman77: You actually get used to the syntax fairly quickly, and after that it’s not that much weirder than any other language for the usual stuff. The difference is the rabbit hole can go much deeper :)
17:09:13
Spaceman77
shrdlu68: Nothing and everything. I've never seen a single language being praised so much by some high profile people
17:11:40
djeis[m]
Spaceman77: well, every language you learn will help with that, especially languages that work differently than you’re used to. Lisp is definitely a good choice for expanding your horizons tho.
17:12:24
Spaceman77
I like programming. I like solving problems and thinking about abstractions and whatnot until what i have is the most elegant and readable solution.
17:12:46
dim
Spaceman77: a very nice intro to the language is found in the PAIP book, see https://github.com/norvig/paip-lisp
17:13:13
dim
seems like https://github.com/norvig/paip-lisp/blob/master/docs/chapter1.md is readable directly
17:15:38
Spaceman77
I want to learn lisp. I am studying and intend to work in robotics, and it seems like such a "doomed" field. Every problem is almost impossible to solve. There is no true AI, sensors lie etc. etc. etc.
17:17:48
dim
the more you dig into the details, the more it seems impossible that anything works, usually saying that a domain is more complex than another one only means that you've been digging more in that first domain ;-)
17:23:15
Spaceman77
I just want to tackle some impossible problems. Is Lisp an adequate language to tackle impossible problems?
17:24:06
pjb
Spaceman77: AFAIK, it's the only one. (well, perhaps ometa too, but I'm cheating here).
17:24:37
aeth
Spaceman77: What you get in Lisp is being able to trivially do anything at compile time that you can do at run time.
17:25:01
aeth
I would say "Well, within reason," but you can even get input from the user at compile time, though then you basically turn compile time into run time.
17:26:35
Spaceman77
Oh, this is curious. I've heard of this wacky stuff you can do with Lisp. Is it true that Lisp can interpret itself and change itself?
17:26:57
pjb
(defparameter *version* #.(progn (format *query-io* "What version are we compiling? ") (finish-output *query-io*) (read-line *query-io*)))
17:29:15
dlowe
Spaceman77: Well, it means that we don't have to have another language when we want to write code that operates on code
17:29:51
dim
Spaceman77: e.g. in pgloader I benefit from that by parsing the command language that the users give me into a lisp program, that I compile at run-time and then execute
17:30:19
dlowe
Spaceman77: the source code is made of nested lists. The text files that are parsed into nested lists are a convenience for humans.
17:30:33
|3b|
Spaceman77: are you familiar with c++ template metaprogramming? if so, imagine being able to use a sane programming language instead of template hacks
17:30:35
dim
if you've been doing some C/C++ before, imagine that the preprocessor would be in C/C++ rather than this #define pseudo-language
17:30:36
dlowe
Spaceman77: therefore, manipulating the source code programmatically is manipulating nested lists
17:34:30
Spaceman77
but "The Book" i keep stumbling upon whenever i see people discussing learning lisp is SICP
17:34:53
dim
Spaceman77: I think PAIP first chapter is a good start, and you can read it online at https://github.com/norvig/paip-lisp/blob/master/docs/chapter1.md
17:41:08
dlowe
yeah, but if you made an assembler, it's possible that even interpreted implementations would let you jump into it
17:45:25
Shinmera
anyway, for sbcl, consider https://www.pvk.ca/Blog/2014/03/15/sbcl-the-ultimate-assembly-code-breadboard/
17:45:49
Bike
putting assembly into different languages is kind of a pain. you know how gcc does it, right? this weird shit with strings and dependencies? not great
17:46:30
dlowe
it looks like https://github.com/sile/cl-asm will do x86 assembly and will execute it in an image with sbcl
17:46:37
AeroNotix
Kind of wish the VOP stuff was more documented with a few more examples/tutorials
17:47:50
pfdietz
You don't want it to (just) be documented, you want it to be specified and formally delivered, so it won't change out from under you.
17:48:41
Shinmera
anyhow, might be easier to just write the asm separately, compile to a shared object and then use cffi
17:49:04
AeroNotix
just would like to see how far it's possible to go within just CL itself rather than resorting to that
17:52:48
Shinmera
well, since you're adding a new operator to the compiler, it needs quite a bit of information to know how and when to use it.
17:53:16
_death
played with sb-assem a while ago.. https://gist.github.com/death/5ec259ef473b982898a3c5e36b21b1cd
17:54:59
Shinmera
I did some experiments with the ssa stuff but never took it beyond experiments https://github.com/Shinmera/3d-vectors/blob/master/ssa.lisp
17:57:02
_death
what does "ssa" mean? to me it flashes up as "single static assignment", but here it looks like "sse"
17:59:21
AeroNotix
https://www.pvk.ca/Blog/2014/08/16/how-to-define-new-intrinsics-in-sbcl/ <- useful
18:12:46
phoe
AeroNotix: https://www.pvk.ca/Blog/2014/03/15/sbcl-the-ultimate-assembly-code-breadboard/ <- same author, other interesting post