libera/commonlisp - IRC Chatlog
Search
7:35:31
hayley
I am fairly sure some molecule with one carbon atom and three oxygen atoms would not be a metal.
9:44:20
pve
Hi! Is it correct that only the &aux lambda list keyword can occur after (.. &key key1 key2 ..) in a destructuring lambda list?
9:46:08
specbot
Destructuring Lambda Lists: http://www.lispworks.com/reference/HyperSpec/Body/03_de.htm
9:49:13
pve
beach: Thanks. And implementations are not allowed to come up with additional lambda list keywords?
11:01:52
pjb
Unfortunately, while the implementation may declare additional keywords, the standard doesn't provide a way to know their syntax: do they take 0, 1 or more parameters? Are they changing the parameter kind, or are they just qualifying a "normal" parameter?
11:03:04
jackdaniel
since they are implementation specific, their purpose is also implementation specific, so there wouldn't be much benefit in knowing programmatically how many parameters they take (if you don't understand their semantics)
11:03:16
pjb
(mandatory-1 &unboxed mandatory-2 &others like-rest &fast-key key-1 key-2 &and-some-more-p &and-this a b c)
11:04:03
pjb
jackdaniel: it's not a semantic question, it's a syntactic question. You want to be able to parse lambda-lists, to write macros correctly.
11:08:56
jackdaniel
pjb: if you don't understand the operator (or a lambda list keyword) semantics, then you may at most do moon-walking
11:09:29
jackdaniel
because in theory it may overturn any assumption you bake into the moonwalker
11:20:30
lisp123
pjb: do you mean implementations don't specify the syntax of additional lambda list keywords?
11:30:25
pjb
lisp123: yes. The standard only provides a way for implementations to declare new lambda list keywords, via the lambda-list-keywords constant variable.
11:31:01
pjb
#+ccl lambda-list-keywords --> (&optional &rest &aux &key &allow-other-keys &body &environment &whole ccl::&lexpr ccl::&lap)
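Since lambda-list-keywords is a standard constant, the implementation-specific extras can be discovered portably; a minimal sketch (the variable name *standard-lambda-list-keywords* is made up):

```lisp
;; The standard guarantees LAMBDA-LIST-KEYWORDS contains at least the
;; standard keywords; anything beyond that set is an extension.
(defparameter *standard-lambda-list-keywords*
  '(&allow-other-keys &aux &body &environment &key
    &optional &rest &whole))

(defun extension-lambda-list-keywords ()
  "Return the lambda list keywords this implementation adds."
  (set-difference lambda-list-keywords *standard-lambda-list-keywords*))

;; On CCL this would include CCL::&LEXPR and CCL::&LAP; on many other
;; implementations it returns NIL.
```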
11:31:58
pjb
but it should be something like: ((&optional *) (&rest 1) (&aux *) (&key *) (&allow-other-keys 0) (&body *) (&environment 1) (&whole 1) (ccl::&lexpr ???) (ccl::&lap ???))
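Without such an arity table, the most a portable parser can do is group parameters by the keywords that precede them; a sketch of that portable-but-syntax-blind grouping:

```lisp
(defun group-lambda-list (lambda-list)
  "Group LAMBDA-LIST into (keyword parameters...) sections.
Required parameters come first, under the pseudo-keyword NIL.
Note: without arity information this cannot validate extension
keywords; it only splits on membership in LAMBDA-LIST-KEYWORDS."
  (let ((result '())
        (current (list nil)))
    (dolist (item lambda-list)
      (if (member item lambda-list-keywords)
          (progn (push (nreverse current) result)
                 (setf current (list item)))
          (push item current)))
    (push (nreverse current) result)
    (nreverse result)))

;; (group-lambda-list '(a b &optional c &rest r &key k))
;; => ((NIL A B) (&OPTIONAL C) (&REST R) (&KEY K))
```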
11:33:21
pjb
in the case of macros, you can macroexpand and code walk to learn how the various parameters to the macro are used.
11:33:55
pjb
If a parameter appears in a binding as variable or function name, you know it's not evaluated, but will be bound or fbound.
11:35:18
pjb
Of course, it may also be used at macroexpansion-time so you may miss some information, but in the context of code walking it is enough.
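A trivial illustration of learning a parameter's role from the expansion (the macro is made up for the example):

```lisp
;; Expanding a macro shows how its arguments end up being used: here
;; NAME clearly appears as a binding, so it is bound rather than
;; evaluated.
(defmacro with-counter ((name) &body body)
  `(let ((,name 0)) ,@body))

(macroexpand-1 '(with-counter (n) (incf n) n))
;; => (LET ((N 0)) (INCF N) N)
```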
11:48:56
SR-71
I came across this https://viniciusmo.github.io/blog/2013/02/04/programming-challenges-minesweeper/.
12:53:51
beach
pjb: It is actually even worse. Some lambda-list keywords can appear anywhere in the list, and some can appear more than once, so the task of parsing the lambda list becomes very difficult indeed.
12:58:21
beach
I came up with an extensible parser; each implementation would add rules to the parser. Then, at least we could have a library that does most of the work in an implementation-independent way.
12:58:44
beach
Otherwise, jackdaniel is right. We are doomed to have a parser for each implementation.
12:59:07
beach
Yes, but I am not happy with the result, so I am still thinking about this problem from time to time.
13:54:24
Josh_2
I think in the future perhaps I will just use BKNR for a backup, although it's probably easier to use cl-store to save transaction information to the disk/postgres
13:58:53
pjb
SR-71: the trick is on this line: https://gitlab.com/informatimago/mine/-/blob/master/mine.lisp#L94
13:59:26
pjb
SR-71: in board games, it's advantageous to allocate cells on the border, so you don't have to clip the coordinates!
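The border trick can be sketched like this (the function names are made up, not the ones in mine.lisp):

```lisp
;; Allocate an (h+2) x (w+2) array so every playable cell has a full
;; ring of neighbours; the extra border acts as a sentinel, so no
;; coordinate clipping is ever needed.
(defun make-board (height width &key (border -1))
  (let ((board (make-array (list (+ height 2) (+ width 2))
                           :initial-element border)))
    (dotimes (i height board)
      (dotimes (j width)
        (setf (aref board (1+ i) (1+ j)) 0)))))

(defun count-neighbours (board i j)
  "Count positive-valued neighbours of playable cell (I J), with no
bounds checks: the sentinel border absorbs out-of-range accesses."
  (loop for di from -1 to 1
        sum (loop for dj from -1 to 1
                  unless (and (zerop di) (zerop dj))
                    count (plusp (aref board (+ i di) (+ j dj))))))
```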
17:12:57
Duuqnd
Is there some kind of de facto standard library for handling date and time in a nice (preferably CLOSy) way? I've found myself writing a date-and-time class twice now and would like to avoid duplicating more work.
17:19:27
Josh_2
The author is quek? If so the author has written that you should use local-time instead
17:19:36
Josh_2
https://github.com/quek/simple-date-time/issues/7 Not sure if this is the correct repo
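local-time is the usual recommendation, and its timestamps are CLOS objects, which fits the "CLOSy" requirement; a minimal sketch of its API:

```lisp
;; (ql:quickload "local-time")
(local-time:now)                                     ; a TIMESTAMP instance
(local-time:format-timestring nil (local-time:now))  ; ISO-8601 string
(local-time:timestamp< (local-time:now)
                       (local-time:timestamp+ (local-time:now) 1 :day))
;; => T
```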
17:54:31
jeosol
After much back and forth, I finally packaged one of my example use cases in a Docker container image and got it running successfully. Build time ~42 min, and a 6.75 GB image (code + data) for the largest use case (the smallest is ~3 GB). After testing, processing via the Docker container is much slower (HTTP response time) than running in a simple REPL
17:55:13
jeosol
I guess the main benefit is that it's easier to move around now. This is with the latest SBCL
18:13:50
jeosol
Josh_2: Good question. Essentially, this image is a worker; when launched, it starts hunchentoot to listen for requests. Most of the large case is the data for it. It's a large simulation grid.
18:14:31
jeosol
If none of it is clear, I plan to give a talk in the next few months. It was painful to get the Docker part to work; I am no software/devops guy, but had to learn it the hard way
18:15:48
jeosol
What the image contains is essentially how I run the case: I quickload my system and load these large datasets. It takes longer to run the first time (with a REPL), so I save intermediate objects so I can reload them later. I also dump a core of everything for faster startup
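On SBCL, the core-dumping step looks roughly like this (the core file name and entry point are hypothetical):

```lisp
;; After quickloading the system and loading the large datasets, dump
;; an image so later startups skip all of that work.
(sb-ext:save-lisp-and-die "worker.core"
                          :executable t
                          ;; MY-APP:START-SERVER is a hypothetical entry point
                          :toplevel (lambda () (my-app:start-server))
                          ;; only if SBCL was built with core compression
                          :compression t)
```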
18:16:44
jeosol
Yeah, mostly. There are flat files (think CSV files), and then I have classes that ingest those, and I serialize them to file as an intermediate step
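The serialize-intermediate-objects step is the kind of thing cl-store handles directly; a hedged sketch (GRID-DATA and the file name are made up):

```lisp
;; (ql:quickload "cl-store")
;; *GRID-DATA* is a hypothetical object holding the ingested CSV arrays.
(cl-store:store *grid-data* #p"grid-data.store")    ; serialize to disk
(defvar *restored* (cl-store:restore #p"grid-data.store"))
```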
18:18:16
jeosol
Development with compile-and-run languages would be painful because loading the system takes time. But with incremental development, I can update the system little by little. Now the Docker image(s) contain the workers in the state I need them, so I just send a request and get the HTTP response back
18:19:19
jeosol
Yes, I have large csv files I load. They are arrays that go into a grid, and for uncertainty, there are several versions (50) for the largest example.
18:20:30
jeosol
That's one limitation of the application: it takes a lot of memory to run, because I use the arrays to help the program make better decisions.
18:21:26
jeosol
In the beginning, I tried writing results to databases, but writing was really slow, so I decided to just use arrays and save them to disk
18:23:01
ad-absurdum
After building the latest SBCL 2.2.6 and running the tests I had a test failure: "Invalid exit status: hide-packages.test.sh". Here is a short excerpt from the test log:
18:23:30
ad-absurdum
/ Running hide-packages.test.sh in COMPILE evaluator mode
Unhandled SIMPLE-ERROR in thread #&lt;HIDDEN-SB-THREAD:THREAD "main thread" The assertion (NULL *WEAK-PTRS*) failed with *WEAK-PTRS* =
18:23:51
Josh_2
jeosol: writing to postgres is really fast if you use the correct types. But otherwise the way you are doing it seems really cool :D
18:26:05
jeosol
Josh_2: I used postgres exclusively. It has a bulk-write COPY option which I only learned about later. At the time it was slow because I needed to push lots of data fast
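With Postmodern, the COPY protocol is exposed through its cl-postgres layer; a hedged sketch (connection details, table, and columns are all made up):

```lisp
;; Bulk loading via PostgreSQL's COPY protocol, which is far faster
;; than row-at-a-time INSERTs.
;; (ql:quickload "postmodern")
(postmodern:with-connection '("mydb" "user" "pass" "localhost")
  (let ((writer (cl-postgres:open-db-writer postmodern:*database*
                                            "results" '("x" "y" "value"))))
    (unwind-protect
         (dotimes (i 1000)
           (cl-postgres:db-write-row writer (list i (* i i) (random 1.0))))
      (cl-postgres:close-db-writer writer))))
```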
18:27:27
Josh_2
I remember my fren sent me a big data coding test he was given for a job he applied for, it came with a 3gb CSV and he had to perform certain operations on it as part of the test
18:27:29
jeosol
It will be hard to explain, but I can use analogies. It's fluid simulation; something close would be weather simulation. A requirement for these problems is a grid: the finer the grid, the more detailed the solution
18:27:55
Josh_2
I solved all the problems using CL, but I found that it was practically impossible (due to time) not to use structures with correctly typed slots
18:28:22
Josh_2
I could not use classes and generic functions; I had to specify the types in structures, otherwise it took a very long time to complete
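The structure-with-declared-types pattern looks roughly like this (the ROW structure and TOTAL function are invented for the example):

```lisp
;; A structure with declared slot types lets SBCL generate direct,
;; efficient accessors, unlike slot access on a standard-class instance.
(defstruct (row (:constructor make-row (id amount)))
  (id 0 :type fixnum)
  (amount 0.0d0 :type double-float))

(defun total (rows)
  "Sum AMOUNT over a simple vector of ROW structures."
  (declare (simple-vector rows))
  (let ((sum 0.0d0))
    (declare (double-float sum))
    (dotimes (i (length rows) sum)
      (incf sum (row-amount (svref rows i))))))
```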
18:28:30
jeosol
Josh_2: The biggest I pushed through CL was 60 GB, and SBCL handled it well; though I didn't save it, I just got a file handle and processed the data into another file
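Streaming a huge file through a handle without retaining it is the standard with-open-file pattern (file names and the filter are hypothetical):

```lisp
;; Process a very large CSV line by line; only one line is live at a
;; time, so memory use stays flat regardless of file size.
(with-open-file (in "huge.csv")
  (with-open-file (out "filtered.csv" :direction :output
                                      :if-exists :supersede)
    (loop for line = (read-line in nil)
          while line
          when (search "KEEP" line)   ; hypothetical filter
            do (write-line line out))))
```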
18:29:18
jeosol
Josh_2: I didn't mean it in a bad way; it's just hard to explain on this forum. I will have pictures during the talk to make it clear
18:30:10
jeosol
This is one part that computes F(X) for a given X; the other part is the optimizer that finds the value of X maximizing the objective function. But this part was the most critical and most difficult
18:32:34
jeosol
Josh_2: No it isn't, per se, not now, due to other issues beyond me. But I do plan to give access to students in less developed countries trying to do research as is done in the west. Access is probably going to be online. For this use case, the container packages everything that is needed: code + data + the 3rd-party exes needed to run them
18:34:10
jeosol
The problem is the latency. If I run a small case (one smaller example) on my local box, it takes < 1 second. If I run it via Docker, still on my local box, it takes 3 seconds, and longer if I do a DNS lookup. So the time begins to add up
18:35:56
jeosol
The benefit is for a student who has none of the tools, expertise, etc., but has internet access and can tolerate the possibly high latency. Everything then reduces to a call.
18:36:53
jeosol
Josh_2: access is just receiving a token at this time, then using whatever language, CL, Python, etc., to make the requests.
18:39:03
jeosol
haha. I can't afford to put everything in clusters around the world, but that's possible later. Right now, resources reside in the central US