freenode/#lisp - IRC Chatlog
21:28:15
no-defun-allowed
The HyperSpec is a bit like a dictionary. It is rarely a good idea to try to learn how to do things from it, but you can clear up your understanding using it.
21:37:40
dbotton
In quickproject the defpackage is placed in package.lisp. Is there a reason for not placing the defpackage at the top of the actual package file?
21:42:30
dbotton
Correct, lotuseater was offering a possible answer to my question: that multiple files are being used for the same namespace
21:43:37
mfiano
modules have dependencies, just like the root module. You don't have to load serially
21:43:55
dbotton
So back to my question, rephrased: what is the advantage of having a packages.lisp file instead of leaving each defpackage at the start of its own file?
21:44:52
mfiano
library chosen randomly: https://github.com/Shinmera/verbose/blob/master/verbose.asd#L17
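The linked system definition shows the pattern mfiano means: components declare their own dependencies, so loading need not be serial. A minimal sketch (system and file names here are hypothetical, not from the linked library):

```lisp
;; my-project.asd -- hypothetical system definition.
;; Each component names what it depends on, so ASDF is free to
;; compile independent files in any order.
(asdf:defsystem #:my-project
  :components ((:file "package")
               (:file "strings" :depends-on ("package"))
               (:file "app"     :depends-on ("package" "strings"))))
```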
21:45:07
lotuseater
I also had this question in my mind when it comes to frequently changing some files
21:46:52
dbotton
I.e. you have to copy the definition of the package out of the packages file when re-using the package
21:47:38
mfiano
I am having a hard time parsing what you are even saying. Maybe someone else can ask the right question so you can ask the right question.
21:47:54
dbotton
Let me give an example, if I have a file that defines a package of string functions
21:48:29
dbotton
If I want to copy that file to a new project, I have to copy that file and also cut and paste from that packages.lisp
21:50:05
Xach
dbotton: if a project is big enough to have a separate package file, it's common to define a system for it, and depend on the system, not copy things around.
21:50:35
Xach
I sometimes copy rather than depend-on, but only for pretty small self-contained things. Single functions, or self-contained files of related things.
21:53:06
dbotton
Ok, so you find in large projects defining all the packages in one file works out well.
22:20:51
fiddlerwoaroof
I've found it more convenient to centralize packages in package.lisp because then, when I want to move a set of definitions to a new file, I just put (in-package :the-package) and copy/paste
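The workflow fiddlerwoaroof describes can be sketched like this (package and function names are made up for illustration):

```lisp
;; package.lisp -- the single, central place where packages are defined.
(defpackage #:my-project
  (:use #:cl)
  (:export #:greet))

;; Every other file just names the package and adds definitions, so
;; moving a definition to a new file is a plain copy/paste plus one
;; in-package form at the top:
(in-package #:my-project)

(defun greet (name)
  (format nil "Hello, ~A!" name))
```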
22:21:08
fiddlerwoaroof
Ultimately, I want to store the code in the lisp image and treat files as disposable
0:17:19
lotuseater
cepl seems nice, but I'm not sure how to start; the videos are for the old version
0:42:03
mfiano
Speaking of compiler macros, I don't remember the last time I needed to write one, but I have a question related to them: Do I have to gensym when introducing a new lexical variable as part of its expansion to prevent unwanted capture in the same case as for regular macros?
1:56:28
_death
mfiano: yes.. e.g., (define-compiler-macro foo (x y) `(let ((z ,x)) (list z ,y))) (let ((z 123)) (foo 1 z)) ==> (1 1)
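The fix is the same as for regular macros: bind a fresh gensym instead of a literal symbol. A sketch of the corrected version of the expansion above (FOO's function definition is assumed for completeness):

```lisp
(defun foo (x y)
  (list x y))

;; A fresh, uninterned symbol prevents the caller's Z from being
;; captured by the LET the compiler macro introduces:
(define-compiler-macro foo (x y)
  (let ((z (gensym "Z")))
    `(let ((,z ,x))
       (list ,z ,y))))

;; (let ((z 123)) (foo 1 z)) now returns (1 123), not (1 1).
```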
2:37:13
_death
https://www.cs.cmu.edu/Groups/AI/html/cltl/clm/node101.html#SECTION001240000000000000000
5:00:34
fiddlerwoaroof
So, Lispworks is the only implementation (of three) I can get to work on my new laptop
5:03:47
aeth
I'm surprised it works at all. These sorts of things tend not to care about obscure, GCed languages.
5:04:44
fiddlerwoaroof
mmap: Cannot allocate memory / ensure_space: failed to validate 1073741824 bytes at 0x1000000000
5:05:31
no-defun-allowed
aeth: A funny coincidence, as I'm reading about how refcounting is faster on an M1 processor.
5:10:28
no-defun-allowed
It takes 6.5ns to create and free a NSObject on M1/Objective-C, and 6.7ns to create and eventually free a standard-object on x86-64/SBCL. That's a 3% performance increase; something you'd expect from Intel marketing these days :)
5:11:12
no-defun-allowed
fiddlerwoaroof: Seriously though, do you find it any faster? Most articles I find say it's much faster than an Intel MacBook.
5:32:25
aeth
oni-on-ion: Apple is migrating to ARM because (1) its ARM chips are competitive with Intel's x86-64 chips and (2) Apple wants to eventually merge macOS (a cost) with iOS (where all of their profit is)
5:34:28
aeth
on the one hand, JVM would also be hard to emulate; on the other, there might actually be Java apps on macOS worth emulating
5:34:35
theemacsshibe
An interesting headline: "Microsoft contributes to Java port for Apple silicon Macs"
5:35:30
aeth
fiddlerwoaroof: Yes, a disturbing trend. iOS is the world's most popular locked-down platform.
5:38:40
aeth
fiddlerwoaroof: maybe it's good news for iOS, but it's bad news for macOS, and I'm not looking forward to getting bug reports from Mac users.
5:41:28
aeth
I have no idea how you'd even attempt to mix the CL workflow with stuff like this, though: https://lapcatsoftware.com/articles/unsigned.html
5:43:26
fiddlerwoaroof
And, the basic path is straightforward: sign the sbcl executable and always launch your app through that executable
5:54:04
aeth
That's basically saying to avoid save-lisp-and-die exporting even if you do that on every other platform.
5:54:43
fiddlerwoaroof
The objective-c runtime as a whole is a much nicer thing to interact with than the low-level APIs of alternative platforms
5:55:26
fiddlerwoaroof
Anyways, this is what abstraction is for: something like ASDF could make a "thing" that works on whichever platform you're targeting
5:56:07
oni-on-ion
objective-c is one of my top favorite languages, there was a point i made my own runtime. it talked to gnu smalltalk =)
5:58:22
aeth
fiddlerwoaroof: afaik, you can't sign it in a way that doesn't have scary warnings without paying Apple 100 $/year
5:59:26
aeth
According to the article, "Mac developers must sign up for the Apple Developer Program, sign a legal agreement, and pay an annual fee of USD $99 plus tax in order to obtain a Developer ID code signing certificate and upload software to Apple for notarization."
6:01:03
aeth
The only thing that saves CL here is that most non-commercial applications will just be distributed in .lisp source code form for local compilation and running on a presumably-signed already-installed CL compiler
6:01:31
fiddlerwoaroof
The thing is, most applications you could just pop up a loading screen and compile the code on the fly
6:01:56
aeth
Any end-user application written in CL (which can't assume that the user has their own CL installed) will either have to be commercial or money-losing.
6:02:26
aeth
That's not really the common workflow, though. Generally, you export an executable if you don't expect the user to have CL installed.
6:03:02
aeth
No, the output of save-lisp-and-die would, afaik, have to be signed, which costs 100 USD/year.
6:03:58
aeth
Now you have to distribute a matching SBCL version. Java can get away with this because lots of things use Java.
6:04:29
oni-on-ion
hmm, can Lisp (fasl) be disassembled quite easily? Can one distribute only the fasl/core, and are those somewhat protected?
6:05:17
fiddlerwoaroof
But, if you're not distributing to developers, you'd need all this stuff anyways
6:06:02
fiddlerwoaroof
You also need some functions like this to grab resources out of the app bundle: https://github.com/cjdev/aws-access/blob/master/src/objc-utils.lisp
6:08:48
aeth
Alternatively, $99/year is like an entire computer per decade, on top of having to actually buy an entire Apple computer per decade in order to test/build the application.
6:11:02
aeth
Breaking the law in the process of making your software is essentially a non-starter for distributing software, anyway.
6:11:44
fiddlerwoaroof
I mean, if you're going the commercial route, you're not going to worry about $99/year
6:12:39
aeth
If you're going the commercial route and not supporting iOS, then there's a very good chance that the money you make from macOS will not be enough to make up for the increased expense for supporting macOS.
6:13:01
aeth
Especially when Apple goes out of its way to act like it's in Microsoft's position, not a <5% position.
6:15:49
aeth
My main concern with macOS, though, would be if they require everything to go through the App Store, like with iOS. Obviously not literally, since you wouldn't be able to develop for macOS (or iOS!) if they did, but contractually, it could be required.
6:16:43
aeth
Every indication is that they're moving to a macOS-iOS merger in the long run (except touch support, which afaik is still missing from macOS, unlike, say, Windows)
6:18:03
aeth
Considering how long everything I do takes, I wouldn't be surprised if Apple locks down their desktop/laptop platform before I even complete anything that I would want to export as a binary.
6:37:23
fiddlerwoaroof
My work mac looks like this: 166.41s user 9.68s system 165% cpu 1:46.50 total
6:37:53
fiddlerwoaroof
My 16 core Ryzen 2 box looks like this: 140.36s user 2.52s system 202% cpu 1:10.62 total
6:42:16
aeth
Not surprising... it's only going to be competitive with roughly contemporary AMD going forward unless Intel can turn themselves around. And I wouldn't be surprised if Intel finds a way to fall into #4 or #5 by 2023.
6:43:35
aeth
We should do a gofundme to make a Lisp machine CPU on TSMC 5 nm so we can be #3 ahead of Intel
6:45:24
fiddlerwoaroof
Since everything is built on the Objective-C runtime (more or less), Apple devices are more like a Lisp Machine than most Linux devices
6:48:26
no-defun-allowed
aeth: I want to make a list of (reasonable, RISC-y) things to put in a Lisp-oriented extension to RISC-V.
6:54:45
beach
no-defun-allowed: I have probably said this before, but my bet would be on some write barrier or read barrier.
6:56:56
no-defun-allowed
That's usually done with the MMU, marking pages as read-only and trapping writes (for a write barrier), no?
6:59:10
fiddlerwoaroof
So, if I understand correctly, they reserve several bits for information about where the pointer points to
7:01:27
no-defun-allowed
I recall a similar thing happens with Nettle's replicating copying collector, but it's worse as that normalization requires a memory read (to follow the forwarding pointer).
7:02:36
no-defun-allowed
But in applicative languages like ML, which it was designed for, EQ supposedly is rarely used; though testing reference equality is preferable to testing structural equality.
7:03:53
beach
no-defun-allowed: Yes, there are tendencies like that, e.g. "Our language semantics makes everything expensive anyway, so using an inefficient mechanism here doesn't matter much".
7:06:02
aeth
no-defun-allowed: I personally wouldn't make extensions unless you actually had real programs from real implementations running, profiled them, and determined which bottlenecks are addressable with new instructions rather than with better compilers
7:07:44
aeth
I'm curious whether, e.g., making a 68-bit architecture with 4 tag bits might be the way to go, not for performance, but for 64-bit fixnums and unboxed double-floats.
7:08:39
no-defun-allowed
Now you have the problem of finding 68-bit memory. And you may want another bit for incremental marking.
7:09:46
beach
It is entirely possible that any useful extension would not be RISC-y enough to be considered.
7:13:09
beach
Not that I know what it would be. I already know how to do it in a not-too-costly way.
7:32:39
no-defun-allowed
The code that calls a generic function would be replaced with a call to the last effective method (with a prologue that calls the generic function as usual, if the last method is no longer applicable).
7:33:40
no-defun-allowed
It has been done on normal CPUs with Smalltalk, Self, Java and JavaScript at the least; so it may not be necessary to provide any processor support.
7:34:33
beach
The technique I was thinking of consists of using type information in the caller to create a call-site-specific discriminating function, and change it when the generic-function changes.
7:35:59
no-defun-allowed
The polymorphic inline cache described in <https://bibliography.selflanguage.org/_static/pics.pdf> is a bit closer to your technique, but it still picks the most common methods from runtime information.
7:38:31
no-defun-allowed
The only support that requires is the ability to overwrite the call site, but that's not a big deal, and it is also doable on stock hardware.
7:48:44
no-defun-allowed
The one idea I have which may be better done in hardware (which gilberth gave me) is computing the address of an element in arrays of different element types when performing a generic AREF. (SBCL calls that "hairy", which is much more fun to say than "generic" though.)
7:58:49
beach
no-defun-allowed: I am thinking that things like AREF and CAR/CDR are usually done in a loop. So then, it is possible to make the type of the object a loop invariant.
8:00:33
beach
Plus, application writers who care that much about performance would stick in a type declaration that could then be verified quite easily.
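The kind of declaration beach mentions can be sketched like this (a hypothetical summing function, not from the discussion): once the element type is declared, the compiler can hoist the type dispatch out of the loop and use a specialized AREF throughout, instead of the "hairy"/generic one.

```lisp
;; Declaring the array's element type lets the compiler pick a fast,
;; specialized AREF once, rather than dispatching on every iteration.
(defun sum-doubles (v)
  (declare (type (simple-array double-float (*)) v)
           (optimize (speed 3) (safety 1)))
  (let ((sum 0d0))
    (declare (type double-float sum))
    (dotimes (i (length v) sum)
      (incf sum (aref v i)))))
```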
8:02:23
beach
Speaking of which, I am convinced that Common Lisp implementations and their compilers are full of "optimizations" that are basically useless.
8:04:20
beach
In the case of SBCL NIL, it makes it necessary to do two tests in a loop over a list, rather than a single test, in order to determine the end.
8:05:14
beach
Though stassats assures me that there is no performance penalty for making two tests rather than one.
8:07:18
aeth
What I'm more concerned about than maintenance is compilation times. A lot of compilers are incredibly slow to get that extra speedup at the end, and it's questionable whether it's worth it, especially if you translate that sort of approach to a CL compiler, which will be compiling more. (Yes, CL has optimization levels, but they're not that flexible)
8:08:05
aeth
People have complained about SBCL compilation times, but I don't really see that, except with a few edge case libraries (is Ironclad still really slow?)