freenode/#lisp - IRC Chatlog
1:16:35
jmercouris
Can someone explain why write-to-string for my object results in extra quote symbols? https://gist.github.com/d0a026f36d315d36f0c42787216714a7
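(The extra quote characters come from WRITE-TO-STRING's default PRIN1-style escaping; a minimal sketch of the difference:)

```lisp
;; WRITE-TO-STRING escapes by default (PRIN1 semantics), so a string
;; comes back wrapped in literal quote characters; PRINC semantics do not.
(write-to-string "hello")             ; => "\"hello\""
(princ-to-string "hello")             ; => "hello"
(write-to-string "hello" :escape nil) ; => "hello"
```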
1:26:16
jmercouris
It is a GTK type of issue; basically I can't add things to a GTK list store (the model part of MVC for a GTK tree view) that are not strings, therefore I have to maintain two lists: one of my actual objects in my model, and one of their string representations
1:26:39
jmercouris
So I know what I need to do, it's just kind of a dumb limitation of the way GTK works, I never thought I'd be glad for the Cocoa delegate model
1:28:24
jmercouris
p_l: Not that I am aware of, I'm using cl-cffi-gtk, which seems to be a very thin layer over C
1:31:43
jmercouris
It is okay though, it's just for the first release of GTK, I'm sure when some GTK users use it, they'll submit some PRs, at least I hope :D
1:35:46
jmercouris
Apparently you can also pass pointers to the model, interesting: https://en.wikibooks.org/wiki/GTK%2B_By_Example/Tree_View/Tree_Models
1:36:20
jmercouris
I can't imagine how they would render a pointer as a column in a GTK Tree view though
1:41:11
p_l
jmercouris: http://www.crategus.com/books/cl-cffi-gtk/pages/gtk_class_gtk-tree-model.html ?
1:51:52
p_l
jmercouris: yes, and the library provides you with an interface to subclass something that acts as GObject
1:54:04
jmercouris
p_l: I didn't see that actually, but the issue is that I need to display any kind of arbitrary object within the tree view
1:54:50
jmercouris
p_l: It is for my minibuffer completions, they are just the lisp objects passed to the view to be rendered, in cocoa I do (write-to-string object) and it'll show that object representation for any arbitrary object I add into the model
1:59:29
p_l
hard to say, but I suspect it might just as well be possible to subclass and override the list-store one
1:59:48
p_l
also, just found confirmation that you can subclass GObjects in cl-cffi-gtk: http://www.crategus.com/books/cl-gtk/gtk-tutorial_16.html#SEC201
2:00:47
jmercouris
but I don't want to alter the completions by having to make them be a subclass of GObjects, because it has to work for both Cocoa and GTK
2:02:36
jmercouris
At any rate, the completion will eventually pass back the original object to the c
2:05:07
aeth
pagnol: You probably want to use helper functions within an eval-when when macros get really complicated.
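(A minimal sketch of the EVAL-WHEN pattern aeth describes; SYMBOL-APPEND and DEFINE-PREDICATE are hypothetical names:)

```lisp
;; The helper must exist at macroexpansion time, hence EVAL-WHEN.
(eval-when (:compile-toplevel :load-toplevel :execute)
  (defun symbol-append (&rest parts)
    (intern (format nil "~{~A~}" parts))))

;; The macro calls the helper while building its expansion.
(defmacro define-predicate (type)
  `(defun ,(symbol-append type '-p) (x)
     (typep x ',type)))
```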
2:05:20
p_l
jmercouris: you could probably subclass a GtkLabel or something like that, as a wrapper around any random CL object, then put it into list-model
2:06:01
jmercouris
p_l: That's a creative approach, make something like a subclassed GObject container object?
2:06:32
jmercouris
Why not just something like gobject-container-object with a slot for a lisp object
2:07:09
p_l
jmercouris: yeah, subclass gtk-label, adding an extra slot (a reference to the CL object) and an :after method on initialize-instance that sets the label to the stringified form of the passed-in object
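(A plain-CLOS sketch of that wrapper idea; OBJECT-WRAPPER is a hypothetical name, and a real version would subclass GTK-LABEL through cl-cffi-gtk's GObject machinery instead:)

```lisp
(defclass object-wrapper ()
  ((lisp-object :initarg :lisp-object :reader lisp-object)
   (label-text  :accessor label-text)))

;; Stringify the wrapped object once, at creation time, so the
;; widget-facing text is ready for any arbitrary Lisp object.
(defmethod initialize-instance :after ((w object-wrapper) &key lisp-object)
  (setf (label-text w) (write-to-string lisp-object)))
```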
2:08:00
pagnol
aeth, I have set up a macro that defines a class and would like to have another macro that lets me operate on instances of the class defined by the first macro
2:08:56
pagnol
the second macro would like some knowledge that the first macro had... so I'm wondering if I can store something globally for the second macro to pick up?
2:10:07
aeth
pagnol: You could create functions instead of macros, and ideally put that knowledge within the defun you're generating. The general advice in this channel is "Does that have to be a macro or can an inline function work?"
2:10:34
p_l
jmercouris: hmm, it looks like you can't just put a widget reference in TreeModel, but you can put in any random crap and write your own renderer
2:11:45
pagnol
it's probably possible... though since I just learned how to use macros, I'm probably applying them a bit overzealously
2:12:32
jmercouris
p_l: I think I'll do things the hacky way for now, I don't really care to become a GTK expert, it'll take time anyway for me to learn it
2:12:35
aeth
You could even put a closure around the function instead of making that information global, e.g. `(let ((foo ,vital-information)) (defun ,generated-function-name (x) (some-function-call foo 42)))`
2:12:56
p_l
jmercouris: well, if you're willing to go a bit crazy, there's always the option of using cocoa on other platforms
2:15:22
jmercouris
p_l: Oh yeah, definitely, I've seen +cocotron inside the CCL sources, but I wouldn't get any benefits of cocoa, because I don't believe cocoa webkit is ported to Linux :P
2:15:49
jmercouris
I'll actually eventually even have to change the completion model because I would like to make the interface effectively like a server
2:16:07
jmercouris
where the Lisp code is the client/model, and the GUI is the server, and just contains EXTREMELY minimal code
2:18:09
jmercouris
this means if servo comes out tomorrow, I could adapt nEXT to use servo with basically one or two weeks' worth of work
2:18:59
aeth
pagnol: Just as a rule of thumb: macros deal with bindings and places (I hope I'm using the correct terminology). So: define-foo, let-foo, do-foo, with-foo, etc., or any use of define-modify-macro such as inc-foo-f.
2:49:25
jmercouris
p_l: You're not wrong, in general, but the API is extremely small, and IPC can happen over sockets in the worst case scenario
4:09:00
pjb
jmercouris: (symbol-name '\") #| --> "\"" |# (character (symbol-name '\")) #| --> #\" |#
4:13:59
pjb
STRING takes a string designator, which is either a character, a string or a symbol, and returns the corresponding string.
4:15:20
pjb
CHARACTER takes a character designator (which is a character, a string with 1 character or a symbol whose name is a string with 1 character) and returns the corresponding character.
4:15:54
pjb
So those functions let you move between characters, strings (of 1 character), and symbols (whose names are strings of 1 character).
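(Concretely, the designator round trips pjb describes:)

```lisp
;; STRING and CHARACTER accept designators and return the
;; corresponding string or character.
(string #\a)     ; => "a"
(string 'foo)    ; => "FOO"  (a symbol designates its own name)
(character "x")  ; => #\x
(character 'x)   ; => #\X
```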
4:54:14
fiddlerwoaroof
The most annoying thing to me about the CL standard is that there isn't anything in it besides READ-FROM-STRING and PARSE-INTEGER for converting strings to numbers
4:54:30
fiddlerwoaroof
So, you have to use something like the PARSE-NUMBER package to parse arbitrary numbers
4:56:52
beach
I don't understand this argument. So many people are using languages that don't even have a standard. Why is it a disadvantage to have at least SOME features standardized?
4:58:24
fiddlerwoaroof
I'm just saying that it's a bit awkward to have PARSE-INTEGER but not PARSE-FLOAT or similar
4:58:59
aeth
fiddlerwoaroof: We should keep a list of things that should be added to the standard one day.
4:59:09
Bike
but it would be nice if it was in the standard, since the implementation has to have it anyway
4:59:12
beach
All those people who use non-standardized languages use features that are not standardized all the time.
4:59:33
aeth
beach: I think the general argument is that there should be additions to the standard, not that there should be no standard.
4:59:36
p_l
well, by now we have an unofficial living standard as well; it could be nice if it were somehow codified
5:00:35
fiddlerwoaroof
It's more a matter of aesthetics: there seem to be a couple lacunae in the standard that are a bit surprising
5:01:16
fiddlerwoaroof
I'm perfectly happy, though, with the way, say, threading and sockets are standardized.
5:01:23
aeth
p_l: A good starting point would probably be de facto minimum sizes. The size minimums (e.g. for maximum array lengths or fixnum bit sizes) in the standard are tiny and afaik 16-bit-friendly. It would be enlightening to see what the actual minimum sizes are for actual 32-bit and 64-bit implementations, especially if there is a de facto standard that most follow.
5:02:01
aeth
p_l: e.g. A de facto minimum fixnum size for 64-bit implementations should be 60-bit, since that afaik safely covers all of them except clisp.
5:02:25
aeth
If non-conforming implementations can be patched to obey de facto minimums, library authors can make more assumptions.
5:02:50
aeth
Bike: Right now, any string longer than 1024 might not run on a conforming CL implementation.
5:02:59
beach
I personally think that there are more urgent things to do than to attempt to improve the standard. We need programming tools and libraries.
5:03:01
p_l
aeth: fossilizing ASDF a bit and including it in said standard would cover a pretty big hole left by politics and even worse reasons
5:03:28
Bike
the whole idea of things being declared fixnum instead of unsigned-byte whatever is kind of unfortunate imo
5:04:02
aeth
Bike: strings are just character arrays, so afaik array-total-size-limit of 1024 would apply. This could even make some longer docstrings not fully portable. http://www.lispworks.com/documentation/HyperSpec/Body/v_ar_tot.htm
5:04:28
aeth
Bike: If nothing else, it would be nice to know what array-total-size-limit's actual practical minimums are in 32-bit and 64-bit
5:05:02
fiddlerwoaroof
I used that at one point, but it's a bit annoying because I gather that the GTK developers are a bit inconsiderate of non-Gnome users of GTK
5:05:19
p_l
"where's the GUI library" is a veeeery common question ultimately, and the answers were always full of problems...
5:05:56
fiddlerwoaroof
I wish LispWorks would just open it up so someone could port it cross-implementations
5:07:06
fiddlerwoaroof
I would commit to mcclim a lot more, if I could make it run on all my platforms :)
5:07:16
aeth
fiddlerwoaroof: Imo Gnome 3 killed GTK, it's just taking a long time for the slow death to happen
5:07:35
p_l
as for programming tools and libraries - I'd love an HTTP/2 lib, better crypto libs (ironclad isn't), a gRPC lib, and convenience libraries for writing REST-style servers/clients
5:07:37
fiddlerwoaroof
So, right now, I'm mucking about with a cross-implementation objc/lisp bridge to fix the mac side
5:08:17
fiddlerwoaroof
Hmm, my experience has generally been that GTK is better than Qt as far as cross-platform gui libraries go
5:08:28
p_l
(fun fact: Wayland + a ton of custom behaviours is the only really supported GTK3 platform)
5:09:29
fiddlerwoaroof
But, it's a pain to set up a development environment for such a thing on a laptop that's always on the brink of running out of disk space
5:10:44
fiddlerwoaroof
aeth: I think you need both approaches, there's so much distance between the top and the bottom that it would take forever
5:11:19
aeth
fiddlerwoaroof: Emacs is almost there with Emacs Lisp, too bad Emacs Lisp is essentially stepping in a time machine back 30 years.
5:14:14
aeth
The main applications I run in stumpwm are emacs, lxterminal (with zsh), and firefox, though.
5:15:09
fiddlerwoaroof
Like, highlight some text, hit <C-/><RET> and open a google search for the text
5:19:17
aeth
CL needs an editor (with replacements for slime and magit), a terminal, multiple shells (standard sh, extended sh like bash or zsh, and sh mixed with CL like eshell), and a web browser. Then, running on stumpwm, you get a much larger percentage of your time actually in the CL ecosystem.
5:19:44
aeth
Of these, only the multiple shells one sounds even remotely doable because it's not graphical
5:21:55
fiddlerwoaroof
I tell my coworkers that the best implementation of vi is written in emacs lisp
5:22:26
aeth
You'd need equivalents for (at least) paredit (and its numerous competition), magit, evil, and slime
5:22:44
aeth
You'd also probably need modes for other languages because usually people don't program exclusively in CL, which complicates things
5:25:12
aeth
A CL editor would probably have to support C, C++, Python, Perl (although that's increasingly irrelevant), Scheme (at least Guile and Chicken), Racket, and Clojure.
5:25:47
jmercouris
aeth: I think tomorrow I'll release an alpha, and then maybe make an official release sometime this week, if I can get cl-webkit to run on my machine
5:27:36
aeth
jmercouris: Performance. Better code quality. Better interoperability with the CL ecosystem.
5:28:36
aeth
jmercouris: Emacs is a terminal application that glued on a GUI later, so it has really ugly internals afaik.
5:29:48
aeth
fiddlerwoaroof: I was just thinking about things that one might have to edit in a Unix environment. That will include C, C++, Python, Perl, shell, and even possibly Ruby in some distros.
5:30:46
aeth
jmercouris: Just being able to run the Lisp portion of an editor in a fast lisp like SBCL instead of a slow lisp like Emacs Lisp would be a huge win, without much changed.
5:32:29
jmercouris
pjb: Some people install one billion packages, and then complain when their nyan cat rainbow modeline slows down their system
5:32:54
aeth
Emacs is a horrible application. I don't use ERC because I persist my IRC, and I don't persist my Emacs. I restart my Emacs as often as possible. Helps with memory.
5:33:59
aeth
It wouldn't be hard at all to write something more reliable than Emacs. Just keep safety above 0.
5:37:42
fiddlerwoaroof
Hmm, should give circe a whirl: using trim-buffers, erc seems to be fairly fine for me
5:38:25
aeth
jmercouris: I have 16 GB of RAM on my desktop (although I run IRC on a machine with 512 MB, where memory usage is very important), but even when I have 64 GB of RAM, I'll still hate when RAM gets wasted.
5:38:51
jmercouris
aeth: All ram is lost, like tears in rain, do not fight against electron.js, embrace it
5:39:20
jmercouris
Ram is like sand, the harder we try to hold it, the quicker it runs through our hands
5:39:24
aeth
I aggressively fight RAM waste and I'm still at 1.66 GB before I do anything interesting with it. If I didn't fight it, I'd probably start at 3 GB or more
5:40:27
p_l
jmercouris: I fight hard against RAM waste because I already lost the fight on web front
5:41:26
aeth
I don't like wasted RAM, but I do prefer CPU performance over RAM efficiency, e.g. I prefer SBCL over alternative CLs even though SBCL starts around 40-60 MB before doing anything.
5:42:03
fiddlerwoaroof
I find that system responsiveness is more affected by RAM usage than CPU usage these days
5:42:26
fiddlerwoaroof
e.g. if my desktop starts feeling laggy, I close all my tabs and everything is snappy again :)
5:42:38
aeth
freshly started up (and my OS is not on a recent SBCL!) 49 MB in a terminal, 51 MB in SLIME
5:43:39
p_l
top three apps on my system: Chrome using 4GB (surprisingly little, I suspect shenanigans), VMware using 2GB (for a 2GB VM, expected) ... Slack = 1.2GB :|
5:45:30
parjanya
if I create a stream with (process-output (run-program "/usr/bin/ls" '() :output :stream)) , how can I read it?
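(Assuming SBCL's SB-EXT, which matches the RUN-PROGRAM / PROCESS-OUTPUT names in the question, one way to drain that stream:)

```lisp
;; Collect the process's standard output line by line until EOF.
(let* ((process (sb-ext:run-program "/usr/bin/ls" '()
                                    :output :stream :wait nil))
       (output  (sb-ext:process-output process)))
  (unwind-protect
       (loop for line = (read-line output nil nil)
             while line
             collect line)
    (sb-ext:process-close process)))
```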
5:45:44
aeth
Basically the only things that use unjustified amounts of RAM are web browsers, or things that embed web browsers, like Electron apps
5:46:17
aeth
The good news is this: If people can tolerate Electron adding 1 GB per Electron app, they can probably tolerate SBCL adding 100 MB per CL app.
5:46:41
p_l
aeth: and then there are applications that take "electron" and add "even more unjustified resource usage", like Slack
5:47:14
aeth
If anyone has (64-bit) Electron, it would be interesting to see how much RAM a hello world takes. It would probably take too long for me to download just to test that.
5:48:15
aeth
And it would be interesting to compare this to the worst case of a graphical CL hello world app, which is probably going to be 90-120 MB counting the base CL
7:11:44
eviltofu
If I want to make a GUI app in common-lisp that runs under macOS, what should I use?
7:12:52
pjb
You can add my Objective-CL reader macros if you want to have an Objective-C -like syntax.
7:14:17
jackdaniel
so it is certainly possible to run it on macos, but it doesn't have this "native" feel
7:14:19
pjb
If you don't want to make a Cocoa Application, then you can use any implementation with CLIM.
7:25:04
aeth
eviltofu: There's also another kind of graphical application that's possible, too. e.g. https://github.com/vydd/sketch#sketch
7:25:21
aeth
These other things all use OpenGL, so you're definitely not going to get a native feel.
7:29:18
aeth
Some people have built entire GUIs on top of OpenGL, though, e.g. the Blender 3D modeling application.
7:30:26
aeth
Also McCLIM used to have an OpenGL backend, but it was removed here, probably because it was over a decade out of date. https://github.com/robert-strandh/McCLIM/commit/d1bc1e222b36adffa5a76e4c664d1b365d2144e4#diff-b81146e6af95bf6e22cf57459890216f
7:31:01
eviltofu
Sketch is interesting, I was looking at Processing a few weeks back for some coding fun!
8:03:25
jdz
Which OpenGL? On which platform is that specific OpenGL supported, and FFI available for CL?
8:04:41
jdz
I bet OpenGL will be obsolete soon enough, with different vendors moving in different directions: Microsoft -> DirectX, Apple -> Metal, others -> Vulkan.
8:06:05
earl-ducaine
Clutter is a good example of a modern, full-featured graphics toolkit built on top of OpenGL.
8:08:57
pjb
fiddlerwoaroof: ccl doesn't define any objective-c specific reader macro; they're reader macros used for the FFI in general. I wrote my own Objective-CL reader macros to provide the Objective-C syntax to CL.
8:11:21
Shinmera
If anything, OpenGL has been gaining massive traction in recent years thanks to the web and mobile tech
8:13:28
earl-ducaine
Wings 3D is an interesting 3D modeler. The GUI and canvas are both implemented in OpenGL.
8:13:34
earl-ducaine
What's interesting from a CL perspective is that it's a clone of Nendo, which is a stripped-down version of Mirai, which is what Symbolics' S-World suite eventually became after being purchased by Nichimen.
8:17:26
jdz
Also relevant to this discussion: http://blog.johnnovak.net/2016/05/29/cross-platform-gui-trainwreck-2016-edition/
8:31:16
whoman
ACTION is only concerned about HDD lag from swap for general desktop usage. dev tools should be hyperspeed negative latency
8:34:33
aeth
In the world where Electron apps are commonplace, even SBCL is absolutely lightweight in RAM.
8:38:44
jackdaniel
for some of the things I try, SBCL's heap blows above 8GB while CCL fits nicely in 2GB (during compilation), so that's a valid concern for me
8:45:38
loke
jdz: That article dismissed GTK for adding 20 MB to the code size. That's a bit silly, IMHO
8:46:01
loke
His requirements are not only that he wants to draw graphics and stuff, he also wants it really small.
8:48:58
aeth
jackdaniel: But, on the other hand, it's a lot easier to avoid allocation in SBCL through profiling and disassembly, at least ime.
8:49:14
aeth
When I want to write some large thing that doesn't allocate, I'm only sure that it doesn't allocate in SBCL.
8:50:58
jackdaniel
and the whole article seems to be written on purpose to justify creating yet another toolkit (this time in Nim), which is even more silly. If he feels he wants to make one, then he should do it, but making excuses is not something I'm going to appreciate
8:51:36
beach
I think there is a serious lack of back-of-envelope calculations. Instead, what I observe is typical Kahneman stuff; the fast brain module is lousy with math, and the slow module is lazy so it believes the fast one.
8:54:53
jdz
Even if the conclusion is not useful the article still includes more data points than "Blender uses OpenGL for GUI".
8:57:20
aeth
jdz: Using OpenGL means, afaik, consistent font rendering on all platforms that consistently looks different from what every other application does. But, yes, you'd have to write it yourself or use a library.
8:57:23
jackdaniel
beach: on the other hand (regarding memory use) I totally get why someone could want to have a small memory footprint *even if he knows* that memory is cheap. If you can have exactly the same functionality in 10KB vs 10MB (given that both are coded reasonably well), the former solution is more elegant by some metrics
8:58:23
beach
jackdaniel: Sure, but that elegance comes at a cost. And I rarely see that cost evaluated against the gain in memory.
8:59:05
beach
jackdaniel: Plus, someone who is obsessed with memory size probably makes other elegance compromises instead.
8:59:59
aeth
Focusing on a small amount of memory makes perfect sense for a server (especially a home server, which should be quiet and low-power). It's probably not as important for graphical applications since that server will be running without X.
9:00:11
jackdaniel
right, that's why I agree that saying "it is 20MB, so it's bad" is a silly argument, yet I acknowledge that sometimes this argument may be a very valid one (and can't be debunked by saying that someone forgot to do his math, or was lazy)
9:01:50
aeth
Personally, I'm a bit selfish. I usually am very wasteful of RAM when I program, but I don't like other applications using it up (less RAM for me!)
9:02:09
loke
By looking at the Nim web site, it seems to me that the users have similar values to the Go crowd.
9:02:43
loke
(I'm old enough to remember Unixes without dynamic linking support at all, and it was not fun)
9:04:45
aeth
loke: A potential counterargument: we have a lot more space now. Even 1 TB SSDs are very affordable.
9:06:04
jackdaniel
dynamic linking goes beyond that - updating library requires updating only one shared object
9:08:13
jackdaniel
sure, did you ever use gentoo? if you don't have dynamic linking it gets even more messy
9:09:38
aeth
I think the future is probably going to be sandboxing all over the place, which could be the death of dynamic linking.
9:09:56
beach
jackdaniel: It is not a matter of debunking, but of getting the full picture. I am perfectly happy to see an argument such as "yes, it is going to take me a year of full-time work to save a small amount of memory, but I think it will be a lot of fun to try", instead of seeing just half the argument.
9:10:53
beach
jackdaniel: I would hate for people to tell me what to put work into, so I certainly do not try to tell others what they would be doing.
9:12:17
jackdaniel
OK, I get it better now, yet sometimes it's not very obvious from the context for me (probably a fault in my understanding)
9:20:42
loke
aeth: Let me tell you, they are doing something interesting, but man is the approach flawed.
9:48:07
loke
p_l: I was about to counter with some worse toolkits... However... Now that I think about it, I do think you're right.
11:20:48
phoe
I have a class whose slot contains a STATIC-VECTOR. If I understand this correctly, I can use TRIVIAL-GARBAGE:FINALIZE that closes over this static vector to make this memory reclaimable by the garbage collector, like, (let ((vector (static-vector instance))) (lambda () (free-static-vector vector))). Do I understand it right?
11:22:17
Shinmera
Because then you'd create an indefinite extent reference and prevent it from being GCd.
11:22:55
phoe
Shinmera: exactly, (lambda () (free-static-vector (static-vector instance))) would be a mistake.
11:23:54
Shinmera
What I like to do is this kind of scheme: https://github.com/Shirakumo/cl-mixed/blob/master/c-object.lisp
11:25:13
phoe
CL-LZMA will get a pretty big update, including an API change that makes use of static vectors.
11:25:15
Shinmera
Point being a generic function that returns a closure that "knows" how to finalise that object's resources.
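(A hedged sketch of phoe's plan, assuming the static-vectors and trivial-garbage libraries and a STATIC-VECTOR slot reader; the closure captures only the vector, never the instance, so the instance stays collectible:)

```lisp
(defun install-static-vector-finalizer (instance)
  ;; Capture the vector, NOT the instance; closing over INSTANCE
  ;; would keep it reachable forever and defeat the finalizer.
  (let ((vector (static-vector instance)))
    (trivial-garbage:finalize
     instance
     (lambda () (static-vectors:free-static-vector vector)))))
```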
11:25:55
phoe
Since right now it's 95% CL overhead coming from coercing vectors from (UNSIGNED-BYTE 8) to T and the other way around
12:31:31
osune_
I will (soon) have a long-running CL application. I want to execute a function with arguments at a specific date and time. This date can be even years in the future. There will be many dates when this function should execute, with different arguments. I feel uncomfortable using a normal timer for such a use case. A workaround would be to use CRON and let it talk to the CL app via sockets which can be selected. But what are possible
12:32:37
jdz
Have a thread that sleeps most of the time, or have a main loop that periodically checks the schedule.
12:33:24
jdz
There might be platform-specific ways to also schedule a signal (alarm) for the process.
12:34:05
Shinmera
Just sleep for half of the time remaining until the next event, until you're within a certain range of acceptability.
12:34:32
osune_
jdz: my problem with the "check schedule periodically" approach would be what my "Nyquist frequency" for sampling would be. But platform-specific solutions are ok
12:34:54
Shinmera
(loop for diff = (- target-time (get-universal-time)) until (< diff 60) do (sleep (/ diff 2)))
12:39:28
tfb
osune_: what's the problem with just checking when the next event is and then sleeping until then? The repeatedly-wake-and-sleep thing seems like ... hard work.
12:40:38
phoe
osune_: there was a thing with minion where you could ask him to remind you to do something in 140380 seconds or something (:
12:41:30
tfb
osune_: every time the schedule changes you need to wake the sleeping thread in any case (same if the whole system is asleep or is restarted which it will be since you're talking about years)
12:43:00
osune_
phoe: I assume I should look at the code for minion. Or are you suggesting I should make requests against minion via IRC ? ;)
12:43:32
Shinmera
Alternatively you can just sleep for a second in a thread and then check all events.
12:44:28
Shinmera
osune_: Then just loop sleep a second and check each event, with events sorted by target time to short-circuit.
12:45:06
Xach
When I had the same problem I made a heap data structure and slept until the next minimum item.
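(A minimal sketch of that loop, with a sorted list standing in for the heap; a real scheduler would also wake the thread when the schedule changes:)

```lisp
;; EVENTS: list of (universal-time . thunk), sorted by time ascending.
(defun run-scheduler (events)
  (loop while events
        do (let* ((next  (first events))
                  (delay (- (car next) (get-universal-time))))
             (if (plusp delay)
                 (sleep (min delay 1)) ; cap naps so new events are noticed
                 (progn (pop events)
                        (funcall (cdr next)))))))
```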
12:49:41
osune_
phoe: minion doesn't have the command anymore, looking at the help-text in this repository https://github.com/stassats/lisp-bots/blob/c39f7e793ef96c3a0715413af404a16faf4f47f8/minion/minion.lisp#L340
12:50:08
phoe
osune_: ow. Well then, obviously you need to write your own IRC bot in order for your thread to be able to sleep. ;)
12:57:03
osune_
jdz: it's POSIX.1-2001. Even if POSIX.1-2008 supersedes it, it seems to be the way to go for now
12:59:25
jdz
osune_: my man page says «Applications should use the timer_gettime() and timer_settime() functions instead of the obsolescent getitimer() and setitimer() functions, respectively.», POSIX.1-2008.
13:01:12
osune_
jdz: mine says basically the same. But as the kernel 'never' breaks userspace, this is a bit ugly but somewhat a non-issue, no?
13:04:13
phoe
I have a function which accepts a stream as a parameter. What is the canonical way of checking if this stream outputs (unsigned-byte 8)s?
13:05:36
osune_
I would assume that the interface is bound to the kernel, as these are system calls which expose the kernel's timer functionality to the user. So deprecation warnings for POSIX interfaces are ugly. But apart from that I don't see a problem using sb-ext:timer?