freenode/#lisp - IRC Chatlog
2:51:27
no-defun-allowed
You could use a generic function and do something like (defmethod foo ((bar type) baz) (foo baz bar)), but that isn't "compile-time" and may cause fun call loops if baz is also of that type.
2:55:33
aeth
specialization-store does type-based dispatch at compile time IF there are type declarations. https://github.com/markcox80/specialization-store/
4:32:11
beach
equwal: There is a general rule in programming that you should use the most specific construct that has the desired effect, so instead of (= 0 ,..) it is preferable to use (zerop ...).
4:34:09
beach
equwal: Normally, a compiler macro should return the original form if it can't alter it, so that the normal function is called instead.
4:38:36
equwal
I think it might not be possible to do this in general without strong typing. How can I return the original form by default if the original form has arguments in the wrong order? So when you send something like (group (list 1 2) (+ 1 1)), things break.
4:40:28
beach
Anyway, if you don't respect the compiler-macro protocol, you might as well put the logic in a function.
4:43:15
beach
That is why, if they detect a situation that they can't do anything about, they return the original form so that the function itself is called at run time.
4:44:57
beach
I am also puzzled by the fact that your compiler macro returns a form that calls GROUP-AUX.
4:45:33
beach
If you know your arguments at compile time, why don't you just compute the result at compile time rather than generating a form that will compute it at run time?
4:48:21
beach
equwal: Are you aware that you cannot use your GROUP function/compiler macro when the arguments are variables, i.e. you can't say (group x y)?
4:48:58
beach
oni-on-ion: That seems like a basic truth about programming to me. Every programmer should already know it.
4:50:04
equwal
I think compiler macros are meant for optimizing stuff at compile time when constraints are met, so I should follow your profound advice.
4:50:11
beach
equwal: They work at compile time, just like macros do. So your arguments N and SOURCE are unevaluated parameters to the form.
4:50:40
beach
equwal: If you have a call such as (GROUP X Y) then they will both be symbols, so the compiler macro will error.
4:51:13
beach
equwal: Therefore, your compiler macro works only when the arguments are known at compile time.
4:51:33
beach
So you might as well compute the entire result then, rather than generating code that computes it at run time.
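The protocol beach describes can be sketched as follows. This is a hedged illustration, not equwal's actual code: GROUP here is a toy that partitions SOURCE into sublists of length N.

```lisp
;; Toy GROUP: partition SOURCE into sublists of length N.
(defun group (n source)
  (loop while source
        collect (loop repeat n while source collect (pop source))))

;; The compiler macro folds the call only when both arguments are
;; compile-time constants; otherwise it returns the original FORM
;; unchanged, so the ordinary function is called at run time.
(define-compiler-macro group (&whole form n source)
  (if (and (constantp n) (constantp source))
      `',(group (eval n) (eval source))   ; compute the result now
      form))                              ; respect the protocol
```

With (group 2 '(a b c d)) the entire result is computed at compile time; with (group x y) the form is returned untouched and the function runs as usual.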
5:14:06
equwal
Well I have proven to myself that reordering arguments based on type data is way more difficult than I am willing to go for such a frivolous thing.
5:15:01
beach
equwal: Strong typing means that there is no way an object of a particular type can mistakenly be taken for an object of a different type at run time. Weak typing is the contrary. Static typing means that some of the types are checked at compile time. Dynamic typing means that the types are checked at run time.
5:15:45
beach
equwal: Yes, it breaks with (+ 1 1) because that is not a number. It is a list of three elements, a symbol and two numbers.
5:16:46
beach
equwal: What you can do in your compiler macro is to return a form that will check the types at run time, and call group-aux differently in each case. But then, there is no point in having a compiler macro. You might as well put that logic in the GROUP function itself.
5:18:02
beach
equwal: I think you really need to contemplate what information is available at compile time and what information is only available at run time, and design your code accordingly. Macros and compiler macros deal with information available at compile time.
5:30:13
beach
equwal: The entire idea of reordering argument forms of a function at compile time is bogus. A programmer using your function will rightfully expect the evaluation order of arguments to be respected. Suppose you were able to do what you say, and then the programmer types (GROUP (1+ X) (MAKE-LIST X)).
5:30:46
beach
equwal: You definitely do not want to reorder those arguments, because that will alter the intentions of the programmer.
5:31:36
beach
So there is basically nothing you can do at compile time, other than for constant arguments.
5:34:46
beach
And that is what compiler macros are for. They can identify information available at compile time, and generate alternative code for the form. If no such information is available, then they should return the original form so that the associated function is called as usual at run time.
5:36:26
beach
p_l: The "programming" in "dynamic programming" is probably the same as the one in "linear programming", i.e. it has nothing to do with computer programming, but with establishing a "program" in the sense of a sequence of actions to be taken.
5:37:31
beach
p_l: But I agree that both those terms are confusing these days when the main use of the word "programming" is "writing computer programs".
6:01:42
equwal
It makes sense to me now. I can't reorder arbitrary arguments at compile time, not sure what I was thinking.
6:17:44
aeth
equwal: I think you could do this with the specialization-store library by having a specialization for ((a foo) (b bar) (c baz)) as well as ((a bar) (b foo) (c baz)) etc. generated by a macro
6:36:05
aeth
note that unless you have type declarations (so it's known at compile time) libraries that use these macros (or making them yourself manually with these macros) will be slow
7:08:57
asarch
What does #+(or) mean (line #63)?: http://paste.scsys.co.uk/584262?ln=on&submit=Format+it%21
7:19:48
aeth
iirc it's because #+NIL would refer to an implementation called NIL, and not false. (or) is NIL (as in false), though
7:36:19
jackdaniel
it is a shame #; didn't make it in as a standard way to comment out the next form. people usually complain that #+(or) is too many parens and they simply put #+nil
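A small illustration of the difference (the guarded forms here are made up for the example):

```lisp
;; (or) with no clauses evaluates to NIL, so this feature expression is
;; always false and the reader always skips the next form.
#+(or) (format t "never read~%")

;; #+nil instead asks "is a feature named NIL on *FEATURES*?" -- false
;; on today's implementations, but NIL once named a real implementation
;; ("New Implementation of Lisp"), and anything that accidentally pushes
;; NIL onto *FEATURES* re-enables every form guarded this way.
#+nil (format t "skipped only by accident of *features*~%")
```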
7:36:55
jackdaniel
ironic that they say that, given they hear the "too many parens" 'argument' over and over as to why "lisp is not a good language"
7:47:19
jackdaniel
sure, I have no problem with writing #+(or) myself, it just itches me to see #+nil on code written by others
8:21:34
jackdaniel
I'm not a crusader hence never. I'm just pointing out that imho making semantic mistakes and arguing that "it works" is not the most reasonable course of action
8:22:10
jackdaniel
just like I would point out that it is not wise to name a variable number when it contains a string
8:29:05
Nilby
There was a bug in trivial-features on CLisp a while back where it pushed NIL onto *features*, and then all the #+nil-commented code suddenly activated.
9:03:39
lieven
anyways, lisp is no different from most other languages in that regard: it runs on an operating system written in another language
9:06:11
lieven
Diip: http://herpolhode.com/rob/utah2000.pdf # they don't take off because it's a shitload of work to make a new environment with all the tools the user expects
9:07:27
erkin
At this point, C is fast because we make it fast with compilers fine-tuned to all hell and back and hardware-level aids for performance.
9:08:03
lieven
one can even argue that the CPU is mostly emulating the ABI in firmware and is architecturally quite different
9:08:10
Diip
That is what I am asking: is there something about modern day chips that makes using lisp harder on bare metal? e.g. lack of hardware-assisted garbage collection; what else?
9:08:40
erkin
There are various factors, but yes, there's been little research into mitigating this problem and moving away from this C-oriented platform.
9:08:57
lieven
no. but if you want to use any of the open source software you're going to have to have a posix layer and these days emulate X/Wayland/Gnome etc
9:09:35
pjb
Diip: But mostly, the problem is that processors don't attach a type to the bits. The type depends on the microinstruction used (not even on the register).
9:11:00
lieven
old mainframe stuff like the BS2000 now run emulated on firmware on modern chips and it's fast enough
9:11:08
pjb
Diip: for example, 01000001110 may be interpreted as a character, code 65, while 01000001000 will be interpreted as a fixnum, value 65.
9:11:11
erkin
It'd be really hard to make a Lisp Machine that performs as well as an x86 chip due to the sheer amount of money, research and manpower poured into x86 over the years. It's extremely complicated, especially since the addition of out-of-order execution.
9:11:39
erkin
IBM's POWER chips barely hold up to performance parity and it's because they do OoO too.
9:11:47
pjb
Diip: then if you try to use a mulfix on 01000001110, a lisp machine processor would produce a trap to signal a type error.
9:11:57
MichaelRaskin
I would say the real problem with modern hardware is that all the devices have underdocumented protocols, and whatever documentation there is, it is false
9:12:04
pjb
Diip: on a normal processor, you're fucked: you've multiplied #\A by something, which is meaningless.
9:13:09
lieven
still, running genera emulated on alpha emulated on amd64 is the fastest lispm in history :)
9:18:29
Diip
so does that mean that in order to fully use lisp on a machine, you need to use c or assembly for the kernel?
9:19:19
erkin
People implement Lisps in C because C is portable (or rather because it's ported a lot).
9:19:38
lieven
you can write your kernel in non standard C with a specific compiler or you can write it in non standard lisp with a specific implementation
9:19:48
MichaelRaskin
The problem is _only_ that you need to show something on the display (and video drivers are a huge ton of mess)
9:20:06
lieven
note that when the intel compiler people wanted to be able to use their C compiler to compile the linux kernel they had to implement a lot of gcc specific stuff
9:20:36
erkin
Then again, Torvalds openly admits that he doesn't really care about the C standard. :-)
9:20:57
beach
It would also be interesting to see how much undefined (but traditional) C behavior they rely on.
9:21:19
lieven
well, a kernel needs to be aware of stuff like memory barriers etc, inline asm and the like
9:21:23
TMA
Diip: in the end, the processor (modern or not) just expects some charges and voltages to be present at certain places within its silicon at some times.
9:22:08
beach
Diip: The fact that few exist is a consequence of the lack of manpower, not of any imagined discrepancy.
9:22:57
lieven
Diip: the problem is not the kernel. you can always target qemu or vmware to limit the drivers. the problem is that in order to use it as a workstation I'd need a browser with all the myriad extensions these have, a video player to watch pr0n and pirated movies, and a ton of other stuff
9:23:39
TMA
Diip: it does not really matter what high (or low) level language was used to move the charges there in time. as beach said -- it is the manpower that needs to be allocated to writing the stuff in order for the stuff to be written; there is no inherent mismatch
9:23:40
lieven
Diip: and if you want to reuse the open source variants you need a posix/X/gnome layer
9:24:03
erkin
And it's nigh impossible to write a web browser with adequate feature parity with the others from scratch these days, even if you do it in, say, C++ on Linux.
9:24:25
erkin
People writing new OSes just port gcc, then port WebKit, then write a browser around that.
9:24:49
MichaelRaskin
And then Google intentionally prevents Chromium-based Edge from working with some Google sites correctly
9:25:30
Diip
I am trying to find a project to get on but everything seems to require some C or to interface with C at some point
9:26:18
MichaelRaskin
lieven: well, they also intentionally break Firefox. And Skype also breaks Firefox
9:26:46
lieven
MichaelRaskin: I am not claiming there are any particular good guys in this play. A plague on all their houses.
9:27:46
MichaelRaskin
Well, Mozilla is strictly better than the other Javascript-capable browser vendors, which does have a lot to do with a low bar
9:29:09
erkin
It turns out there are only two extant graphical web browsers (in useable condition) out there that don't derive from KHTML or Gecko: Dillo and WebSurf.
9:29:25
MichaelRaskin
By the way, Google reCAPTCHA penalises anti-tracking measures more than mistakes in recognition
9:30:20
TMA
Diip: it's like being stuck on an uninhabited island with sufficient resources. you can live there, but you probably won't be as comfortable as in the city with electricity, air conditioning and what else
9:31:24
MichaelRaskin
I also think a large subset of websites becomes _more_ useable if CSS is nuked
9:32:06
MichaelRaskin
(I also read most of the stuff I read on the Web by dumping to plain text via cl-html5-parser)
12:15:15
iarebatman
This is absolutely terrible. I just had a night of completely restless sleep, trying to solve some great mystery involving recursive datasets and multiple circular lists. I still have no idea what my brain was doing, but I guarantee it had something to do with CL, so I’m blaming all of you.
12:16:48
jackdaniel
recursive datasets and circular lists are not often used in CL, I'm sure you've dreamt about clojure
12:17:25
jackdaniel
now I can honestly tell people that they should use cl because clojure causes bad dreams
12:41:39
_death
or (mapcar (lambda (x) (+ x 2)) numbers) .. now, you may want different values to be added, say #1=(2 3 . #1#) .. then SERIES may be your friend.. (collect (#M+ (scan t list) (series 2 3)))
12:42:14
phoe
dim: sometimes I have issues remembering which symbols are in CL and which are in Alexandria :D
12:42:40
dim
yeah I'm trying to not use Alexandria that much, but maybe I should just accept it as a kind of a CDR that completes the standard
12:44:59
dim
what I like about uiop is that it's already there in your implementation of choice usually, nothing extra to install on-top of it, one less build-dependency
12:46:12
phoe
(length (ql:who-depends-on "alexandria")) ;=> 682 (not counting transitive dependencies)
12:46:45
jackdaniel
uiop (and asdf) are as a matter of fact quite a dependency if put in the executable
12:49:07
dim
_death: slurping is good enough in a minority of cases, I agree with you for the general case
12:49:31
jackdaniel
it depends, e.g. on sbcl / ccl you may do concatenate-source-op and then load it in a fresh image. on ecl the compilation and runtime environments are better distinguished, so you can compile a system with asdf and have nothing of it in the executable
12:53:11
_death
here again SERIES may be your friend, by the way, as it has SCAN-FILE and SCAN-STREAM
13:07:40
pfdietz
I'd like to sit down with MichaelRaskin sometime and talk about use cases for code walkers.
13:10:51
vms14
(defun oh (x) (labels ((meh (z) (if (> z 1) (progn (princ z) (print x) (meh (1- z)))))) (meh x)))
13:12:01
pfdietz
I keep coming up with code where I walk over lisp forms. Unlike macroexpand-all, I don't want to expand the macros. That's not possible in general, but I feel like there's a utility there struggling to get out.
13:12:59
MichaelRaskin
Because my code-walking paper explains that there are a lot of things that are possible with macrolet that just happen to be underappreciated
13:13:10
pfdietz
Currently, I'm working on a mutation testing utility for lisp. This involves walking function definitions and mutating them, then seeing if the test suite catches the mutations.
13:13:50
pfdietz
I've used macrolet in the past for passing down information from surrounding scopes at compile time, using ENV as a ghetto symbol table. Very handy.
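That trick can be sketched like this (names here are illustrative, not from pfdietz's code): an outer macro establishes a MACROLET whose expander closes over the "symbol table", and inner macros read it back at expansion time through the &ENVIRONMENT argument:

```lisp
;; Illustrative names only.  WITH-CONTEXT stows an alist in a local
;; macro; CURRENT-DEPTH retrieves an entry at macroexpansion time by
;; expanding that local macro in the lexical environment ENV.
(defmacro with-context ((&rest bindings) &body body)
  `(macrolet ((context-lookup (key)
                (cdr (assoc key ',bindings))))
     ,@body))

(defmacro current-depth (&environment env)
  (macroexpand '(context-lookup :depth) env))

;; (with-context ((:depth . 3))
;;   (current-depth))            ; expands to 3 at compile time
```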
13:15:00
pfdietz
Back to mutation testing: the goal there is to mutate the code without expanding it too much (and you don't want to mutate the glue code in the macroexpansions). Current approach adds methods for walking macro forms.
13:15:29
pfdietz
Sufficiently complex macros have to be expanded anyway, but "simple" ones can be handled more cleanly.
13:16:09
pfdietz
In this situation it's ok to screw up. If the mutated function doesn't compile, toss it out. No big loss.
13:16:41
pfdietz
So some analysis during the code walking is useful to detect when a mutation is bad in that sense.
13:16:47
MichaelRaskin
I mean, you could enumerate the sub-s-expressions of the macro, then expand, then find the sub-s-expressions eq to some original ones
13:17:41
MichaelRaskin
pfdietz: whatever you do, there are mutations that leave code correct unexpectedly…
13:18:08
pfdietz
Another case: in my random tester, I come up with big random lisp forms that expose compiler bugs. After I find them, I want to reduce them to minimal forms that still show the bug. This involves walking.
13:18:43
MichaelRaskin
Please please please don't tell me this includes compiler bugs in macroexpansion
13:18:52
pfdietz
Right, but you want to try to bias the mutations away from those. Perfect isn't needed.
13:19:52
MichaelRaskin
Well, I would start with macroexpand-all, and if the bug is still there, you are in luck
13:20:34
pfdietz
And yet another case: the old Waters' COVER package. It uses symbol shadowing and macros to implement code coverage annotations. Unfortunately it doesn't work with certain macros, like ITERATE, which also walk.
13:20:57
MichaelRaskin
What type of test suite are you interested in? Ultra-unit-test type that call with specific arguments and assert exact equality of output, or property-checking?
13:21:56
pfdietz
For mutation testing, it doesn't matter what the test suites are. For my random tester, I generate individual functions and look for crashes or behavioral differences between optimize settings (and inline/notinline, type decls vs. no types, etc.)
13:23:13
pfdietz
The mutation testing walker would benefit from having some compiler-like information at walk time. For example, knowing that variable X is never assigned to, or that X and Y always have the same value.
13:23:31
MichaelRaskin
Well, for property-based test suites there are often a lot of function calls; so if the mutated function always returns values equal to the unmutated one's, it is a different situation from returning different values but somehow passing the test
13:25:04
pfdietz
The point of mutation testing is not to test the mutated function, it's to evaluate the adequacy of the test suite. So if the test suite is not checking for the right return values it would be inadequate.
13:25:55
MichaelRaskin
pfdietz: it also depends on the task, sometimes you do not want to check for precise output values
13:27:59
MichaelRaskin
I know that mutation testing is for coverage estimation; but some structures of test suites are more likely to provide some kinds of information
13:28:31
pfdietz
The other thing the mutation tester needs is a way to capture lexical information for reuse when a function is mutated.
13:28:57
pfdietz
(let ((x ...)) (defun foo () ...)) ==> you want the X to remain the same when you redefine FOO
13:29:30
pfdietz
Better seen if there are several DEFUNs in that same lexical scope, communicating through those lexical variables.
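For concreteness, a minimal instance of the pattern pfdietz describes (a toy example, not from his mutation tester):

```lisp
;; Two DEFUNs communicating through a shared lexical variable.  Naively
;; re-evaluating just one DEFUN would close over a fresh environment and
;; sever its link to X -- which is why the mutation tester must recapture
;; the surrounding lexical scope when it redefines a function.
(let ((x 0))
  (defun counter-next () (incf x))
  (defun counter-reset () (setf x 0)))
```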
13:30:10
pfdietz
I can hack this, if I can get a list of the lexical variables when DEFUN is macroexpanded.
13:30:51
MichaelRaskin
Well, if you walk all that with agnostic-lizard from the top level, you have the list of lexical variables in a portable way
13:31:40
pfdietz
You'd macroexpand the form and look for all its vars, and see which ones are in the top level env?
13:32:46
MichaelRaskin
For each form that changes the list of locals, I made sure to stow away the thunks to read/write the new locals
13:36:39
pfdietz
For the mutator it's just necessary to set up a symbol-macrolet for the lexical vars at the DEFUN (and appropriate FLET functions for reading/setf-ing).
13:37:50
MichaelRaskin
If I were doing it, I would just mutate the code of the entire file and reevaluate all the things
13:40:01
pfdietz
That's one way to do mutation testing: "supermutants" that are controlled by some special variable. Lots of CASE statements in the code to control which mutant is being activated.
13:50:48
MichaelRaskin
I meant the other direction: just a set of whole-file mutated reloads, maybe living in different packages
14:09:52
pfdietz
I wonder how well macroexpand hooks stack (I assume you were using *macroexpand-hook*).