freenode/#lisp - IRC Chatlog
19:09:59
phoe
gosh, I wrote over a thousand lines of markdown over the course of the last three days
19:54:24
ealfonso
anyone familiar with stefil know how to clear/delete previously defined tests in a suite?
21:34:46
PuercoPop
ealfonso: following ensure-test points to the *TESTS* variable. Also FIND-TEST is SETFable. So (setf (find-test 'my-test) nil) should work
21:37:11
pfdietz_
Is there a CL test framework for property-based testing? That is, it allows descriptions of properties that some piece of software must have, and (separately) ways of generating inputs for the software.
22:13:13
ealfonso
when I saw an array returned from drakma:http-request, I thought it had interpreted the application/json content type and automatically parsed response JSON to an array... I was wrong. apparently I have to add the hack: (push (cons "application" "json") drakma:*text-content-types*) suggested here https://sites.google.com/site/sabraonthehill/home/json-libraries
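[editor's note: a minimal sketch of the workaround ealfonso describes, assuming Quicklisp, Drakma, and yason as the JSON parser; `fetch-json` is an illustrative helper name, not part of any library:]

```lisp
;; Drakma treats application/json as a binary content type by default,
;; so HTTP-REQUEST returns an octet vector. Registering the type in
;; *TEXT-CONTENT-TYPES* makes it return a string instead, which a JSON
;; library (yason here, as one option) can then parse.
(ql:quickload '(:drakma :yason))

(push '("application" . "json") drakma:*text-content-types*)

;; Illustrative helper: GET a URL and parse the JSON body.
;; YASON:PARSE returns a hash table for JSON objects by default.
(defun fetch-json (url)
  (yason:parse (drakma:http-request url)))
```

Parsing itself can be checked without the network, e.g. `(yason:parse "{\"a\": 1}")` yields a hash table mapping `"a"` to `1`.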
23:03:24
pierpa
I like CL's punning. The problem only occurs when interfacing with a format that chose a different set of punnings.
23:28:11
antoszka
Is there a reader macro out there for ingesting/operating on IP addresses written in decimal notation? (and keeping them internally as 32-bit integers as they are?)
23:29:58
Bike
not that i'm aware of. i think socket libraries tend to use vectors as addresses, but i could be wrong
23:32:58
antoszka
Bike: oh, okay, any particular socket library you have in mind? Do you think it'd be useful to write a macro like this?
23:34:07
Bike
e.g. (sb-bsd-sockets:host-ent-address (sb-bsd-sockets:get-host-by-name "google.com")) => #(172 217 3 110)
23:34:47
Bike
as for a reader macro, i don't know, i don't deal with that kind of stuff much, but i thought hardcoded ip addresses weren't common
23:53:50
pierpa
There's only a finite number of characters, and very few of them are usable easily. I wouldn't waste one for this macro. And the convenience would be infinitesimal anyway.
0:01:50
pjb
pierpa: notice that with emacs, you can easily bind any unicode character to an easy key or key-chord.
0:04:20
pjb
pierpa: of course, with emacs, you can also have the convenience the other way: display a long name as a single unicode character (with the compose operator, see e.g. https://www.emacswiki.org/emacs/PrettyLambda)
0:08:59
Josh_2
It's listed in the SBCL 1.4.6 manual but it isn't being recognized by the SBCL I'm using atm
0:18:59
antoszka
pierpa: guess I can see one piece missing, there are make-inet{,6}-address functions for parsing the string, but there are no equivalent print functions, maybe that could use some work.
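[editor's note: a sketch of the pieces discussed above — a dotted-quad parser that packs into a 32-bit integer, its inverse printer, and a `#@` reader macro on top. All names here are illustrative, not from any library, and the parser assumes well-formed input:]

```lisp
;; Parse "a.b.c.d" into a 32-bit unsigned integer.
(defun parse-ipv4 (string)
  (let ((result 0) (octet 0))
    (loop for char across string
          do (if (char= char #\.)
                 (setf result (+ (ash result 8) octet)
                       octet 0)
                 (setf octet (+ (* octet 10) (digit-char-p char)))))
    (+ (ash result 8) octet)))

;; The missing inverse: print a 32-bit integer as dotted-quad notation.
(defun format-ipv4 (address)
  (format nil "~D.~D.~D.~D"
          (ldb (byte 8 24) address) (ldb (byte 8 16) address)
          (ldb (byte 8 8) address) (ldb (byte 8 0) address)))

;; A #@ dispatch macro so #@192.168.0.1 reads as an integer. It
;; consumes characters as long as they look like part of an address.
(set-dispatch-macro-character
 #\# #\@
 (lambda (stream subchar arg)
   (declare (ignore subchar arg))
   (parse-ipv4
    (with-output-to-string (out)
      (loop for char = (peek-char nil stream nil nil)
            while (and char (or (digit-char-p char) (char= char #\.)))
            do (write-char (read-char stream) out))))))
```

With this loaded, `#@192.168.0.1` reads as the integer 3232235521, and `(format-ipv4 3232235521)` gives the string back.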
0:55:14
ealfonso
I have N long-running threads performing some work. occasionally I'd like to peek into the current state of the work from an event-driven thread. I've thought about having each thread write to a global hash table but is there a better way?
0:58:34
ealfonso
the event requires the long-running thread to stop, compute a serializable state object, then continue altering the state
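[editor's note: one common shape for this, sketched with bordeaux-threads: each worker publishes a snapshot of its state under a lock whenever it reaches a consistent point, and the event-driven thread copies the table out. `publish-state` and `snapshot-states` are illustrative names:]

```lisp
(ql:quickload :bordeaux-threads)

(defvar *state-lock* (bt:make-lock "worker-states"))
(defvar *worker-states* (make-hash-table))

;; Called by a worker whenever its state is consistent; the lock is
;; held only for the duration of the hash-table write.
(defun publish-state (worker-id state)
  (bt:with-lock-held (*state-lock*)
    (setf (gethash worker-id *worker-states*) state)))

;; Called from the event thread; returns a copy so the caller can
;; inspect it without holding the lock.
(defun snapshot-states ()
  (bt:with-lock-held (*state-lock*)
    (let ((copy (make-hash-table)))
      (maphash (lambda (k v) (setf (gethash k copy) v))
               *worker-states*)
      copy))))
```

The workers never block each other for long, and the event thread sees whichever snapshot each worker published last.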
5:29:59
phoe
Hm. FIVEAM:MAKE-FIXTURE and FIVEAM:MAKE-TEST are exported symbols but have no definition.
6:02:58
blurgh
Would Lisp being tree-based instead of list-based remove the need for CDR coding and other tricks to get it to run on bare metal?
6:05:41
beach
blurgh: Modern processors are perfectly capable of running Common Lisp well.
6:07:12
blurgh
beach: True, but they're not designed for it. Technically, you could run a truly foreign language like Clean or something based on cellular automata, but would that yield appreciable speed? (OK, maybe Clean would be fast on bare metal given how fast it is already)
6:07:50
beach
blurgh: SBCL is capable of generating very fast native code. I don't know where you get the idea that this is not possible.
6:08:18
phoe
They aren't designed for running Java either, which doesn't prevent it from flourishing on x86_64, with more and more optimizations making their way into the JVM all the time
6:08:30
beach
blurgh: Furthermore, there is nothing special about running on bare metal. The same code generator can be used, with only minor modifications.
6:09:27
beach
blurgh: Or, perhaps by "bare metal" you don't mean "without any operating system", and instead you mean "running native code"?
6:10:02
beach
blurgh: If so, then it is already done, and has been for decades. Most modern Common Lisp systems generate native code on the fly.
6:10:23
blurgh
phoe: C maps 1-to-1 to a Von Neumann architecture computer with a single core. Even if it's a lie now (multiple cores, parallel stuff, etc), it's still faster than any other language. Lisp can come close to C's speed, but can match it only with judicious use of "call 'disassemble', optimize by hand".
6:11:43
beach
blurgh: Can you show us some reference to the claim that Common Lisp "can match it only with judicious use of "call 'disassemble', optimize by hand"?
6:11:51
blurgh
beach: messing around with it, every single benchmark I've ever seen. Lisp /is/ fast, but it's not C.
6:11:56
phoe
Sure thing, but raw machine speed at all costs, including programmer time, debugging convenience, no introspection and memory unsafety isn't what I'm after.
6:11:56
aeth
blurgh: It's easier to write an optimized compiler than to write hardware optimized for a language. So lisp machines are forever dead, but Lisp runs well on modern architectures without a big mismatch
6:12:32
beach
blurgh: You are making claims about the possibilities, but you only look at existing implementations.
6:12:55
aeth
blurgh: C is fast because tons of money goes into C compilers and because C design chooses a low memory overhead and fast execution speed over literally everything else, including nice things like some degree of safety
6:13:43
beach
blurgh: Furthermore, different languages are good for different things. Try using C for something that requires a lot of memory allocation, and you will see that malloc()/free() is much slower than any modern garbage collector.
6:14:23
blurgh
beach: of course calling disassemble and then optimizing it will result in faster code. Doing this automatically at runtime with a JIT is why Julia is frequently as fast as C. aeth: yes, that's probably true. Things like bignums are expensive, but the Right Thing nevertheless.
6:16:30
beach
blurgh: Good. Then you should know that for programs that do roughly the same things in C and Common Lisp, the speed is also comparable. The problem is that most programs don't do the same thing.
6:19:11
blurgh
beach: Then why is it consistently slower in the Language Shootout and blog post tests?
6:19:21
blurgh
http://jng.imagine27.com/index.php/2009-05-03-195227_i_want_to_believe_in_lisp_performance.html
6:19:37
jackdaniel
you can write code fast in Common Lisp and you can write fast code in Common Lisp
6:20:30
aeth
You can make any language fast with enough money. And what will get the money? Languages used by the industry.
6:21:17
aeth
(The amount of money will vary based on the language, but getting good performance out of CL is probably easier than with JS.)
6:21:36
beach
blurgh: I already told you at least two reasons. A typical Common Lisp program will do more things than a C program because most C compilers exploit the fact that the standard allows them to elide things like boundary checks, whereas most Common Lisp compilers generate checks for such things.
6:21:37
beach
Furthermore, as I told you, the fact that Common Lisp is capable of being as fast, doesn't mean that current implementations (that are maintained by volunteers instead of by big corporations) live up to that capability.
6:21:55
jackdaniel
some advantages become obvious only after a program goes above a certain complexity level
6:22:15
beach
blurgh: Again, you look at existing implementations, but you make claims about what is possible for the language as such.
6:24:38
cess11_
What is "speed"? For whom is numbercrunching throughput the only interesting metric?
6:24:47
beach
blurgh: The real question here is whether it is worth programming in C where the programs are vulnerable to various attacks just to gain a bit of performance, or whether you prefer safe code to make you as a programmer more productive at the cost of a little more execution time.
6:25:01
blurgh
jackdaniel: Yes, it is fast. Being within an order of magnitude of C is very impressive for any language. aeth: That's probably true. Nevertheless, Clean is faster than SBCL and only slightly slower than C, while being a rather general language implementation. STALIN and MLton both beat C, but are impractical. What structural features could be improved in Lisp?
6:25:26
beach
blurgh: Maybe you are just embarrassed about the language shootout? Did someone confront you with it, and you were unable to defend yourself?
6:26:27
blurgh
beach: C is unacceptable as a language for serious projects. That's why Lisp needs to be improved. And no, I'm just mulling over things. I've written my own Scheme and have generally been thinking about doing something new.
6:26:28
beach
blurgh: There are no improvements to Common Lisp required. What we need is more people to improve existing implementations. You keep confusing language and implementation. Maybe I am not being clear enough on that point?
6:28:12
jackdaniel
blurgh: it is covered in one of PG's essays – CL has numerous orthogonal features (which are gradually adopted by other languages as well; except maybe macros, which are hard for non-sexp syntax) – the structural advantage is that these features support each other and may be used to improve the program
6:28:16
beach
blurgh: You seem to be convinced that Common Lisp needs to be improved, and that is what you also started by saying (tree based instead of list based, whatever that means), but there is no evidence to support this claim.
6:28:18
blurgh
beach: A lot of admirable work has been done in Common Lisp. It's something of a sum of what's been tried and what's failed in Lisp over the years.
6:28:47
cess11_
I look forward to your future achievements, I'm sure you will revolutionise computing science.
6:28:52
jackdaniel
in other languages, when you adopt some feature, it often feels off – the language may not be well suited for it
6:29:43
jackdaniel
regarding improving Common Lisp – I wouldn't mind if remove-if-not had disappeared ;-)
6:31:40
aeth
jackdaniel: Removing #'remove-if-not requires #'remove-if with #'complement to be optimized
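[editor's note: the equivalence aeth is pointing at — these two calls return the same list, the second building the inverted predicate with COMPLEMENT:]

```lisp
;; REMOVE-IF-NOT keeps elements satisfying the predicate...
(remove-if-not #'evenp '(1 2 3 4 5 6))          ; => (2 4 6)

;; ...and is equivalent to REMOVE-IF with the complemented predicate.
(remove-if (complement #'evenp) '(1 2 3 4 5 6)) ; => (2 4 6)
```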
6:33:29
jackdaniel
I hoped that ";-)" would indicate a joke – removing a single symbol from the CL standard wouldn't give us anything except rendering a wide range of programs invalid
6:34:19
beach
blurgh: Can you define what you mean by a "list-based language" and a "tree-based language". The only thing that is "list based" in Common Lisp is the representation of source code, and that has absolutely no impact on the performance of the generated code.
6:35:44
aeth
blurgh: Lisp isn't LISt Processing these days. It has arrays, structs, CLOS objects, hash-tables, first class functions, etc.
6:36:24
aeth
If you primarily use lists, that might be why you think Lisp is slow. Lists are... slow in Lisp. (Doesn't really matter if it's done at compile time with macros, though. Still compiles way faster than C++)
6:36:52
aeth
You're not supposed to use lists for everything, which is why they're very straightforward without clever optimizations
6:37:12
jackdaniel
blurgh: regarding structural differences: http://paulgraham.com/diff.html ; while PG doesn't like CL anymore many of his essays are good (he is a good writer)
6:37:45
jackdaniel
"revenge of the nerds" has all these points listed in a more elaborate manner I think
6:38:13
aeth
I disagree. I liked his essays back in the day (2012 or so?) but I don't agree with many now.
6:41:56
blurgh
aeth: I know it has other features. Take a look at Refal. Between supercompilation and efficient term-rewriting, it's apparently always been very fast. It's something like a '60s Haskell that ended up on the wrong side of the Cold War.
6:42:37
beach
blurgh: So you are just not going to address the issues with your opinion, nor answer the question we asked, and just keep claiming that Common Lisp needs to be improved in order for compilers to be able to generate fast code?
6:43:29
aeth
blurgh: The core of CL is very efficient. Basic CL is just a bunch of thin macros on top of tagbody and go. It's... very close to how the hardware works.
6:44:15
aeth
loop obscures it a bit more than the other ways to iterate, but dotimes and do are very straightforward if you macroexpand them.
6:44:45
aeth
s/very close to how the hardware works/very close to how the assembly language pretends the hardware works/
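[editor's note: a rough hand-expansion of the point aeth is making. Real DOTIMES expansions vary by implementation, but they follow this tagbody/go shape; `count-up` is an illustrative name used to collect the iteration values instead of printing them:]

```lisp
;; Roughly what (dotimes (i n) ...) compiles down to: a counter,
;; a conditional jump out, a body, and a jump back to the top.
(defun count-up (n)
  (let ((result '()) (i 0))
    (tagbody
     loop-start
       (when (>= i n) (go loop-end))  ; exit test
       (push i result)                ; loop body
       (incf i)
       (go loop-start)                ; back-edge
     loop-end)
    (nreverse result)))

;; (count-up 3) => (0 1 2)
```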
6:46:57
blurgh
beach: What question? Why I'm asking this? I showed you 2 straight-up tests, and you said it was as fast as C. I said that it was within an order of magnitude (and that's still very good! It's akin to Java, which has had a lot more work put in) and only matches it with manual trial-and-error optimization.
6:47:24
beach
blurgh: Can you define what you mean by a "list-based language" and a "tree-based language".
6:48:27
cess11_
beach: They mean that building a list from both ends at the same time matters in program efficiency post-seventies, I think.
6:49:19
beach
cess11_: I am specifically asking about the LANGUAGE, i.e. what it means for a LANGUAGE to be list based or tree based.
6:49:19
aeth
blurgh: Optimizing a modern CL AOT-compiled implementation is pretty straightforward, actually. What fools the type inference is some built-in type-generic functions (not CLOS generic) for sequences and numbers like #'map and #'+ because for the rest, the compiler can usually infer that it's going to either be that type or an error (e.g. #'car or #'maphash)
6:49:48
aeth
blurgh: So when you use something like #'map or #'+, you're probably going to have to declare the type (or, more portably, use check-type) to make sure that the compiler has the information that it needs.
6:50:22
aeth
Oh, and arrays have an additional slowness of bounds-checking that can sometimes be avoided if the full type (which includes the length) is given.
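[editor's note: a sketch of the declaration-driven optimization aeth describes. With the full array type (including length) declared, an implementation like SBCL can open-code the float arithmetic and, at high speed settings, reduce or elide bounds checks; `sum-floats` is an illustrative name:]

```lisp
;; The (4) in the array type gives the compiler the length, so the
;; AREF calls inside the known-bounds loop need no runtime checks.
(defun sum-floats (v)
  (declare (type (simple-array double-float (4)) v)
           (optimize (speed 3) (safety 1)))
  (let ((sum 0d0))
    (declare (type double-float sum))
    (dotimes (i 4 sum)
      (incf sum (aref v i)))))
```

For example, `(sum-floats (make-array 4 :element-type 'double-float :initial-contents '(1d0 2d0 3d0 4d0)))` returns `10.0d0`.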
6:50:23
blurgh
beach: a tree-based language is one like Refal - the basic structure is a list which can be built from both ends and pattern-matched down to be reduced by partial evaluation.
6:51:32
blurgh
beach: basically, the compiler already knows the properties of whatever's in the tree beneath a root expression and can optimize from there. There's also the Lorax language (experimental, idk if you can find the paper) which does something similar to generate efficient code.
6:51:35
beach
blurgh: And what makes you think that Common Lisp is "list based" then? More specifically, why do you think the fact that it is "list based" has an impact on performance? Also, what makes you think that it is not possible to use such a data structure in Common Lisp, should that be required?
6:52:20
beach
blurgh: So now you are talking about the performance of the compiler? As opposed to the performance of the code generated by it?
6:53:38
blurgh
beach: In Lisp, you can have a list with a hashtable symbol, a tree symbol, and a graph symbol. Nevertheless, you still have to traverse it with car and cdr ultimately. It makes it much harder to optimize things away, like how many purely functional data structures can be built from finger trees and reduced to a common representation.
6:54:10
beach
blurgh: What on earth makes you think that one HAS TO program with lists in Common Lisp?
6:54:20
aeth
blurgh: https://gitlab.com/zombie-raptor/zombie-raptor/blob/3b9118cc853f5adfed691f50a8537b7687c2509b/util/util.lisp#L308-360
6:54:40
beach
blurgh: Do you seriously believe that the EXISTENCE of lists in Common Lisp determined the result of the language shootout?
6:55:43
aeth
You could probably make a typed cons (for trees) similarly out of that typed list macro fairly easily.
6:55:46
blurgh
beach: basically, alpha equivalence matters is all I'm saying. That allows a lot of funky optimization-by-substitution. Part of the reason why the fastest high-level languages are purely functional is because being purely functional lets you close the gap a little more.
6:56:22
aeth
blurgh: Purely functional programming languages are going to be *slower* unless you're dealing with threads, afaik.
6:56:36
beach
blurgh: No, that is not all you are saying. You make sweeping claims that you then are unable to support by evidence, and not even by reasoning.
6:57:06
aeth
blurgh: Or at least enough people didn't buy into functional programming until multithreading took off.
6:57:23
White_Flame
aeth: In theory, a functional language with a Sufficiently Advanced Compiler could convert things to mutating behind the scenes and achieve comparable speed
6:58:13
White_Flame
but really, those sorts of optimizations tend to happen at the application level, not the language level, so such a compiler concept is pretty out there
6:58:13
blurgh
aeth: Not true. Implicit parallelism is nice, but by that point, you might as well use C because parallel computations are all about C. The real killer is graph reduction (or closure reduction, which is what I've heard Haskell's STG uses). White_Flame: That's exactly my point. In fact, it's been done before in the '80s. It just required special hardware.