freenode/#lisp - IRC Chatlog
21:25:15
aeth
Am I being naive if I store an immutable graph like this? 1 <-> 2 <-> 3 and 1 <-> 4 would be #(2 4 1 3 2 1) and #2A((0 1) (2 3) (4 4) (5 5)) where the 1D array is the connections and the 2D array is essentially the key that says what range the (n+1)th element is in (since the graph here starts with 1)
21:29:41
aeth
I'm not using lists because of random access, e.g. I can look up an item (and there might be thousands) as (subseq connections (aref range (1- item) 0) (aref range (1- item) 1))
21:31:08
aeth
(Except subseq is probably unnecessary in every actual use because most sequence functions have a start and an end, which is why I'd be storing the start and end)
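aeth's two-array layout can be sketched like this; the names are hypothetical, and the second column holds exclusive end indices so they can feed the END parameter of SUBSEQ (or any sequence function) directly:

```lisp
;; Flat adjacency storage for the graph 1 <-> 2 <-> 3 plus 1 <-> 4,
;; assuming 1-based node numbering (a sketch; names are made up).
(defparameter *connections*
  (make-array 6 :element-type 'fixnum
                :initial-contents '(2 4 1 3 2 1)))

;; Row n-1 holds the start (inclusive) and end (exclusive) indices
;; of node n's neighbors within *CONNECTIONS*.
(defparameter *ranges*
  (make-array '(4 2) :element-type 'fixnum
                     :initial-contents '((0 2) (2 4) (4 5) (5 6))))

(defun neighbors (node)
  "Return a fresh vector of NODE's neighbors."
  (subseq *connections*
          (aref *ranges* (1- node) 0)
          (aref *ranges* (1- node) 1)))

;; (neighbors 1) => #(2 4), (neighbors 3) => #(2)
```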
22:33:22
grewal
aeth: why do you need two arrays instead of one? You can just store the nodes each node is connected to. Using your example, something like #(#(2 4) #(1 3) #(2) #(1))
22:37:28
vms14
(defclass meh () ((oh :accessor oh))) this won't let me push onto this slot unless I add :initform nil
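That is because PUSH expands into a read of its place followed by a write, so the slot must already be bound before the first push; an :initform is the usual fix (a minimal sketch):

```lisp
(defclass meh ()
  ((oh :accessor oh :initform nil)))  ; slot starts bound to NIL

;; Without the :initform, PUSH would signal an unbound-slot error,
;; because it reads (oh obj) before writing it.
(let ((obj (make-instance 'meh)))
  (push 1 (oh obj))
  (oh obj))  ; => (1)
```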
23:06:13
aeth
grewal: You can't, rows have to be the same length. I could pad it with -1s (or 0s, if I keep them starting at 1) but then I wouldn't know where they end ahead of time
23:06:39
aeth
grewal: It would also be better to use fake 2D arrays rather than actual 2D arrays, so sequence operations could be used with start/end.
23:15:15
grewal
aeth: I didn't use #2a. It's a vector of vectors, not a multi-dimensional array. Ultimately, the two approaches are probably the same under-the-hood, but your approach just seems a bit oblique
23:17:14
Bike
defstruct does not initialize to nil. it's undefined, which is worse than defclass, because if you read an uninitialized structure slot anything could happen
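For DEFSTRUCT, the safe habit is an explicit initial value on every slot, since reading an uninitialized structure slot has undefined consequences (a sketch; the struct and slot names are made up):

```lisp
(defstruct node
  (items '() :type list)           ; explicitly starts as the empty list
  (weight 0.0 :type single-float)) ; explicit, well-defined initial value

;; (node-items (make-node)) => () -- defined, unlike a bare ITEMS slot
```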
23:20:10
aeth
grewal: I think my solution scales better because two specialized arrays are going to be more efficient than a T array of specialized arrays because the GC will afaik have to iterate through the T array, but not the two specialized arrays.
23:32:25
White_Flame
aeth: more efficient would be to ensure the GC isn't triggered often, and use the most appropriate data structure regardless of its latency effect on GC
23:59:02
vms14
so the way is to try to control when the GC is triggered, and to minimize the GC's work as much as possible
23:59:43
aeth
vms14: of course you can't actually not allocate, so it's more choosing where to allocate... so preallocate
0:01:27
aeth
vms14: (defun foobar () (let ((foo (make-foo)) (bar (make-bar))) (do-some-fancy-game-loop foo bar)))
0:01:58
aeth
vms14: And the bindings in do-some-fancy-game-loop aren't allocating (well, probably aren't)
0:05:38
aeth
vms14: This is a very Java-oriented way of saying "reuse buffers" like White_Flame said earlier. https://en.wikipedia.org/wiki/Object_pool_pattern
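A minimal object-pool sketch in the spirit of the discussion; the names are hypothetical, and a real pool would also reset reused buffers and consider thread safety:

```lisp
(defvar *buffer-pool* '()
  "Buffers available for reuse, to avoid allocating inside the loop.")

(defun acquire-buffer ()
  "Reuse a pooled buffer if one exists, else allocate a fresh one."
  (or (pop *buffer-pool*)
      (make-array 1024 :element-type 'single-float
                       :initial-element 0.0)))

(defun release-buffer (buffer)
  "Return BUFFER to the pool for later reuse."
  (push buffer *buffer-pool*))
```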
0:29:42
aeth
Me? I've never heard of that before. The website link is dead. This appears to be the source. https://github.com/ilitirit/manardb
0:30:22
aeth
I know a lot of people are proud of being able to use old libraries in Common Lisp, but I'm not sure I'd use a database that hasn't been updated in 10 years.
0:31:56
aeth
There's a handful of things that are just inherently really hard to write: operating systems, web browsers, databases, office suites, IDEs (and most content editors in general that are fancier than a basic text editor), game engines, etc.
0:32:59
aeth
Databases are pretty important, too, because you don't want your data to be corrupted or lost.
0:56:24
White_Flame
my first thought was actually destructuring-bind, but you'd still need to name intermediate value holders
0:56:50
vms14
also I should use :initarg instead of that bunch of setfs, and it will be even better than using pop
1:01:47
rtypo
i also tried sqlite last week, what i do differently is run every query inside 'with-open-database'
1:03:57
rtypo
i think i'm just paranoid with managing connections, since i'm bad at using databases :D
1:06:10
vms14
but I'll start by making a simple forum first; the most difficult thing is keeping the data organized and knowing what kind of data I need
1:14:02
Nilby
I'm a terrible programmer, but that hasn't stopped me from writing 100k lines of semi-working Lisp.
1:14:29
vms14
I guess the best approach is to make a prototype and then write a new program taking that prototype as a reference
1:26:34
vms14
the comic is about programmers having bugs, and those bugs evolve and end up enslaving humanity; only lisp can save us
1:30:06
Nilby
After watching that video, I now realize I've been writing "Balance weasels on a rake".
1:54:05
equwal
So is there a way to generally reorder arguments to a function at compile time with a macro, based on the type?
2:01:42
Bike
if you really want to, you can usually get at the declared type information through semiportable mechanisms
2:51:27
no-defun-allowed
You could use a generic function and do something like (defmethod foo ((bar type) baz) (foo baz bar)), but that isn't "compile-time" and may cause fun call loops if baz is also type.
2:55:33
aeth
specialization-store does type-based dispatch at compile time IF there are type declarations. https://github.com/markcox80/specialization-store/
4:32:11
beach
equwal: There is a general rule in programming that you should use the most specific construct that has the desired effect, so instead of (= 0 ...) it is preferable to use (zerop ...).
4:34:09
beach
equwal: Normally, a compiler macro should return the original form if it can't alter it, so that the normal function is called instead.
4:38:36
equwal
I think it might not be possible to do this in general without strong typing. How can I return the original form by default if the original form has arguments in the wrong order? So when you send something like (group (list 1 2) (+ 1 1)), things break.
4:40:28
beach
Anyway, if you don't respect the compiler-macro protocol, you might as well put the logic in a function.
4:43:15
beach
That is why, if they detect a situation that they can't do anything about, they return the original form so that the function itself is called at run time.
4:44:57
beach
I am also puzzled by the fact that your compiler macro returns a form that calls GROUP-AUX.
4:45:33
beach
If you know your arguments at compile time, why don't you just compute the result at compile time rather than generating a form that will compute it at run time?
4:48:21
beach
equwal: Are you aware that you cannot use your GROUP function/compiler macro when the arguments are variables, i.e. you can't say (group x y)?
4:48:58
beach
oni-on-ion: That seems like a basic truth about programming to me. Every programmer should already know it.
4:50:04
equwal
I think compiler macros are meant for optimizing stuff at compile time when constraints are met, so I should follow your profound advice.
4:50:11
beach
equwal: They work at compile time, just like macros do. So your arguments N and SOURCE are unevaluated parameters to the form.
4:50:40
beach
equwal: If you have a call such as (GROUP X Y) then they will both be symbols, so the compiler macro will error.
4:51:13
beach
equwal: Therefore, your compiler macro works only when the arguments are known at compile time.
4:51:33
beach
So you might as well compute the entire result then, rather than generating code that computes it at run time.
5:14:06
equwal
Well, I have proven to myself that reordering arguments based on type data is way more difficult than I am willing to go for such a frivolous thing.
5:15:01
beach
equwal: Strong typing means that there is no way an object of a particular type can mistakenly be taken for an object of a different type at run time. Weak typing is the contrary. Static typing means that some of the types are checked at compile time. Dynamic typing means that the types are checked at run time.
5:15:45
beach
equwal: Yes, it breaks with (+ 1 1) because that is not a number. It is a list of three elements, a symbol and two numbers.
5:16:46
beach
equwal: What you can do in your compiler macro is to return a form that will check the types at run time, and call group-aux differently in each case. But then there is no point in having a compiler macro. You might as well put that logic in the GROUP function itself.
5:18:02
beach
equwal: I think you really need to contemplate what information is available at compile time and what information is only available at run time, and design your code accordingly. Macros and compiler macros deal with information available at compile time.
5:30:13
beach
equwal: The entire idea of reordering argument forms of a function at compile time is bogus. A programmer using your function will rightfully expect the evaluation order of arguments to be respected. Suppose you were able to do what you say, and then the programmer types (GROUP (1+ X) (MAKE-LIST X)).
5:30:46
beach
equwal: You definitely do not want to reorder those arguments, because that will alter the intentions of the programmer.
5:31:36
beach
So there is basically nothing you can do at compile time, other than for constant arguments.
5:34:46
beach
And that is what compiler macros are for. They can identify information available at compile time, and generate alternative code for the form. If no such information is available, then they should return the original form so that the associated function is called as usual at run time.
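The protocol beach describes can be sketched for the GROUP example from the discussion: rewrite only when both arguments are compile-time constants, otherwise return the original form so the ordinary function is called (EVAL on CONSTANTP arguments is a crude but serviceable trick for a sketch like this):

```lisp
(defun group (n source)
  "Collect the elements of SOURCE into sublists of length N."
  (loop while source
        collect (loop repeat n while source collect (pop source))))

(define-compiler-macro group (&whole form n source)
  (if (and (constantp n) (constantp source))
      ;; Both arguments known at compile time: compute the result now.
      `',(group (eval n) (eval source))
      ;; Otherwise return the original form unchanged, so the normal
      ;; function is called at run time, e.g. for (group x y).
      form))
```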
5:36:26
beach
p_l: The "programming" in "dynamic programming" is probably the same as the one in "linear programming", i.e. it has nothing to do with computer programming, but with establishing a "program" in the sense of a sequence of actions to be taken.
5:37:31
beach
p_l: But I agree that both those terms are confusing these days when the main use of the word "programming" is "writing computer programs".
6:01:42
equwal
It makes sense to me now. I can't reorder arbitrary arguments at compile time, not sure what I was thinking.
6:17:44
aeth
equwal: I think you could do this with the specialization-store library by having a specialization for ((a foo) (b bar) (c baz)) as well as ((a bar) (b foo) (c baz)) etc. generated by a macro
6:36:05
aeth
note that unless you have type declarations (so the types are known at compile time), libraries that use these macros (or macros you write yourself this way) will be slow
7:08:57
asarch
What does #+(or) mean (line #63)?: http://paste.scsys.co.uk/584262?ln=on&submit=Format+it%21
7:19:48
aeth
iirc it's because #+NIL would refer to an implementation called NIL, and not false. (or) is NIL (as in false), though
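Concretely, the feature expression (or) with no clauses is always false, so #+(or) makes the reader skip the next form; #+nil would instead test whether :NIL happens to be on *FEATURES*:

```lisp
#+(or) (format t "never read into the program~%")  ; always skipped
(format t "this form is kept~%")
```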
7:36:19
jackdaniel
it is a shame #; didn't make it into the standard as a way to comment out the next form. people usually complain that #+(or) is too many parens and they simply put #+nil
7:36:55
jackdaniel
ironic that they say that, given they hear the "too many parens" 'argument' over and over as to why "lisp is not a good language"
7:47:19
jackdaniel
sure, I have no problem with writing #+(or) myself, it just itches me to see #+nil on code written by others
8:21:34
jackdaniel
I'm not a crusader, hence never. I'm just pointing out that imho making semantic mistakes and arguing that "it works" is not the most reasonable course of action
8:22:10
jackdaniel
just like I would point out that it is not wise to name a variable number when it contains a string
8:29:05
Nilby
There was a bug in trivial-features on CLISP a while back where it pushed NIL onto *features*, and then all the #+nil-commented code suddenly activated.
9:03:39
lieven
anyways, lisp is no different than most other languages in that regard that it runs on an operating system written in another language
9:06:11
lieven
Diip: http://herpolhode.com/rob/utah2000.pdf # they don't take off because it's a shitload of work to make a new environment with all the tools the user expects
9:07:27
erkin
At this point, C is fast because we make it fast with compilers fine-tuned to all hell and back and hardware-level aids for performance.
9:08:03
lieven
one can even argue that the CPU is mostly emulating the ABI in firmware and is architecturally quite different
9:08:10
Diip
That is what I am asking: is there something about modern-day chips that makes using lisp harder on bare metal? i.e. lack of hardware-assisted garbage collection, what else
9:08:40
erkin
There are various factors, but yes, there's little research done to mitigate this problem to move away from this C-oriented platform.
9:08:57
lieven
no. but if you want to use any of the open source software you're going to have to have a posix layer and these days emulate X/Wayland/Gnome etc
9:09:35
pjb
Diip: But mostly, the problem is that processors don't attach types to the bits. The type depends on the microinstruction used (not even on the register).
9:11:00
lieven
old mainframe stuff like the BS2000 now run emulated on firmware on modern chips and it's fast enough
9:11:08
pjb
Diip: for example, 01000001110 may be interpreted as a character, code 65, while 01000001000 will be interpreted as a fixnum, value 65.
9:11:11
erkin
It'd be really hard to make a Lisp Machine that performs as well as an x86 chip due to the sheer amount of money, research, and manpower poured into x86 over the years. It's extremely complicated, especially since the addition of out-of-order execution.
9:11:39
erkin
IBM's POWER chips barely hold up to performance parity, and that's because they do OoO too.
9:11:47
pjb
Diip: then if you try to use a mulfix on 01000001110, a lisp machine processor would produce a trap to signal a type error.
9:11:57
MichaelRaskin
I would say the real problem with modern hardware is that all the devices have underdocumented protocols, and whatever documentation there is, it is false
9:12:04
pjb
Diip: on a normal processor, you're fucked: you've multiplied #\A by something, which is meaningless.
9:13:09
lieven
still, running genera emulated on alpha emulated on amd64 is still the fastest lispm in history :)