libera/#shirakumo - IRC Chatlog
Search
6:49:42
Colleen
<shinmera> |3b|: As for the structure, I want to get an answer to the question "which groups of objects may be involved in a collision with each other?" which we can approximate with the question "what groups of objects are there where each object within a group is within a distance of at least one other object within the group"
7:48:10
Colleen
<|3b|> ok, so regular grid would be either 1 group per item, or 1 group containing all. i think i read original question differently from what you intended, which is why i was confused
11:49:48
Colleen
<selwyn> how would you feel about trial source code being used as a prompt to an llm
11:54:08
hayley
"As a large language model, I have no feelings about using the Trial source code. However, it is important to consider ..."
12:27:26
Colleen
<shinmera> gingerale: Okey, things are integrated now and there's a very primitive test suite as well.
12:28:25
Colleen
<shinmera> Right now pretty much every container is required to also hold a hash table where they keep data about the object at last update/insert
12:28:54
Colleen
<shinmera> I wonder if it would be better to instead force users to supply a previous location to UPDATE so that this is no longer necessary.
12:29:22
Colleen
<shinmera> the disadvantage being that the user has more work and that changes in size aren't caught by this.
12:31:54
Colleen
<selwyn> theres the prospect of giving it the hyperspec and/or some libraries as a big prompt
12:31:57
Colleen
<shinmera> oh, so you mean: what do you think of the legality of using source code spit out by gpt.
12:32:35
Colleen
<selwyn> as in: the fact that it is your code, and there are some concerns about how these companies behave
12:34:08
Colleen
<selwyn> as i understand it there would be no training stage (yet) though they surely store the conversations
12:35:19
Colleen
<shinmera> My thoughts on this are: 1. microsoft and other "machine learning" capitalist companies belong burned to the ground 2. AI bros should be guillotined 3. what I want has very little bearing on what will happen, my code is probably already in copilot and other garbage 4. I question the legality of using the output of GPT for code that you publish, so I would rather not include any such code in Shirakumo repositories.
12:35:59
Colleen
<shinmera> That all said, if you're going to do it, I probably won't notice, and don't particularly care.
12:37:12
Colleen
<shinmera> I personally would be much more interested in a system that used "standard" voice recognition to do the code input, rather than hoping GPT will somehow magically make this method convenient.
12:39:09
Colleen
<shinmera> I personally just feel uncomfortable with the idea of a program basically doing the equivalent of students plagiarising wikipedia for you.
12:39:33
Colleen
<shinmera> So imo any code output by such things should be assumed to be plagiarised.
12:40:04
hayley
The university sent us a fair bit of advice on using large language models. We are told to cite e.g. ChatGPT and Bing as if they were private communication.
12:40:39
Colleen
<shinmera> selwyn: anything that doesn't try to be free form and instead is tightly governed by user input.
12:41:00
Colleen
<shinmera> voice recognition will also use "machine learning" methods, but it won't try to infer more than what you say.
12:41:32
Colleen
<shinmera> I think using the controller buttons and gestures you could do stuff like handle paren management to ease the input
12:42:33
Colleen
<selwyn> i wanted to create a command language of about 100 syllables to represent emacs commands, mostly paredit
12:42:57
hayley
I suggested to stylewarning to add a "Wadler, take the wheel" thing to Coalton, which just tries to come up with an expression of the right type. Not easy to implement, nor would the humour justify it though.
12:43:51
Colleen
<selwyn> llms allow for the possibility to say out loud 'computer put a cube in the corner' and it spits out '(trial:enter ...)'
12:45:14
hayley
Did ask GPT to write a commit message for me. Didn't use it, but it wasn't offensively wrong.
12:46:12
hayley
It also sketched out a shitty mark-sweep, and then reinvented prefetch-on-grey after I asked for prefetching and told it to prefetch earlier than just before tracing.
12:49:44
hayley
Confessed my benchmarking sins to the MMTk people today. Really need to defrag stuff; I couldn't get GPT to write a working copying GC.
12:52:00
hayley
There was some confusion as I forgot to say that I fixed the RNG and load balancing of my fuzz tester to make it deterministic. Not too keen on me running benchmarks in a virtual machine, but it wasn't easy to even get a VM, so.
13:04:01
hayley
There is also the classic incremental incrementally compacting garbage collection, if one incremental isn't incremental enough.
13:22:11
hayley
I only use the Gmail app out of all the Google ones, which is plain laziness on my part. Guess they can't really sell sbcl-devel.
13:56:19
Colleen
<shinmera> Error reporting is one of those things that could be improved quite drastically