OMG, someone on LW with longer timelines than me.
I intend to do the other stuff after finishing my PhD - though it's not guaranteed I'll follow through.
Curious where you are in your PhD, and if it's finished, whether you're aiming at bigger boosts.
No, because, like, for example, you can ask follow-up questions of a human, and they'll give outputs that result from thinking humanly, which includes processes LLMs currently can't do.
As I wrote, if you actually carefully review it, you will end up changing a lot of it.
It's already a dumpster fire, right? LLMs might be generating burning garbage, but if they do so more cheaply than the burning garbage generated by humans, then maybe it's still a win??
I mean, ok?
Yeah, I think uncritically reading a two-sentence summary-gloss from a medium- or low-traffic wiki page and regurgitating it without citing your source is comparably bad to covertly including LLM paragraphs.
As I mentioned elsewhere in the comments, the OP is centrally about good discourse in general, and LLMs are only the most obvious foil (and something I had a rant in me about).
And I could say "I asked Grok and didn't do any fact checking, but maybe it helps you to know that he said: <copypasta>" and the attribution/plagiarism concerns would be solved.
I mean, ok, but I might want to block you, because I might pretty easily come to believe that you aren't well-calibrated about when that's useful. I think it's fairly similar to googling something for me; it definitely COULD be helpful, but could also be annoying. Like, maybe you have that one friend/acquaintance who knows you've worked on "something involving AI" and sends you articles about, like, datacenter water usage or [insert thing only the slightest bit related to what you care about] or something, asking "So what about this??" You might care about them and not be rude or judgemental, but it's still them injecting a bit of noise, if you see what I mean.
I discuss something related in the post, and as I said, I agree that if in fact you check the LLM output really hard, in such a manner that you would actually change the text substantively on any of a dozen or a hundred points if the text were wrong, but you don't change anything because it's actually correct, then my objection is quantitatively lessened.
I do however think that there are a bunch of really obvious ways that my argument does go through. People have given some examples in the comments, e.g. the LLM could tell a story that's plausibly true, and happens to be actually true of some people, and some of those people generate that story with their LLM and post it. But I want to know who would generate that themselves without LLMs. (Also, again, in real life people would just present the LLM's testimony-lookalike text as though it is their testimony.) The issue with the GLUT is that it's a huge amount of info, hence immensely improbable to generate randomly. An issue here is that the text may have only a few bits of "relevant info", so it's not astronomically unlikely to generate a lookalike. Cf. the Monty Hall problem: 1/3 or 2/3 or something of participants find themselves in a game-state where they actually need to know the algorithm that the host follows!
(I'm not saying "give me the prompt so I can give it to an LLM"; I'm saying "just tell me the shorter raw version that you would have used as a prompt". Like, if you want to prompt "please write three paragraphs explaining why countries have to enforce borders", you could just send me "countries have to enforce borders". I don't need the LLM slop from that. (If you ask the LLM for concrete examples of when countries do and don't have to enforce borders, and then you curate and verify them and explain how they demonstrate some abstract thing, then that seems fine/good.))
everything you write about LLM text is true also of human-written text posted anonymously.
Certainly not. You could interrogate that person and they might respond if they want, which gets some of the benefits; you can see their life-connected models shining through; etc. But yes, there are many overlaps.
Is this LLM-generated? My eyes glazed over in about 3 seconds.
That's just wrong, but I don't want to prosecute that argument here. I hope you'll eventually realize that it's wrong.