TsviBT
Comments

LLM-generated text is not testimony
TsviBT · 6h

That's just wrong, but I don't want to prosecute that argument here. I hope you'll eventually realize that it's wrong.

Mourning a life without AI
TsviBT · 1d

OMG, someone on LW with longer timelines than me.

Overview of strong human intelligence amplification methods
TsviBT · 4d

I intend to do the other stuff after finishing my PhD - though it's not guaranteed I'll follow through.

Curious where you are in your PhD, and if it's finished, whether you're aiming at bigger boosts.

LLM-generated text is not testimony
TsviBT · 4d

No, because, like, for example, you can ask followup questions of a human, and they'll give outputs that come from the result of thinking humanly, which includes processes LLMs currently can't do.

LLM-generated text is not testimony
TsviBT · 5d

As I wrote, if you actually carefully review it, you will end up changing a lot of it.

LLM-generated text is not testimony
TsviBT · 5d

It's already a dumpster fire, right? LLMs might be generating burning garbage, but if they do so more cheaply than the burning garbage generated by humans, then maybe it's still a win??

I mean, ok?

Yeah, I think uncritically reading a 2-sentence summary-gloss from a medium- or low-traffic wiki page and regurgitating it without citing your source is comparably bad to covertly including LLM paragraphs.

As I mentioned elsewhere in the comments, the OP is centrally about good discourse in general, and LLMs are only the most obvious foil (and something I had a rant in me about).

And I could say "I asked Grok and didn't do any fact checking, but maybe it helps you to know that he said: <copypasta>" and the attribution/plagiarism concerns would be solved.

I mean, ok, but I might want to block you, because I might pretty easily come to believe that you aren't well-calibrated about when that's useful. I think it is fairly similar to googling something for me; it definitely COULD be helpful, but could also be annoying. Like, maybe you have that one friend / acquaintance who knows you've worked on "something involving AI" and sends you articles about, like, datacenter water usage or [insert thing only the slightest bit related to what you care about] or something, asking "So what about this??" and you might care about them and not be rude or judgemental but it is still them injecting a bit of noise, if you see what I mean.

LLM-generated text is not testimony
TsviBT · 5d

I discuss something related in the post, and as I said, I agree that if in fact you check the LLM output really hard, in such a manner that you would actually change the text substantively on any of a dozen or a hundred points if the text was wrong, but you don't change anything because it's actually correct, then my objection is quantitatively lessened.

I do, however, think that there's a bunch of really obvious ways that my argument does go through. People have given some examples in the comments, e.g. the LLM could tell a story that's plausibly true, and happens to be actually true of some people, and some of those people generate that story with their LLM and post it. But I want to know who would generate that themselves without LLMs. (Also, again, in real life people would just present the LLM's testimony-lookalike text as though it is their testimony.) The issue with the GLUT is that it's a huge amount of info, hence immensely improbable to generate randomly. An issue here is that text may have only a few bits of "relevant info", so it's not astronomically unlikely to generate a lookalike. Cf. the Monty Hall problem; 1/3 or 2/3 or something of participants find themselves in a game-state where they actually need to know the algorithm that the host follows!

LLM-generated text is not testimony
TsviBT · 6d

(I'm not saying "give me the prompt so I can give it to an LLM", I'm saying "just tell me the shorter raw version that you would have used as a prompt". Like if you want to prompt "please write three paragraphs explaining why countries have to enforce borders", you could just send me "countries have to enforce borders". I don't need the LLM slop from that. (If you ask the LLM for concrete examples of when countries do and don't have to enforce borders, and then you curate and verify them and explain how they demonstrate some abstract thing, then that seems fine/good.))

LLM-generated text is not testimony
TsviBT · 6d

everything you write about LLM text is true also of human-written text posted anonymously.

Certainly not. You could interrogate that person and they might respond if they want, which gets some of the benefits; you can see their life-connected models shining through; etc. But yes there are many overlaps.

LLM-generated text is not testimony
TsviBT · 6d

Is this LLM-generated? My eyes glazed over in about 3 seconds.

Posts

Escalation and perception (3d)
Meta-agentic Prisoner's Dilemmas (5d)
A prayer for engaging in conflict (7d)
LLM-generated text is not testimony (8d)
Do confident short timelines make sense? (4mo)
A regime-change power-vacuum conjecture about group belief (5mo)
Genomic emancipation (5mo)
Some reprogenetics-related projects you could help with (5mo)
Policy recommendations regarding reproductive technology (6mo)
Attend the 2025 Reproductive Frontiers Summit, June 10-12 (6mo)
Wikitag Contributions

Sinclair's Razor, 3 days ago (+18/-30)
Sinclair's Razor, 3 days ago (+836)
Tracking, 8 months ago (+191)
Tracking, 8 months ago (+2/-2)
Tracking, 8 months ago (+1571)
Joint probability distribution, 9 years ago (+850)
Square visualization of probabilities on two events, 9 years ago (+72)