LessWrong, is this rational? I wrote a reply to Elizabeth's open bid for answers to her research questions in good faith. She replies not with anything substantive, but to claim it is written with AI. I'm happy for her to disagree with my answer, but flagging a difference in style to suggest low quality is not what I thought this community was supposed to be about. Cynically, one could suggest she doesn't want to make good on her offer ... For the record, I wrote it late at night and ran the response through an LLM to improve readability for her benefit.
If you don't know what I'm talking about, see the most disliked comment on her post :)
Thank you for your comment. I will highlight specifically which parts are my opinion in the future.
habryka
I think beyond insightfulness, there is also a "groundedness" component that is different. LLM-written text either lies about personal experience or is completely devoid of references to personal experience. That usually makes the writing much less concrete and worse, or actively deceptive.
https://www.lesswrong.com/posts/bFvEpE4atK4pPEirx/correct-my-h5n1-research-usdreward
You probably should have said 'yes' when asked if it was AI-written.