I'm an admin of LessWrong. Here are a few things about me.
List of posts that seem promising to me, that are about to fall out of the annual review in 10 mins because only I have voted on them:
I'm nominating this! On skimming, this is a very readable dialogue with an AI about ethics, and lots of people seem to have found it valuable to read. I hope to give it a full read and review in the review period.
I appreciated reading this layout of a perspective for uncollaborative truth-seeking discourse, even though I disagree with many parts of it. I'll give it a positive vote here in the last two hours of the nominations period; I hope someone else gives it one too.
Tentative +9; I aim to read/re-read the whole sequence before the final vote and write a more thorough review.
My current quickly written sense of the sequence is that it is a high-effort, thoughtfully written attempt to help people with something like 'generating the true hypotheses' rather than 'evaluating the hypotheses that I already have'. Or 'how to do ontological updates well and on-purpose'.
Skimming the first few posts, I see an art here that I don't see other people talking about unprompted very much (as a general thing one can do well; of course people sometimes talk about having ontological updates), and that I have not seen written down in detail before. It's so awesome that someone has made a serious attempt.
I haven't read it all, but I have seen bits and pieces of the thinking and explanations (and been to a short workshop by Logan). I think this should definitely go through to the review phase, and probably some of the essays (or the sequence as a whole) should go into the top of the review.
Recently, I told a friend of mine that I'd been to a wedding. They asked how it was, and I said the couple clearly loved each other very much (as they made clear repeatedly in their speeches). My friend made a face that I read as some kind of displeasure, a bit of a grimace. Since then, I've been wondering why that was.
I think it's a common occurrence that people feel negatively about others openly expressing their love for something (a person, a piece of art, a place, etc.). I'm pretty sure I've had this feeling myself, but I don't know why.
I can think of two hypotheses.
Anyone got any other hypotheses, or think that they know the answer?
Are there Manifold markets yet on whether this was a suicide and whether it will turn out that this was due to any pressures relating to the OpenAI whistleblowing?
I don't think that Duncan tried to describe what everyone has agreed to; I think he tried to describe the ideal truth-seeking discussion norms, irrespective of this site's current discussion norms.
Added: I guess one can see here the algorithm he aimed to run, which had elements of both:
In other words, the guidelines are descriptive of good discourse that already exists; here I am attempting to convert them into prescriptions, with some wiggle room and some caveats.
+4. This doesn't offer a functional proposal, but it makes some important points about the situation and offers an interesting reframe, and I hope it gets built upon. Key paragraph:
In other words: from a libertarian perspective, it makes really quite a lot of sense (without compromising your libertarian ideals even one iota) to look at the AI developers and say "fucking stop (you are taking far too much risk with everyone else's lives; this is a form of theft until and unless you can pay all the people whose lives you're risking, enough to offset the risk)".
I see. I agree it weakens the discourse here, and agree that the people blocked were specifically people who have disagreements about the standards to aspire to in large group discourse. I am grateful that at least one of the people has engaged well elsewhere, and I have written a review encouraging people to vote positively on that post (I gave it +4). While I do think it's likely that some valid criticisms of content within the posts have been missed as a result of such silencing effects under Duncan's posts, I feel confident enough that there's a lot of valuable content that I still think it deserves to score highly in the review.
I gave my strongest hypothesis for why it looks to me that many, many people believe it's responsible to take down information that makes your org look bad. I don't think alternative stories have negligible probability, nor does what I wrote imply that, though it is logically consistent with that.
There are many widespread anti-informative behaviors that people engage in for poor reasons, like saying that their spouse is the best spouse in the world, telling customers that their business is the best business in the industry, or saying exclusively glowing things about people in reference letters; these should obviously be explained by the incentives on the person to present themselves in the best light. At the same time, it is respectful to a person, while in dialogue with them, to keep track of the version of them who is trying their best to have true beliefs and honestly inform others around them, in order to help them become that person (and to notice the delta between their current behavior and what they hopefully aspire to).
Seeing orgs in the self-identified-EA space take down information that makes them look bad is (to me) not that dissimilar to the other things I listed.
I think it's good to discuss norms about how appropriate it is to bring up cynical hypotheses about someone during a discussion in which they're present. In this case I think raising this hypothesis was worthwhile for the discussion, and I didn't cut off any way for the person in question to continue to show themselves to be broadly acting in good faith, so I think it went fine. Li replied to Habryka, and left a thoughtful pair of comments retracting and apologizing, which reflected well on them in my eyes.