I feel much less ok about ppl dissing OAI on their own blog posts on LW. I assume that if they knew ahead of time, they would have been much less likely to participate.
I find it hard to parse this second sentence. If who knew what ahead of time, they would be less likely to participate?
I think this means "I assume that if OpenAI staff had expected users to write insulting things in the comments, they might not have participated at all".
Gabriel makes a very good point: there is something of a tension between allowing Reign of Terror moderation and considering it a norm violation to request the deletion of comments for low quality.
(TBH, I was convinced that Reign of Terror would be a disaster, but it seems to be working out okay so far.)
(Context for the reader: Gabriel reached out to me a bit more than a year ago to ask me to delete a few comments on a post by Jacob Hilton, who was working at OpenAI at the time. I referenced this in my recent dialogue with Olivia, where I quoted an email I had sent to Eliezer raising some concerns about Conjecture, partially on the basis of that interaction. We ended up scheduling a dialogue to talk about that and related topics.)
Gabriel's principles for moderating spaces
LessWrong as a post-publication peer-reviewed journal
What are "insults"?
"Epistemic range of motion" as an important LW principle