The issue is that I have no idea where you're getting that hypothesis from. What have I written, anywhere, that makes you think I would disapprove of Alexei's comment?
The seventh guideline doesn't say that you shouldn't hypothesize about what other people believe
In accordance with the Eighth Guideline, I would like to revise the wording of my invocation of the Seventh Guideline in the grandparent: given our history of communication failures, I think your comments would be better if you try to avoid posing hypotheses (not "making claims") about what I believe in the absence of direct textual evidence, in accordance with the Seventh Guideline.
(But again, that's just my opinion about how I think you could write better comments; I don't consider it a "request.")
I'd be interested in a statement of what Zack-guideline the above "here's what I think he believes?" falls afoul of.
I still think your Seventh Guideline applies as written. All three of your examples of "ways a Seventh Guideline request might look" seem appropriate to me with some small adaptations for context (notwithstanding that I don't believe in "requests").
You wrote:
"wow, I support this way less than I otherwise would have, because your (hypothesized) straightforward diagnosis of what was going on in a large conflict over norms seems to me to be kind of petty" is contra both my norms and my understanding of Zack's preferred norms; unless I miss him entirely neither one of us wants LessWrong to be the kind of place where that sort of factor weighs very heavily in people's analysis.
The first example of a way a Seventh Guideline request might look says,
That's not what I wrote, though. Can you please engage with what I wrote?
I can't quite ask you to engage with what I wrote, because your hypothesis that I don't "want LessWrong to be the kind of place where that sort of factor weighs very heavily in people's analysis" bears no obvious resemblance to anything I've written, so it's not clear what part of my writing I should be directing you to read more carefully.
In fact, I don't even read the pettiness judgement as having weighed very heavily in Alexei's analysis! Alexei wrote, "Overall strong upvote from me, but I'm not doing it because [...]". I interpret this as saying that the pettiness of that section was enough of a detractor from the value of the post that he didn't feel like awarding a strong-upvote, which I regard as distinct from weighing heavily in his analysis of the contents of the rest of the post themselves (as contrasted to his analysis of whether to strong-upvote). If it looks like Dante was motivated to write The Inferno in order to have a short section at the end depicting his enemies suffering divine punishment, that's definitely something Dante scholars should be allowed to notice and criticize, without that weighing heavily in their analysis of the preceding 4000 lines: there's a lot of stuff in those 4000 lines to be analyzed, separately from the fact that it's all building up to the enemy torture scene. (I'm doing a decent amount of interpretation here; if Alexei happens to make the poor time-allocation decision of reading this subthread, he is encouraged to invoke the Seventh Guideline against this paragraph.)
The second example of a way a Seventh Guideline request might look says,
Er, you seem to be putting a lot of words in my mouth.
I think this applies? (A previous revision of this comment said "This applies straightforwardly", but maybe you think the "my understanding"/"unless I miss him" disclaimers exclude the possibility of "putting words in someone's mouth"?)
The third and final example of a way a Seventh Guideline request might look says,
I feel like I'm being asked to defend a position I haven't taken. Can you point at what I said that made you think I think X?
"Asked to defend" doesn't apply, but the question does. Can you point at what I said that made you think that I think that Alexei's comment weighs the pettiness judgement very heavily in his analysis and that I don't want Less Wrong to be the kind of place?
After being prompted by this thread and thinking for a minute, I was able to come up with a reason I should arguably disapprove of Alexei's comment: that pettiness is not intellectually substantive (the section is correct or not separately from whether it's a petty thing to point out) and letting a pettiness assessment flip the decision of whether to upvote makes karma scores less useful. I don't feel that strongly about this and wouldn't have come up with it without prompting because I'm not a karma-grubber: I think it's, well, petty to complain about someone's reasons for downvoting or withholding an upvote.
Can you name any way to solve [chess but with rooks and bishops not being able to move more than four squares at a time] without RL (or something functionally equivalent to RL)?
This isn't even hard. Just take a pre-2017 chess engine and edit the rules code so that rooks and bishops can only move four squares at a time. You're probably already done: the core minimax search still works, α–β pruning still works, quiescence still works, &c. To be fair, the heuristic evaluation function won't be correct, but you could just ... make bishops and rooks be respectively worth 2.5 and 3.5 points instead of the traditional 3 and 5? Even if my guess at those point values is wrong, that should still be easily superhuman with 2017 algorithms on 2017 hardware. (Stockfish didn't incorporate neural networks until 2020.)
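For concreteness, here's a minimal sketch of those two changes in Python (hypothetical code, not from any actual engine: the board helpers `step`, `is_empty`, and `is_enemy` and the point values are illustrative assumptions): cap the slider range in move generation and nudge the material values.

```python
# Hypothetical sketch: restrict rook/bishop range and adjust material values.
# The board API (step, is_empty, is_enemy) is assumed for illustration only.

MAX_SLIDE = 4  # rooks and bishops may move at most four squares per move

# Guessed values for the variant (traditional: bishop = 3, rook = 5).
PIECE_VALUES = {"P": 1.0, "N": 3.0, "B": 2.5, "R": 3.5, "Q": 9.0}

def slider_moves(board, square, directions, max_steps=MAX_SLIDE):
    """Generate moves for a sliding piece, stopping after max_steps squares
    in each direction (or earlier, at the board edge or a blocking piece)."""
    moves = []
    for direction in directions:
        target = square
        for _ in range(max_steps):
            target = board.step(target, direction)  # next square that way, or None off the board
            if target is None:
                break
            if board.is_empty(target):
                moves.append((square, target))
                continue
            if board.is_enemy(target):  # capture ends the slide
                moves.append((square, target))
            break
    return moves
```

The search itself (minimax, α–β, quiescence) just consumes the move list and the evaluation, so in principle nothing else needs to change.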
my understanding of Zack's preferred norms; unless I miss him entirely neither one of us wants LessWrong to be the kind of place where that sort of factor weighs very heavily in people's analysis.
Um, I strong-upvoted and strong-agreement-voted Alexei's comment.
Given our history of communication failures, I think your comments would be better if you try to avoid making claims about what I believe in the absence of direct textual evidence, in accordance with the Seventh Guideline.
But, crucially, that's me saying I think your comments would be better comments if you did that. I'm not saying you shouldn't try to extrapolate my views if you want to. You don't owe me anything!
Zack's [...] request
Clarification: I didn't think of that as a "request." I was saying that according to my standards, I would be embarrassed to publish criticism of someone that didn't quote or link to their writings, and that it seemed to me to be in tension with your condemnations of strawmanning.
I don't think of that as a request that you change it, because in general, I don't think I have "jurisdiction" over other people's writing. If someone says something I think is wrong, my response is to write my own comment or post explaining why I think it's wrong (or perhaps mention it in person at Less Online), which they can respond or not-respond to as they see fit. You don't owe me anything!
The Rick and Morty analysis in Act III, Scene I is great. I guess the "To be fair, you have to have a very high IQ [...]" meme is for real!
It's good news for learning, not necessarily good news for jobs. If you care about creating "teaching" make-work jobs, but don't care whether people know things, then it's bad news.
Specifically, the idea is that AI going well for humans would require a detailed theory of how to encode human values in a form suitable for machine optimization, and the relevance of deep learning is that Yudkowsky and Soares think that deep learning is on track to provide the superhuman optimization without the theory of values. You're correct to note that this is a stance according to which "artificial life is by default bad, dangerous, or disvaluable," but I think the way you contrast it with the claim that "biological life is by default good or preferable" is getting the nuances slightly wrong: independently-evolved biological aliens with superior intelligence would also be dangerous for broadly similar reasons.
I preordered my copy.
Something about the tone of this announcement feels very wrong, though. You cite Rob Bensinger and other MIRI staff being impressed. But obviously, those people are highly selected for already agreeing with you! How much did you engage with skeptical and informed prereaders? (I'm imagining people in the x-risk-reduction social network who are knowledgeable about AI, acknowledge the obvious bare-bones case for extinction risk, but aren't sold on the literal stated-with-certainty headline claim, "If anyone builds it, everyone dies.")
If you haven't already done so, is there still time to solicit feedback from such people and revise the text? (Sorry if the question sounds condescending, but the tone of the announcement really worries me. It would be insane not to commission red team prereaders, but if you did, then the announcement should be talking about the red team's reaction, not Rob's!)
Low-IQ voters can't identify good policies or wise politicians; democracy favors political actors who can successfully propagandize and mobilize the largest number of people, which might not correspond to good governance. A political system with non-democratic elements that offers more formalized control to actors with greater competence or better incentives might be able to choose better policies.
I say "non-democratic elements" because it doesn't have to be a strict binary between perfect democracy and perfect dictatorship. Consider, e.g., how the indirect election of U.S. Senators before the 17th Amendment was originally intended to make the Senate a more deliberative body by insulating it from the public.
(Maybe that's all wrong, but you asked "what's the model", and this is an example model of why someone might be skeptical of democracy for pro-social structural reasons rather than just personally wanting their guy to be dictator.)
Sen. Markey of Massachusetts has issued a press release condemning the proposed moratorium and expressing intent to raise a point of order: