LESSWRONG
Liron
Comments, sorted by newest
7 · Liron's Shortform · 5y · 4
Statement of Support for "If Anyone Builds It, Everyone Dies"
Liron · 3d · 2 · 0

Thanks. The reactions to such a post would constitute a stronger common knowledge signal of community agreement with the book (to the degree that such agreement is in fact present in the community).

I wonder if it would be better to make the agree-voting anonymous (like LW post voting) or with people's names attached to their votes (like react-voting).

I'm sure this is going too far for you, but I also personally wish LW could go even further toward turning a sufficient amount of mutual support expressed in that form (if it turns out to exist) into a frontpage that actually looks like what most humans expect a supportive front page around a big event to look like (moreso than having a banner mentioning it and discussion mentioning it).

Statement of Support for "If Anyone Builds It, Everyone Dies"
Liron · 4d · 2 · 0

> nor is my argument even "mutual knowledge is bad".

For example, I really like the LessWrong surveys! I take those every year!

 

What's the minimally modified version of posting this "Statement of Support for IABIED" you'd feel good about? Presumably the upper bound for your desired level of modification would be if we included a yearly survey question about whether people agree with the quoted central claim from the book?

Statement of Support for "If Anyone Builds It, Everyone Dies"
Liron · 4d · 2 · 0

Again, the separate tweet about LW crab-bucketing in my Twitter thread wasn't meant as a response to you in this LW thread.

I agree that "room for disagreement does not imply any disagreement is valid", and am not seeing anything left to respond to on that point.

Statement of Support for "If Anyone Builds It, Everyone Dies"
Liron · 4d · 3 · 0

Ah yeah that'd probably be better

Statement of Support for "If Anyone Builds It, Everyone Dies"
Liron · 5d · 2 · 0

What's the issue with my Twitter post? It just says I see your comment as representative of many LWers, and the same thing I said in my previous reply, that aggregating people's belief-states into mutual knowledge is actually part of "thinking" rather than "fighting".

I find the criticism for my quality of engagement in this thread distasteful, as I've provided substantive object-level engagement with each of your comments so far. I could equally criticize you for bringing up multiple sub-points per post that leave me no way to respond in a time-efficient way without being called "minimal", but I won't, because I don't see either of our behaviors so far as breaking out of the boundaries of productive LessWrong discourse. My claim about this community's "crab-bucketing" was a separate tweet not intended as a reply to you.

> I have argued both that your argument for why "The goal of LessWrong [...] is to lead the world on having correct opinions about important topics" is false

Ok, I'll pick this sub-argument to expand on. You correctly point out that what I wrote does not text-match the "What LessWrong is about" section. My argument would be that this cited quote:

> [Aspiring] rationalists should win [at life, their goals, etc]. You know a rationalist because they're sitting atop a pile of utility. – Rationality is systematized winning

as well as Eliezer's post "Something to protect", together imply that a community that practices rationality ought to somehow optimize the causal connection between its practice of rationality and the impact that practice has.

This obviously leaves room for people to have disagreeing interpretations of what LessWrong ought to do, as you and I currently do.

Statement of Support for "If Anyone Builds It, Everyone Dies"
Liron · 6d · 0 · 0

I'm happy to agree on the crux that if one accepts “the only people who care what LessWrongers have to say are other LessWrongers” (which I currently don't), then that would weaken the case for mutual knowledge — I would say by about half. The other half of my claim is that building mutual knowledge benefits other LessWrongers.

Statement of Support for "If Anyone Builds It, Everyone Dies"
Liron · 6d · 0 · 0

> The only people who care what LessWrongers have to say are other LessWrongers!

I disagree with that premise. The goal of LessWrong, as I understand it, is to lead the world on having correct opinions about important topics. I would never assume away the possibility of that goal.

Statement of Support for "If Anyone Builds It, Everyone Dies"
Liron · 8d · 1 · 0

I'm open to other shorthands.

Statement of Support for "If Anyone Builds It, Everyone Dies"
Liron · 8d · 5 · 0

If you don't know whether the book's thesis being “all-too-plausibly” or “not all-too-plausibly” right describes your position better, you can just go ahead and not count yourself as a supporter of IABIED (or flip a coin, or don't participate in the aggregation effort). The mutual knowledge I'm hoping to build is among people who don't see this as a gray-area question, because I think that's already a pretty high fraction (maybe a majority) of LWers.

Statement of Support for "If Anyone Builds It, Everyone Dies"
Liron · 9d · 4 · 3

Oh ok, but I'm just using it as a shorthand for the longer statement I wrote right under it, which I guess you agree is clear.

65 · Statement of Support for "If Anyone Builds It, Everyone Dies" · 9d · 34
91 · Interview with Eliezer Yudkowsky on Rationality and Systematic Misunderstanding of AI Alignment · 17d · 21
46 · Interview with Steven Byrnes on Brain-like AGI, Foom & Doom, and Solving Technical Alignment · 2mo · 1
29 · Interview with Carl Feynman on Imminent AI Existential Risk · 3mo · 1
30 · Jim Babcock's Mainline Doom Scenario: Human-Level AI Can't Control Its Successor · 5mo · 4
43 · Practicing Bayesian Epistemology with "Two Boys" Probability Puzzles · 9mo · 14
5 · Is P(Doom) Meaningful? Bayesian vs. Popperian Epistemology Debate · 11mo · 1
46 · Robin Hanson AI X-Risk Debate — Highlights and Analysis · 1y · 7
41 · Robin Hanson & Liron Shapira Debate AI X-Risk · 1y · 4
9 · Pausing AI is Positive Expected Value · 2y · 2