LESSWRONG
Liron
Karma: 1458

Comments
I ate bear fat with honey and salt flakes, to prove a point
Liron · 7h · 44

> The fact still stands that ice cream is what we mass produce and send to grocery stores.

Yeah, I guess this exact observation is critical to making Eliezer's analogy accurate.

IMO "predicting that bear fat with honey and salt tastes good" is analogous to "predicting that harnessing a star's power will be an optimization target" — something we probably can successfully do.

And "predicting bear fat (or some kind of rendered animal fat) with honey and salt will be a popular treat" - the thing we couldn't have done a-priori - is analogous to "predicting solar-to-electricity generator panels will be a popular fixture on many planets" (since the details probably will turn out to have some unpredictable twists), and also to "predicting that making humans satisfied with outcomes will be an optimization target for AIs in the production environment as a result of their training".

I think this analogy is probably right, but the sense in which it's right seems sufficiently non-obvious/detailed/finicky that I don't think we can expect most people to get it?

Plus IMO it further undermines the pedagogical value of this example to observe that a drinkable form of ice cream (shakes) is also popular, plus there's gelato / frozen yogurt / soft serve, and then thick sweet yogurts and popsicles... it's a pretty continuous treat-fitness landscape.

I do think Eliezer is importantly right that the exact peak market-winning point in this landscape would be hard to predict a priori. But is the hardness also explained by the peak being dependent on chaotic historical/cultural forces?

And that's why I personally don't bring up the bear fat thing in my AI danger explanations.

Reasons against donating to Lightcone Infrastructure
Liron · 2d · 6653

Seems like the rapid-fire nature of an InkHaven writing sprint is a poor fit for a public post under a personally charged summary bullet like “Oliver puts personal conflict ahead of shared goals”.

High-quality discourse means making an effort to give people the benefit of the doubt when making claims about their character. It’s worth taking time to carefully follow our rationalist norms of epistemic rigor, productive discourse, and personal charity.

I’d expect a high-evidence post about a very non-consensus topic like this to start out in a more norm-calibrated and self-aware epistemic tone, e.g. “I have concerns about Oliver’s decisionmaking as leader of Lightcone based on a pattern of incidents I’ve witnessed in his personal conflicts (detailed below)”.

Reasons against donating to Lightcone Infrastructure
Liron · 2d · 11-6

Maybe Lightcone Infrastructure can just allow earmarking donations for LessWrong, if enough people care about that criticism.

Statement of Support for "If Anyone Builds It, Everyone Dies"
Liron · 1mo · 20

Thanks. The reactions to such a post would constitute a stronger common knowledge signal of community agreement with the book (to the degree that such agreement is in fact present in the community).

I wonder if it would be better to make the agree-voting anonymous (like LW post voting) or to have people's names attached to their votes (like react-voting).

I'm sure this is going too far for you, but I also personally wish LW could go even further: turning a sufficient amount of mutual support expressed in that form (if it turns out to exist) into a front page that actually looks like what most humans expect a supportive front page around a big event to look like, more so than a banner and some discussion mentioning it.

Statement of Support for "If Anyone Builds It, Everyone Dies"
Liron · 1mo · 20

> nor is my argument even "mutual knowledge is bad".

For example, I really like the LessWrong surveys! I take those every year!

 

What's the minimally modified version of posting this "Statement of Support for IABIED" you'd feel good about? Presumably the upper bound for your desired level of modification would be if we included a yearly survey question about whether people agree with the quoted central claim from the book?

Statement of Support for "If Anyone Builds It, Everyone Dies"
Liron · 1mo · 20

Again, the separate tweet about LW crab-bucketing in my Twitter thread wasn't meant as a response to you in this LW thread.

I agree that "room for disagreement does not imply any disagreement is valid", and am not seeing anything left to respond to on that point.

Statement of Support for "If Anyone Builds It, Everyone Dies"
Liron · 1mo · 30

Ah yeah that'd probably be better

Statement of Support for "If Anyone Builds It, Everyone Dies"
Liron · 1mo · 20

What's the issue with my Twitter post? It just says I see your comment as representative of many LWers, and the same thing I said in my previous reply, that aggregating people's belief-states into mutual knowledge is actually part of "thinking" rather than "fighting".

I find the criticism of my quality of engagement in this thread distasteful, as I've provided substantive object-level engagement with each of your comments so far. I could equally criticize you for bringing up multiple sub-points per post that leave me no way to respond time-efficiently without being called "minimal", but I won't, because I don't see either of our behaviors so far as falling outside the boundaries of productive LessWrong discourse. My claim about this community's "crab-bucketing" was a separate tweet not intended as a reply to you.

> I have argued both that your argument for why "The goal of LessWrong [...] is to lead the world on having correct opinions about important topics" is false

Ok, I'll pick this sub-argument to expand on. You correctly point out that what I wrote does not text-match the "What LessWrong is about" section. My argument would be that this cited quote:

> [Aspiring] rationalists should win [at life, their goals, etc]. You know a rationalist because they're sitting atop a pile of utility. – Rationality is systematized winning

This, together with Eliezer's post about "Something to protect", implies that a community that practices rationality ought to somehow optimize the causal connection between its practice of rationality and the impact that practice has.

This obviously leaves room for people to have disagreeing interpretations of what LessWrong ought to do, as you and I currently do.

Statement of Support for "If Anyone Builds It, Everyone Dies"
Liron · 1mo · 00

I'm happy to agree on the crux that if one accepts “the only people who care what LessWrongers have to say are other LessWrongers” (which I currently don't), then that would weaken the case for mutual knowledge — I would say by about half. The other half of my claim is that building mutual knowledge benefits other LessWrongers.

Statement of Support for "If Anyone Builds It, Everyone Dies"
Liron · 1mo · 00

> The only people who care what LessWrongers have to say are other LessWrongers!

 

I disagree with that premise. The goal of LessWrong, as I understand it, is to lead the world on having correct opinions about important topics. I would never assume away the possibility of that goal.

Posts

Liron's Shortform · 7 karma · 5y · 4 comments
Statement of Support for "If Anyone Builds It, Everyone Dies" · 71 karma · 1mo · 34 comments
Interview with Eliezer Yudkowsky on Rationality and Systematic Misunderstanding of AI Alignment · 93 karma · 2mo · 21 comments
Interview with Steven Byrnes on Brain-like AGI, Foom & Doom, and Solving Technical Alignment · 48 karma · 3mo · 1 comment
Interview with Carl Feynman on Imminent AI Existential Risk · 31 karma · 4mo · 2 comments
Jim Babcock's Mainline Doom Scenario: Human-Level AI Can't Control Its Successor · 30 karma · 6mo · 4 comments
Practicing Bayesian Epistemology with "Two Boys" Probability Puzzles · 43 karma · 10mo · 14 comments
Is P(Doom) Meaningful? Bayesian vs. Popperian Epistemology Debate · 5 karma · 1y · 1 comment
Robin Hanson AI X-Risk Debate — Highlights and Analysis · 46 karma · 1y · 7 comments
Robin Hanson & Liron Shapira Debate AI X-Risk · 41 karma · 1y · 4 comments
Pausing AI is Positive Expected Value · 9 karma · 2y · 2 comments