All of j_timeberlake's Comments + Replies

What are you specifically planning to accomplish?

In a post-ASI world, the assumption that returns on investment capital are honored by society is basically gone. Like the last round of a very long iterated prisoner's dilemma, there's no longer a need to Cooperate. There's still time between now and then to invest, but the generic "more long-term capital = good" mindset seems insufficient without an exit strategy or final use case.

Personally, I'm trying to balance various risks regarding the choppy years right before ASI, and also maximize charitable outcomes while I still have some agency in this world.

1Mckiev
Can you elaborate on “balancing risks regarding the choppy years”?

It's not acceptable to him, so he's trying to manipulate people into thinking existential risk is approaching 100% when it clearly isn't.  He pretends there aren't obvious reasons AI would keep us alive, and also pretends the Grabby Alien Hypothesis is fact (so people think alien intervention is basically impossible), and also pretends there aren't probably sun-sized unknown-unknowns in play here.

If it weren't so transparent, I'd appreciate that it could actually trick the world into caring more about AI safety, but since it's so transparent that even I can see through it, it's not going to trick anyone smart enough to matter.

5faul_sname
As a concrete note on this, Yudkowsky has a Manifold market, "If Artificial General Intelligence has an okay outcome, what will be the reason?" So Yudkowsky is not exactly shy about expressing his opinion that outcomes in which humanity is left alive but with only crumbs on the universal scale are not acceptable to him.
RobertM128

Pascal's wager is Pascal's wager, no matter what box you put it in.  You could try to rescue it by directly arguing that we should expect a greater measure of "entities with resources that they are willing to acausally trade for things like humanity continuing to exist" than of entities with the opposite preferences; I haven't seen a rigorous case for that, though it seems possible.  But even that isn't sufficient: you need the expected measure of entities with that preference to be large enough that dealing with the transaction costs/uncertainty of acausally trading at all makes sense.  And that seems like a much harder case to make.

I'm sorry, I read the tone of it as ruder than it was intended.

[Rogan pivots to talking about aliens for a while, which I have no interest in and do not believe is a hypothesis worth privileging. I point you to (and endorse) the bets on this that many LessWrongers have made of up to $150k against the hypothesis.]

This reeks of soldier mindset: instead of just ignoring that part of the transcript, you felt the need to seek validation for your opposing opinion by telling us what to think in an unrelated section.  The readers can think for themselves and do not need your help to do so.

6Ben Pace
Don't think I agree with your psychological narrative (I was writing fast and felt some desire to justify why I cut a large chunk of dialogue). But I agree it's not important to include, and I've moved it to a footnote.

This is why I'm expecting an international project for safe AI.  The US government isn't going to leave powerful AI in the hands of Altman or Google, and the rest of the world isn't going to sit idly by while the USA becomes the sole AGI powerhouse.

An international project to create utopian AI is the only path I can imagine which avoids MAD.  If there's a better plan, I haven't heard it.

2Seth Herd
This describes why I want an international consortium to work on AGI. I'm afraid I don't expect it as a likely outcome. It's the sensible thing to do, but governments aren't that great at doing that, let alone working together, on a relatively short time frame. I do think this is probably what we should be arguing and advocating for.

If this doesn't happen, I don't think we even get a MAD standoff; with two or more parties holding RSI-capable AGI, it's more like a non-iterated prisoner's dilemma: whoever shoots first wins it all. That's even worse. But that scenario hasn't gotten nearly enough analysis, so I'm not sure. Nonetheless,