All of rahulxyz's Comments + Replies

There don't seem to be many surveys of the general population on doom-type scenarios. Most of them seem to focus on bias/weapons-type scenarios. You could look at something like Metaculus, but I don't think that's representative of the general population.

Here's a breakdown of AI researchers: https://aiimpacts.org/2022-expert-survey-on-progress-in-ai/ (median/mean probability of extinction is 5%/14%)

US Public: https://governanceai.github.io/US-Public-Opinion-Report-Jan-2019/general-attitudes-toward-ai.html (12% of Americans think it will be "extremely bad, i.e. extin... (read more)

Which is funny, because there is at least one situation where Robin reasons from first principles instead of taking the outside view (cryonics comes to mind). I'm not sure why he really doesn't want to go through the first-principles arguments for AGI.

GPT-2 does not - probably, very probably, but of course nobody on Earth knows what's actually going on in there - does not in itself do something that amounts to checking possible pathways through time/events/causality/environment to end up in a preferred destination class despite variation in where it starts out.

A blender may be very good at blending apples; that doesn't mean it has a goal of blending apples.

A blender that spit out oranges as unsatisfactory, pushed itself off the kitchen counter, stuck wires into electrical sockets in order to burn open y

... (read more)
1quetzal_rainbow
To make a hyperbole: a very good random number generator can sometimes output numbers corresponding to consequentialist plans, but it's not very useful as a consequentialist. Lowering the level of hyperbole: LLMs trained to a superhuman level can produce consequentialist plans, but they can also produce many non-consequentialist, useless plans. If you want one to reliably make good plans (better than a human's), you should apply some optimization pressure, like RLHF.
1robm
There's a difference between "what would you do to blend apples" and "what would you do to unbox an AGI". It's not clear to me whether it's just a difference of degree or something deeper.

Small world, I guess :) I knew I'd heard this type of argument before, but I couldn't remember its name.

So it seems like the grabby aliens model contradicts the doomsday argument unless one of these is true:

  • We live in a "grabby" universe, but one with few or no sentient beings long-term?
  • The reference classes for the two arguments are somehow different (as discussed above).

Thanks for the great writeup (and the video). I think I finally understand the gist of the argument now.

The argument seems to raise another interesting question about the grabby aliens part. 

He's using the grabby aliens hypothesis to explain away the model's low probability of us appearing early (and I presume we're one of these grabby aliens). But this leads to a similar problem: Robin Hanson (or anyone reading this) has a very low probability of appearing this early among all the humans who will ever exist.

This low probability would also require a si... (read more)
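(For concreteness, here is a minimal sketch of the standard self-sampling calculation behind this worry; the specific numbers below are illustrative assumptions, not figures from the comment.) Under self-sampling, if $N$ humans ever exist and you are equally likely to be any one of them, then

$$P(\text{birth rank} \le r \mid N) = \frac{r}{N}.$$

With a birth rank of roughly $10^{11}$ (about the number of humans born so far), being this early is very unlikely if humanity becomes grabby and $N$ ends up enormous (say $10^{18}$), but unsurprising if $N$ stays within a few multiples of $10^{11}$. That is the doomsday-style tension the reply below points at.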

3Writer
You've rediscovered the doomsday argument! Fun fact: according to Wikipedia, this argument was first formally proposed by Brandon Carter, the author of the hard-steps model. He has also given the anthropic principle its name. Edit: note that us not becoming grabby doesn't contradict the model; there's a chance that we won't. Plus, the model tells us that hearing alien messages or discovering alien ruins would be terrible news in that regard. I'll explain why in the next part.