Comments
tup99

How many things could reasonably have a p(doom) > 0.01? Not very many. Therefore your worry about me "neurotically obsessing over tons of things" is unfounded. I promise I won't :) If my post caused you to think that, then I apologize; I have misstated my argument.

tup99

A lot of your responses make you sound like you're more interested in arguing and being contrarian than in seeking the truth with us. This one exemplifies it, but it's a general pattern in the tone of your responses. It'd be nice if you came across as more truth-seeking than argument-seeking.

tup99

Well, of course there is something different: the p(doom), which is based on the opinions of a lot of people whom I consider smart. That strongly distinguishes it from just about every other concept.

tup99

This was the most compelling part of their post for me:

"You are correct about the arguments for doom being either incomplete or bad. But the arguments for survival are equally incomplete and bad."

And you really don't seem to have taken it to heart. You're demanding that doomers provide you with a good argument. Well, I demand that you provide me with a good argument!

More seriously: we need to weigh the doom-evidence and the non-doom-evidence against each other. But you seem to believe that we only need to look at the doom-evidence, and that if it's not very good, then p(doom) should be low. That's wrong -- you don't acknowledge that the non-doom-evidence is also not very good. In other words, there's a ton of uncertainty.

If you give me a list of "100 things that make me nervous", I can just as easily give you a list of "100 things that make me optimistic".

Then it would be a lot more logical for your p(doom) to be 0.5 rather than 0.02-0.2!
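
To make the arithmetic explicit (a minimal Bayesian sketch, on the assumption that the evidence really is equally weak in both directions): in odds form,

$$\frac{P(\text{doom}\mid E)}{P(\neg\text{doom}\mid E)} \;=\; \frac{P(E\mid \text{doom})}{P(E\mid \neg\text{doom})} \cdot \frac{P(\text{doom})}{P(\neg\text{doom})}.$$

If the likelihood ratio is roughly 1 (the evidence favors neither side) and the prior odds are 1:1 (symmetric ignorance), then the posterior odds stay near 1:1, i.e. p(doom) stays near 0.5 rather than dropping to 0.02-0.2.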

tup99

"I (on average) expect to be treated about as well by our new AGI overlords as I am treated by the current batch of rulers."

...

"By doom I mean the universe gets populated by AI with no moral worth (e.g. paperclippers)."

Well, at least we've unearthed the reasons that your p(doom) differs!

Most people do not expect #1 (unless we solve alignment), and have a broader definition of #2. I certainly do.

tup99

Agreed. Let's not lose sight of the fact that 2-20% means it's still the most important thing in the world, in my view.

tup99

I feel like it would be beneficial to add another sentence or two to the “goal” section, because I’m not at all convinced that we want this. As someone new to this topic, my emotional reaction to reading this list is terror.

Any of these techniques would surely be available to only a small fraction of the world’s population. And I feel like that would almost certainly result in a much worse world than today, for many of the same reasons as AGI. It would greatly increase the distance between the haves and the have-nots. (I get the same feeling reading this as I get from reading e/acc stuff: it feels like the author is only thinking about the good outcomes.)

Your answer would be that (1) AGI will be far more catastrophic, and (2) this is the only way to avoid an AGI catastrophe. Personally I’m not convinced. And even if I were, it would be really emotionally difficult to devote my resources to making the world much worse (even to save it from something even worse than that). So overall, I’d much rather bet on something else that will not *itself* make the world a much worse place.

Relatedly: Does your “value drift” column include the potential value drift of simply being much, much smarter than the rest of humanity (the have-nots)? Anecdotally, I think there’s somewhat of an inverse correlation between intelligence and empathy in humans. I’m not as worried about it as I am with AGI, but I’m much more worried than your column suggests. Imagine a super-intelligent Sam Altman.

And tangentially related: we actually have no idea whether we can make this superintelligent baby “sane”. What you presumably mean is that we can protect it from known genetic mental health problems; sure, but that’s not the whole picture. Superintelligence will probably affect a person’s personality and values in ways we can’t predict. It could cause depression, psychopathic behavior, who knows.