tup99

> I (on average) expect to be treated about as well by our new AGI overlords as I am treated by the current batch of rulers.
>
> ...
>
> By doom I mean the universe gets populated by AI with no moral worth (e.g. paperclippers).

Well, at least we've unearthed the reasons that your p(doom) differs from mine!

Most people do not expect #1 (unless we solve alignment), and have a broader definition of #2; I certainly do.

tup99

Agreed. Let's not lose sight of the fact that a p(doom) of 2-20% still makes this the most important problem in the world, in my view.

tup99

I feel like it would be beneficial to add another sentence or two to the “goal” section, because I’m not at all convinced that we want this. Speaking as someone new to this topic: my emotional reaction to reading this list is terror.

Any of these techniques would surely be available to only a small fraction of the world’s population. And I feel like that would almost certainly result in a much worse world than today, for many of the same reasons as AGI: it would greatly increase the distance between the haves and the have-nots. (I get the same feeling reading this as I get from reading e/acc material: it feels like the author is only thinking about the good outcomes.)

Your answer would be that (1) AGI will be far more catastrophic, and (2) this is the only way to avoid an AGI catastrophe. Personally, I’m not convinced. And even if I were, it would be really emotionally difficult to devote my resources to making the world much worse, even to save it from something worse still. So overall, I’d much rather bet on an approach that will not *itself* make the world a much worse place.

Relatedly: Does your “value drift” column include the potential value drift of simply being much, much smarter than the rest of humanity (the have-nots)? Anecdotally, I think there’s somewhat of an inverse correlation between intelligence and empathy in humans. I’m not as worried about this as I am about AGI, but I’m much more worried than your column suggests. Imagine a super-intelligent Sam Altman.

And tangentially related: we actually have no idea whether we can make this superintelligent baby “sane”. What you presumably mean is that we can protect it from known genetic mental-health problems, and sure, we can, but that’s not the whole picture. Superintelligence will probably affect a person’s personality and values in ways we can’t predict. It could cause depression, psychopathic behavior, who knows.