jacob_cannell comments on Steelmaning AI risk critiques - Less Wrong Discussion
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Irrelevant. The design space of all programs is infinite - do you somehow think that the set of programs humans create is a random sample from the set of all programs? The size of the design space has absolutely nothing to do with any realistic probability distribution over that space.
Of course we can - because UFAI is defined as superintelligence that doesn't care about humans!
For a certain narrow sense of "care", yes -- but it's a sense narrow enough that it doesn't exclude a motivation to sim humans, or give us any good grounds for probabilistic reasoning about whether a Friendly intelligence is more likely to simulate us. So narrow, in fact, that it's not actually a very strong assumption, if by strength we mean something like bits of specification.
Most UFAI will have convergent instrumental reasons to sim at least some humans, simply as a component of simulating the universe in general for better prediction/understanding.
FAI has that same small motivation plus the more direct end goal of creating enormous numbers of sims to satisfy humans' highly convergent desire for an afterlife. The creation of an immortal afterlife is the single most important defining characteristic of FAI. Humans have spent a huge amount of time thinking and debating about what kinds of gods should/could exist, and afterlife/immortality is the number one concern - and transhumanists are certainly no exception.