timtyler comments on Anthropomorphic AI and Sandboxed Virtual Universes - Less Wrong

-3 Post author: jacob_cannell 03 September 2010 07:02PM




Comment author: timtyler 05 September 2010 03:17:07AM 1 point

And how do the gatekeepers morally justify the astronomical number of simulated lives that become ruthlessly terminated [...]

We run genetic algorithms in which we, too, squish creatures without giving the matter much thought. Perhaps it is like that - at least in the Optimisationverse scenario.

Comment author: Baughn 09 September 2010 04:45:16PM 0 points

If my simulations had even the complexity of a bacterium, I'd give them a whole lot more thought.

That doesn't mean these simulators would, but I don't think your logic works.

Comment author: timtyler 09 September 2010 07:55:25PM 0 points

Generalising from what you would do to what all possible intelligent simulator constructors might do seems like a rather dubious step. There are plenty of ways they might justify this.

Comment author: Baughn 10 September 2010 10:32:57AM 0 points

Right. For some reason I thought you were using universal quantification, which of course you aren't. Never mind - the "perhaps" fixes it.