Mass_Driver comments on Anthropomorphic AI and Sandboxed Virtual Universes - Less Wrong

-3 Post author: jacob_cannell 03 September 2010 07:02PM


Comment author: jacob_cannell 03 September 2010 11:33:24PM 9 points

I'm not too concerned about the karma - more about the lack of interesting replies and the general unjustified holier-than-thou attitude. This idea is different from "that alien message", and I didn't find a discussion of it on LW (not that it doesn't exist - I just didn't find it).

  1. This is not my first post.
  2. I posted this after I brought up the idea in a comment which at least one person found interesting.
  3. I have spent significant time reading LW and associated writings before I ever created an account.
  4. I've certainly read the AI-in-a-box posts, and the posts theorizing about the nature of smarter-than-human intelligence. I also previously read "that alien message", and since this post is similar I should have linked to it.
  5. I have a knowledge background that leads to somewhat different conclusions about A. the nature of intelligence itself, B. what 'smarter' even means, and so on.
  6. Different backgrounds, different assumptions - so I listed my background and starting assumptions, as they somewhat differ from the LW norm.

Back to 3:

Remember, the whole plot device of "that alien message" revolved around a large and obvious grand reveal by the humans. If information can only flow into the sim world once (during construction), and then ever after can only flow out of the sim world, that plot device doesn't work.
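The information-flow constraint described above can be sketched in code (purely illustratively - `OneWaySandbox` and its methods are hypothetical names invented here, not anything from the post):

```python
class OneWaySandbox:
    """Sketch of a sim world where information flows in exactly once,
    at construction time, and only outward ever after."""

    def __init__(self, initial_state):
        # The sole inward channel: the sim's initial conditions.
        self._state = dict(initial_state)

    def step(self):
        # The sim evolves purely from its own internal state;
        # no external input is consulted after construction.
        self._state["tick"] = self._state.get("tick", 0) + 1

    def observe(self):
        # Outward flow only: observers get a copy, so mutating the
        # returned value cannot leak information back into the sim.
        return dict(self._state)

    def inject(self, data):
        # Any post-construction inward channel is forbidden by design,
        # so no "grand reveal" message can be delivered to the inhabitants.
        raise PermissionError("sealed: no information may flow in after construction")


box = OneWaySandbox({"tick": 0})
box.step()
snapshot = box.observe()
snapshot["tick"] = 999           # tampering with the observation...
assert box.observe()["tick"] == 1  # ...does not reach the sim
```

Under this constraint the "that alien message" plot device has no channel to operate through: the builders can watch, but never signal.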

Trying to keep an AI boxed up when the AI knows that you exist is a fundamentally different problem from a box where the AI doesn't know you exist, doesn't know it is in a box, and provably may not even have enough information to determine whether it is in a box.

For example, I think the simulation argument holds water (we are probably in a sim), but I don't believe there is enough information in our universe for us to discover much of anything about the nature of a hypothetical outside universe.

This of course doesn't prove that my weak or strong Mind Prison conjectures are correct, but it at least reduces the problem to "can we build a universe sim as good as this one?"

Comment author: Mass_Driver 04 September 2010 07:03:26AM 0 points

I wish I could vote up this comment more than once.

Comment author: jacob_cannell 04 September 2010 06:00:21PM 1 point

Thanks. :)