Warrigal comments on Open Thread: March 2010, part 3 - Less Wrong

Post author: RobinZ 19 March 2010 03:14AM




Comment author: [deleted] 27 March 2010 06:40:22AM 0 points

So, while in the shower, an idea for an FAI came into my head.

My intuition tells me that if we manage to entirely formalize correct reasoning, the result will have a sort of adversarial quality: you can "prove" statements, but these proofs can be overturned by stronger disproofs. So, I figured that if you simply told two (or more) AGIs to fight over one database of information, the most rational AGI would be able to set the database to contain the correct information. (Another intuition of mine tells me that FAI is a problem of rationality: once you have a rational AGI, you can just feed it CEV or whatever.)

Of course, for this to work, two things would have to happen: at least one of the AGIs would have to be intelligent enough to discover the rational conclusions, and no AGI could be so much smarter than the others that it could find tons of evidence in favor of its pet truths and have the database favor them even though they're false.
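The "shared database fought over by adversarial reasoners" idea above could be caricatured in code. This is only a toy sketch of my intuition, not a real design: all the names (`ClaimDatabase`, `submit`, `accepted`) are made up for illustration, and "strength of a (dis)proof" is crudely modeled as a signed weight, so a stronger disproof simply outweighs an earlier proof.

```python
# Toy sketch (hypothetical names): agents "fight over" one database of
# claims by submitting signed evidence weights. A claim stands only
# while its supporting weight exceeds its refuting weight, so a
# stronger disproof can overturn an earlier proof.

from collections import defaultdict

class ClaimDatabase:
    def __init__(self):
        # claim -> net evidence weight (positive supports, negative refutes)
        self.weight = defaultdict(float)

    def submit(self, claim, strength):
        """An agent submits evidence for (strength > 0) or against (< 0) a claim."""
        self.weight[claim] += strength

    def accepted(self, claim):
        """A claim is in the database only while support outweighs refutation."""
        return self.weight[claim] > 0

db = ClaimDatabase()
db.submit("sky is green", 2.0)   # a weaker agent "proves" its pet truth
print(db.accepted("sky is green"))
db.submit("sky is green", -5.0)  # a stronger agent overturns it with a disproof
print(db.accepted("sky is green"))
```

Note how the failure mode from the paragraph above shows up directly: if one agent can manufacture enough weight for a false claim, nothing in this scheme stops the database from accepting it.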

So, I don't think this will work very well. At least I came to it by musing about how nobody has an infinite amount of money and yet everyone values it anyway, thereby making our economic system perfect!

Comment author: [deleted] 27 March 2010 06:52:46AM 0 points

I seem to have man-with-a-hammer syndrome, and my hammer is economics. Luckily, I'm using economics as a tool for designing things, not for understanding them; there is no One True Design the way there's One True Truth.