loqi comments on How Not to be Stupid: Adorable Maybes - Less Wrong

-2 Post author: Psy-Kosh 29 April 2009 07:15PM




Comment author: cousin_it 30 April 2009 07:51:58AM 2 points

I agree with you that the math is right. Given its assumptions, it acts as promised. But the assumptions just aren't a good model of reality. It's like naive game theory: you can go with the mathematically justified option of Always Defect, or you can go with common sense. Reality doesn't contain preference rankings over all possible situations; shoehorning reality into preference rankings might hurt you. Hasn't this point clicked yet? I'll try again.
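(The "Always Defect" point refers to the standard one-shot Prisoner's Dilemma. A minimal sketch, with illustrative payoff numbers in the usual T > R > P > S ordering, not taken from the comment:)

```python
# One-shot Prisoner's Dilemma: (my_move, their_move) -> my payoff.
# The specific numbers are illustrative assumptions.
PAYOFF = {
    ("C", "C"): 3,  # mutual cooperation (reward R)
    ("C", "D"): 0,  # I cooperate, they defect (sucker S)
    ("D", "C"): 5,  # I defect, they cooperate (temptation T)
    ("D", "D"): 1,  # mutual defection (punishment P)
}

def best_response(their_move):
    """Return the move maximizing my payoff against a fixed opponent move."""
    return max("CD", key=lambda my: PAYOFF[(my, their_move)])

# Defection strictly dominates: it is the best response to either move,
# so the "mathematically justified" play is Always Defect -- even though
# mutual cooperation would leave both players better off.
assert best_response("C") == "D"
assert best_response("D") == "D"
```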

The sum total of those other things and chess together is a very different goal system than "chess and nothing else".

Human beings aren't goal systems. We DON'T SUM, any more than a car "sums" the value of its speedometer with the value of the fuel gauge. If we actually summed, you'd get the outcome Eliezer once advocated: every one of us "picking one charity and donating as much to it as he can". Your superintelligent chess player with the "correct" utility function won't ever play chess while there are other util-rich tasks anywhere in the world, like hunger in Africa.
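(Why summing forces the "one charity" outcome: maximizing a summed utility with constant marginal utilities per dollar is a linear program whose optimum sits at a corner, putting the whole budget on the single best option. A sketch with hypothetical numbers:)

```python
# Hypothetical charities with constant marginal utility per dollar donated.
MARGINAL_UTILITY = {"charity_a": 2.0, "charity_b": 5.0, "charity_c": 1.5}

def optimal_allocation(marginal_utility, budget):
    """Maximize total summed utility under a budget constraint.

    With linear (constant-marginal) utilities, every dollar should go to
    whichever charity has the highest marginal utility -- the optimum is
    a corner solution, never a diversified split.
    """
    best = max(marginal_utility, key=marginal_utility.get)
    return {c: (budget if c == best else 0.0) for c in marginal_utility}

alloc = optimal_allocation(MARGINAL_UTILITY, 100.0)
# The entire budget lands on charity_b: the "pick one charity and donate
# as much to it as you can" outcome the comment describes.
```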

That doesn't mean decision theory is inherently flawed. It means, well, fully specifying what we actually want is a highly nontrivial problem.

We shouldn't need to fully specify what we actually want, if we're building a specialized machine to e.g. cure world hunger or design better integrated circuits. It would be better to build such machines based on a theory that typically results in localized screw-ups... rather than a theory that destroys the world by default, unless you tell it everything about you.

Comment author: loqi 01 May 2009 02:21:29AM 1 point

We shouldn't need to fully specify what we actually want, if we're building a specialized machine to e.g. cure world hunger or design better integrated circuits.

What if we're building a specialized machine to prevent a superintelligence from annihilating us?