TheOtherDave comments on Wrong Questions - Less Wrong

34 points · Post author: Eliezer_Yudkowsky · 08 March 2008 05:11PM

Comment author: TheOtherDave 03 November 2010 09:42:21PM 12 points

"more knowledgeable people must be less free."

Larry Niven plays with this idea in Protector... the idea being that if you're really smart, the right solution presents itself so rapidly that you simply don't have any choices.

I suspect this is nonsense in any practical sense. Sure, any increase in intelligence will force you to close off some options which you now realize are bogus, but it will likely also make you aware of options you weren't previously able to recognize.

In my own experience, increased understanding leads to a net gain of options. Perhaps the curve is hyperbolic, but if so, I live on the ascending slope.

Comment author: pnrjulius 30 April 2012 02:59:47PM 4 points

Do you feel any less free because it never occurs to you to bash your head against a wall, or slit your throat with a steak knife?

I certainly don't; it would be a terrible inconvenience to have to go through all the really stupid options of things I could do at any given moment before arriving at the reasonable ones.

How much more so, then, for a superintelligence; it does not have to wonder about the stupid questions we humans often ask, but instead can focus on the really interesting decisions that remain to be made. (If you imagine that the space of possible decisions is finite, perhaps it could run out eventually... but my sense is that no intelligence small enough to fit in our universe can run out of possible decisions in our universe.)

Comment author: TheOtherDave 30 April 2012 03:13:11PM 0 points

It does occasionally occur to me to kill myself, and in my really bad periods I do experience myself as prevented from choosing an eminently desirable path by my own earlier precommitments. But that's neither here nor there.

Leaving the particulars aside... if there exists some question Q such that intelligence I1 finds Q difficult to answer and I2 finds Q easy to answer because I2 is a superintelligence with respect to I1, then I2 may well at some point consider Q, answer Q, and then move on to the next thing. Or, of course, it might never do so, depending on the relevance of Q to anything that occurs to I2... as you say, the space of possible decisions is enormous.

I fail to see what follows from this. Can you unpack your thinking a bit here?