Will_Pearson comments on Trying to Try - Less Wrong

Post author: Eliezer_Yudkowsky 01 October 2008 08:58AM

Comments (53)

Comment author: Will_Pearson 02 October 2008 02:49:14PM 0 points

If you can answer 'yes' to every "is x possible?" question about the problem, like

Is intelligence possible? Yes. (I am a mind.)

Can it be instantiated in a machine? Yes. (Minds are machines.)

Is looking at your own mind's code, understanding it, and improving it possible? Yes. (I can understand code, but, alas, my brain is not available for me to hack. A mind made of code doesn't have this limitation.)

you can say "What's the use of trying? It's but a matter of doing it. I will simply do it. I will begin now. I will stop when I'm done." When you know that success is not forbidden by the laws of physics, trying ends and doing begins.

Right now I am doing and at one point in time I will say: "It worked." The only thing that is uncertain is when.

There are questions I can't answer about the problem.

Does human-level intelligence require the system to experimentally change its own source code at a local level? Neurons have no smarts built into them; we share the same type of neurons with babies. What makes us smart is how they are connected, and those connections change on a daily basis, if not at shorter time scales. Is it possible to alter this kind of computer system from the outside, to make it "better", while it is changing itself? If you freeze a copy of your software brain, you will keep changing during the time you spend investigating your own smarts, and any changes you then apply back to yourself may be incompatible, or no longer optimal, given the changes your brain has made to itself in the meantime.
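The staleness problem above can be sketched as a toy version check, in the spirit of optimistic concurrency control in software. This is purely an illustrative analogy, not anything from the original comment; all class and field names here are invented:

```python
# Toy analogy: a "mind" modeled as state that rewrites itself, while an
# outside observer studies a frozen snapshot and tries to patch the original.
import copy


class SelfModifyingSystem:
    def __init__(self):
        self.state = {"connections": 100}  # stand-in for neural wiring
        self.version = 0

    def self_modify(self):
        # The system keeps rewiring itself, like connections changing daily.
        self.state["connections"] += 1
        self.version += 1

    def snapshot(self):
        # Freeze a copy to study; record which version it was taken from.
        return copy.deepcopy(self.state), self.version

    def apply_patch(self, patch, based_on_version):
        # A patch derived from a stale snapshot may conflict with changes
        # the system has made to itself in the meantime, so reject it.
        if based_on_version != self.version:
            return False  # stale: the system has since changed itself
        self.state.update(patch)
        self.version += 1
        return True


mind = SelfModifyingSystem()
frozen, v = mind.snapshot()        # freeze a copy to investigate
mind.self_modify()                 # meanwhile, the live mind keeps changing
patch = {"connections": frozen["connections"] * 2}  # "improvement" computed from the frozen copy
applied = mind.apply_patch(patch, based_on_version=v)
print(applied)  # False: the patch is based on an out-of-date view
```

Here the patch is simply rejected; the harder case the comment points at is when no such version check exists and the stale patch is silently applied over self-made changes.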

In short, I think it is plausible that there are computer systems whose software I cannot understand and improve at a high, rational level, and that my own mind might be one of them.