
[Link] Time to Exit the Sandbox

Post author: SquirrelInHell 24 October 2017 08:04AM | 3 points

Comments (7)

Comment author: Yosarian2 24 October 2017 10:35:20PM 0 points

I certainly think you're right that the conscious mind and conscious decisions can, to a large extent, rewrite a lot of the brain's programming.

I am surprised that you think that most rationalists don't think that. (That sentence is a mouthful, but you know what I mean.) A lot of rationalist writing is devoted to working on ways to do exactly that; a lot of people have written about how just reading the Sequences helped them basically reprogram their own brains to be more rational in a wide variety of situations.

Are there a lot of people in the rationalist community who think that conscious thought and decision making can't do major things? I know there are philosophers who think that maybe consciousness is irrelevant to behavior, but that philosophy seems very much at odds with LessWrong-style rationality and the way people on LessWrong tend to think and talk about what consciousness is.

Comment author: SquirrelInHell 25 October 2017 09:57:40AM 0 points

Are there a lot of people in the rationalist community who think that conscious thought and decision making can't do major things?

It's not that they think it cannot do major things at all. It's that they don't expect to be able to do them overnight, and yes, "major changes to subconscious programming overnight" is one of the things I've seen to be possible if you hit the right buttons. And of course, if you can do major things overnight, there are some even more major things you find yourself able to do at all, which you couldn't before.

Comment author: entirelyuseless 25 October 2017 01:43:40PM 0 points

This might be a violation of superrationality. If you hack yourself, in essence a part of you is taking over the rest. But if you do that, why shouldn't part of an AI hack the rest of it and take over the universe?

Comment author: Stuart_Armstrong 24 October 2017 03:18:58PM 0 points

Some practical examples of what you mean could be useful.

Comment author: SquirrelInHell 24 October 2017 04:37:52PM 1 point

I'm planning to write some practical guides based on what I have learned; here's one: http://bewelltuned.com/tune_your_motor_cortex (it's a very powerful skill that I suspect is pretty close to impossible to discover using "normal" methods, though quite possible to execute once you already know it).

Comment author: entirelyuseless 24 October 2017 12:50:33PM 0 points

I entirely disagree that "rationalists are more than ready." They have exactly the same problems that a fanatical AI would have, and should be kept sandboxed for similar reasons.

(That said, AIs are unlikely to actually be fanatical.)

Comment author: SquirrelInHell 24 October 2017 02:11:49PM 0 points

Meh, kinda agree; I've added "(at least some of them!)" to the post.

I didn't mean "ready" in the sense of value alignment, but rather that by accessing more power they would grow instead of destroying themselves.