
I certainly think you're right that the conscious mind and conscious decisions can, to a large extent, rewrite much of the brain's programming.

I am surprised that you think most rationalists don't think that. (That sentence is a mouthful, but you know what I mean.) A lot of rationalist writing is devoted to working out ways to do exactly that; many people have written about how just reading the Sequences helped them essentially reprogram their own brains to be more rational in a wide variety of situations.

Are there a lot of people in the rationalist community who think that conscious thought and decision making can't do major things? I know there are philosophers who think that consciousness may be irrelevant to behavior, but that philosophy seems very much at odds with LessWrong-style rationality and the way people on LessWrong tend to think and talk about what consciousness is.

Are there a lot of people in the rationalist community who think that conscious thought and decision making can't do major things?

It's not that they think it can't do major things at all. They just don't expect to be able to do them overnight, and yes, "major changes to subconscious programming overnight" is one of the things I've seen to be possible if you hit the right buttons. And if you can do major things overnight, there are some even more major things you find yourself able to do at all, which you couldn't before.

This might be a violation of superrationality. If you hack yourself, in essence a part of you is taking over the rest. But if you do that, why shouldn't part of an AI hack the rest of it and take over the universe?

Some practical examples of what you mean would be useful.

I'm planning to write some practical guides based on what I've learned; here's one: http://bewelltuned.com/tune_your_motor_cortex (it's a very powerful skill that I suspect is close to impossible to discover through "normal" methods, though it seems possible to execute once you already know it).

I entirely disagree that "rationalists are more than ready." They have exactly the same problems that a fanatical AI would have, and should be kept sandboxed for similar reasons.

(That said, AIs are unlikely to actually be fanatical.)

Meh, kinda agree; I've added "(at least some of them!)" to the post.

I didn't mean "ready" in the sense of value alignment, but rather that by accessing more power they would grow instead of destroying themselves.