
ialdabaoth comments on Rationality, Transhumanism, and Mental Health - Less Wrong Discussion

Post author: ialdabaoth 14 October 2012 09:11AM




Comment author: ialdabaoth 15 October 2012 01:10:46AM 2 points

This sounds pretty similar to a lot of my problems. Using this community's terminology, I can have all the beliefs I want, but if I have sufficiently powerful overriding aliefs, I'm screwed, since the alief-guided motivational system sits closer to the motor-control subprocessors than the belief-guided one does (cf. "amygdala hijack").

Worse, the alief-driven submodule operates on its own utility table, which is often nearly antiparallel to my belief-driven submodule's. So I have two submodules, each with strong impetus vectors toward or away from various attractors within the solution domain, and... well, thrashing happens.

Comment author: MixedNuts 15 October 2012 09:46:54AM 0 points

Yeah, it's supposed to do that. It's kind of a problem when you have to unplug the TV to get work done, or change departments to avoid letting the hot coworker seduce you. It does have advantages when you're not very good at lofty decisions, though; you can see the problem with an organism that could just decide eating is wrong and starve to death.

People normally deal with that by setting modest goals, acquiring the right habits to achieve them consistently, and then working their way up. Rewarding both systems ("After 50 minutes of work, eat a chocolate") also helps.