Roko comments on Spock's Dirty Little Secret - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Comments (56)
Although these days Roko is probably uninterested in whether I agree with him, I agree with that passage.
According to my definition, "epistemically rational" means "effective at achieving one's goals". If the agent's goals are incompatible with my goals, I'm going to hope that the agent remains epistemically irrational.
(Garcia used "intelligent" and "ethical" for my "epistemically rational" and "has goals compatible with my goals".)
Since 1971, Garcia's been stressing that increasing a person's epistemic rationality increases that person's capacity for good and capacity for evil, so you should try to determine whether the person will do good or do evil before you increase the epistemic rationality of the person. (Of course your definition of "good" might differ from mine.)
The smartest person I ever met before I met Eliezer (Ph.D. in math from a top program, successful entrepreneur) was probably unethical or evil. I say "probably" only to highlight that one cannot be highly confident in one's judgement of someone's ethics or evilness, even after observing them closely. But most people here would probably agree with me that this person was unethical or evil.
Rationality can be bad when it's given to an agent with undesirable goals, but your own goals are always good to you, so where your own thoughts are concerned, being 'rational' means they're good and being 'irrational' means they're bad. I think the article's statement was meant to apply only to thoughts evaluated from the inside.
Let's review the statement in question:
By "narrow down", I actually meant "narrow down prior to conscious evaluation" -- not consciously evaluate for truth or falsehood. You can consciously evaluate whatever you like, and you can certainly check a statement for factual accuracy without the use of emotion. But that's not what the sentence is talking about... it's referring to the sorting or scoring function of emotion in selecting which memories to retrieve, or which hypotheses to consider, before you actually evaluate them.
The point of emotions -- which, judging from the frequent questions about it, I failed to make sufficiently explicit in this post -- is that their original purpose was to prepare the body to take some physical, real-world action... and thus they were built into our memory/prediction systems long before we reused those systems to "think" or "reason" with.
Brains weren't originally built for thinking -- they were built for emoting: motivating co-ordinated physical action.
Those are subsets of what you believe to be likely true.
And that's why it's a good thing to know what you're up against, with respect to the hardware upon which you're trying to do that.
No... the former merely sorts those hypotheses based on information from the latter. Or more precisely, the raw data from which those hypotheses are generated has been stored in such a manner that retrieval is prioritized by emotion, and any such emotions are played back as an integral part of retrieval.
One's physio-emotional state at the time of retrieval also has an effect on retrieval priorities... if you're angry, for example, memories tagged "angry" are prioritized.
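The retrieval mechanism described above -- memories tagged with emotions, and the current emotional state boosting the priority of matching memories -- can be sketched as a toy model. This is purely my own illustration of the idea, not anything from the post; the `Memory` class, tag sets, and scoring function are all invented for the example.

```python
from dataclasses import dataclass, field

@dataclass
class Memory:
    content: str
    emotion_tags: set = field(default_factory=set)

def retrieve(memories, current_emotions, k=2):
    """Return the k memories whose emotion tags best match the
    retriever's current emotional state (higher overlap = higher
    priority). Emotionally neutral queries fall back to storage order."""
    def score(m):
        return len(m.emotion_tags & current_emotions)
    return sorted(memories, key=score, reverse=True)[:k]

memories = [
    Memory("argument with a colleague", {"angry"}),
    Memory("quiet walk in the park", {"calm"}),
    Memory("missed deadline", {"angry", "anxious"}),
]

# When the retriever is angry, anger-tagged memories are prioritized,
# mirroring the "if you're angry, memories tagged 'angry' are
# prioritized" example in the comment.
top = retrieve(memories, {"angry"})
```

In this sketch the emotion tag acts as a sorting key applied *before* any conscious evaluation of the retrieved content, which is the distinction the comment is drawing: emotion shapes what comes up for consideration, not how it is then judged.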