Roko comments on Spock's Dirty Little Secret - Less Wrong

46 Post author: pjeby 25 March 2009 07:07PM

Comment deleted 25 March 2009 11:39:46PM *
Comment author: rhollerith 26 March 2009 01:37:54AM *  2 points

Rational agents/things are not synonymous with good things. A paperclip maximizer is the canonical example of an agent acting rationally. As far as most people are concerned, including me, the paperclip maximizer is not acting in a good way.

Although these days Roko is probably uninterested in whether I agree with him, I agree with that passage.

According to my definition, "epistemically rational" means "effective at achieving one's goals". If an agent's goals are incompatible with mine, I'm going to hope that the agent remains epistemically irrational.

(Garcia used "intelligent" and "ethical" for my "epistemically rational" and "has goals compatible with my goals".)

Since 1971, Garcia has been stressing that increasing a person's epistemic rationality increases that person's capacity for both good and evil, so you should try to determine whether a person will do good or evil before you increase their epistemic rationality. (Of course your definition of "good" might differ from mine.)

The smartest person I ever met before I met Eliezer (a Ph.D. in math from a top program and a successful entrepreneur) was probably unethical or evil. I say "probably" only to highlight that one cannot be highly confident in one's judgement of someone's ethics even after observing them closely. But most people here would probably agree with me that this person was unethical or evil.

Comment author: jimrandomh 26 March 2009 01:45:07AM *  0 points

"good" or "bad", or even "rational" and "irrational". (Which of course are just disguised versions of "good" and "bad", if you're a rationalist.) I saw this, and felt a strong urge to walk to work where my laptop is and correct it.

Rational agents/things are not synonymous with good things. A paperclip maximizer is the canonical example of an agent acting rationally. As far as most people are concerned, including me, the paperclip maximizer is not acting in a good way.

Rationality can be bad when it's given to an agent with undesirable goals, but your own goals are always good to you, so where your own thoughts are concerned, being 'rational' means they're good and being 'irrational' means they're bad. I think the article's statement was meant to apply only to thoughts evaluated from the inside.
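One way to make the rational-but-not-good point concrete is a toy decision rule: an agent that picks whichever action maximizes its own utility function can score arbitrarily badly under ours. The sketch below is purely illustrative; the actions, numbers, and function names are invented for this note and are not from the post or the comments.

    # Toy illustration (invented for this note): "rational" here means the agent
    # picks the action that maximizes its OWN utility; "good" means the outcome
    # scores well under OUR utility. All actions and numbers are made up.

    actions = {
        # action:            (paperclips_made, human_value_preserved)
        "run_factory":       (1_000,   0.9),
        "strip_mine_city":   (50_000, -100.0),
        "do_nothing":        (0,       1.0),
    }

    def paperclip_utility(outcome):
        paperclips, _ = outcome
        return paperclips            # the maximizer counts only paperclips

    def our_utility(outcome):
        _, human_value = outcome
        return human_value           # we count what happens to everything else

    # A rational paperclip maximizer: argmax over its own utility.
    chosen = max(actions, key=lambda a: paperclip_utility(actions[a]))

    print("Maximizer chooses:", chosen)                        # strip_mine_city
    print("Its utility:", paperclip_utility(actions[chosen]))  # 50000
    print("Our utility:", our_utility(actions[chosen]))        # -100.0

The agent is perfectly effective at achieving its goal; whether that is "good" depends entirely on whose utility function is doing the scoring.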

Comment author: pjeby 26 March 2009 12:00:42AM 0 points

This statement is either false or meaningless, depending on how you interpret "emotion".

Let's review the statement in question:

Without emotion, you have no way to narrow down the field of "all possible hypotheses" to "potentially useful hypotheses" or "likely to be true" hypotheses...

By "narrow down", I actually meant "narrow down prior to conscious evaluation" -- not consciously evaluate for truth or falsehood. You can consciously evaluate whatever you like, and you can certainly check a statement for factual accuracy without the use of emotion. But that's not what the sentence is talking about... it's referring to the sorting or scoring function of emotion in selecting what memories to retrieve, or hypotheses to consider, before you actually evaluate them.

Comment deleted 26 March 2009 12:36:00AM
Comment author: pjeby 26 March 2009 12:58:03AM *  3 points

The point about emotions -- which I see I failed to make sufficiently explicit in this post, judging from the frequent questions about it -- is that their original purpose was to prepare the body to take some physical, real-world action... and thus they were built into our memory/prediction systems long before we reused those systems to "think" or "reason" with.

Brains weren't originally built for thinking -- they were built for emoting: motivating co-ordinated physical action.

Comment deleted 26 March 2009 12:37:36AM *
Comment author: pjeby 26 March 2009 12:53:31AM 0 points

I think that emotions often do the opposite. They narrow down the field of "all possible hypotheses" to "likely to make me feel good about myself if I believe it" hypotheses and "likely to support my preexisting biases about the world" hypotheses, which is precisely the problem that this site is tackling... if emotions subconsciously selected "likely to be true" hypotheses, we would not be in the somewhat problematic situation we are in.

Those are subsets of what you believe to be likely true.

Comment deleted 26 March 2009 01:51:04PM
Comment author: pjeby 26 March 2009 03:15:33PM 0 points

epistemic rationality is about believing things that are actually true, rather than believing things that you believe to be true.

And that's why it's a good thing to know what you're up against, with respect to the hardware upon which you're trying to do that.

Comment deleted 26 March 2009 03:27:43PM
Comment author: pjeby 26 March 2009 04:42:34PM 2 points

That which proposes hypotheses is not exactly the same piece of brainware as that which makes you laugh and cry and love

No... the former merely sorts those hypotheses based on information from the latter. Or more precisely, the raw data from which those hypotheses are generated has been stored in such a manner that retrieval is prioritized on emotion, and such that any such emotions are played back as an integral part of retrieval.

One's physio-emotional state at the time of retrieval also has an effect on retrieval priorities... if you're angry, for example, memories tagged "angry" are prioritized.
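Read as a retrieval rule, the two claims in this comment -- memories stored with emotional tags, and the current emotional state boosting same-tagged memories at retrieval time -- might be sketched as below. The tags, strengths, and boost factor are all invented for illustration; this is a toy model, not the actual mechanism pjeby is describing.

    # Toy sketch of emotion-tagged retrieval (illustrative only): each memory
    # carries an emotional tag and a base strength; memories whose tag matches
    # the current emotional state get a priority boost at retrieval time.

    memories = [
        # (memory, tag, base_strength) -- all values made up for the example
        ("the argument last week",     "angry", 0.5),
        ("a pleasant walk yesterday",  "calm",  0.6),
        ("being cut off in traffic",   "angry", 0.4),
        ("finishing a project",        "proud", 0.7),
    ]

    MATCH_BOOST = 2.0  # arbitrary boost when a tag matches the current state

    def retrieval_priority(memory, current_state):
        _, tag, strength = memory
        return strength * (MATCH_BOOST if tag == current_state else 1.0)

    def retrieve(memories, current_state, k=2):
        ranked = sorted(memories,
                        key=lambda m: retrieval_priority(m, current_state),
                        reverse=True)
        return [m[0] for m in ranked[:k]]

    print(retrieve(memories, "angry"))  # "angry"-tagged memories come up first
    print(retrieve(memories, "calm"))   # a different shortlist in a calmer state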