
SilasBarta comments on Why the beliefs/values dichotomy? - Less Wrong

Post author: Wei_Dai 20 October 2009 04:35PM


Comment author: SilasBarta 20 October 2009 06:02:34PM 2 points

I dispute your premise: what makes you so sure people do decompose their thoughts into beliefs and values, and find these to be natural, distinct categories? Consider the politics-as-mind-killer phenomenon. It can be expressed as: "People put your words into a broader context of whether they threaten their interests, and argue for or against your statements on that basis."

For example, consider the difficulty you will have communicating your position if you believe both (a) that global warming is unlikely to cause any significant problems in the business-as-usual scenario, and (b) that high taxes on CO2 emissions should be levied (e.g., because you believe such a tax is a good idea as an insurance policy and can be done in a way that avoids most of the economic damage).

(Yes, I had to use a present example to make the reactions easier to imagine.)

The "ought" is so tightly coupled to the "is" that in any case where the "ought" actually matters, the "is" comes along for the ride.

Note: this is related to the problem I had with the exposition of could/would/should agents: if you say humans are CSAs, what's an example of an intelligent agent that isn't?

Comment author: thomblake 20 October 2009 08:43:49PM 1 point

I'm confused about this. Consider these statements:

A. "I believe that my shirt is red."
B. "I value cheese."

Are you claiming that:

  1. People don't actually make statements like A
  2. People don't actually make statements like B
  3. A is expressing the same sort of fact about the world as B
  4. Statements like A and B aren't completely separate; that is, they can have something to do with one another.

If you strictly mean 1 or 2, I can construct a counterexample. 3 is indeed counterintuitive to me. 4 seems uncontroversial (the putative is/ought problem aside).

Comment author: SilasBarta 20 October 2009 10:01:45PM 1 point

If I had to say, it would be a strong version of 4: in conceptspace, people naturally make groupings that put is- and ought-statements together. But looking back at the post, I definitely have quite a bit to clarify.

When I refer to what humans do, I'm trying to look at the general case. Obviously, if you direct someone's attention to the issue of is/ought, then they can break down thoughts into values and beliefs without much training. However, in the absence of such a deliberate step, I do not think people normally make a distinction.

I'm reminded of the explanation in pjeby's earlier piece: people instinctively attach XML tags of "good" or "bad" to different things, blurring the distinction between "X is good" and "Y is a reason to deem X good". That is why we have to worry about the halo effect, where you disbelieve everything negative about something you value, even when those negatives would be woefully insufficient to justify not valuing it.

From the computational perspective, this can be viewed as a shortcut around methodically analyzing all the positives and negatives of any course of action and getting stuck thinking instead of acting. But if this is how the mind really works, it's not reducible to a CSA without severely stretching the meaning.

Comment author: DanArmak 20 October 2009 08:35:29PM 1 point

Seconded. Sometimes I don't even feel I have fully separate beliefs and values. For instance, I'm often willing to change my beliefs to achieve my values (e.g., by believing something I have no evidence for in order to become friends with other people who believe it - and yes, ungrounded beliefs can be adopted voluntarily, to an extent).

Comment author: SforSingularity 26 October 2009 07:53:07AM 0 points

ungrounded beliefs can be adopted voluntarily to an extent.

I cannot do this, and I don't understand anyone who can. If you consciously say "OK, it would be really nice to believe X, now I am going to try really hard to start believing it despite the evidence against it", then you already disbelieve X.

Comment author: DanArmak 26 October 2009 08:39:19PM 1 point

I already disbelieve X, true, but I can change that. Of course it doesn't happen in a moment :-)

Yes, you can't create that feeling of rational knowledge about X from nothing. But if you can retreat from rationality - to where most people live their lives - and if you repeat X often enough, and you have no strongly emotional reason not to believe X, and your family and peers and role models all profess X, and X behaves like a good in-group distinguishing mark - then I think you have a good chance of coming to believe X. The kind of belief associated with faith and sports team fandom.

It's a little like the recent thread where someone, I forget who, described an (edit: hypothetical) religious guy who, when drunk, confessed that he didn't really believe in god and was only acting religious for the social benefits. Then people argued that no "really" religious person would honestly say that, and other people argued that even if he said that, what does it mean if he honestly denies it whenever he's sober?

In the end I subscribe to the "PR consciousness" theory that says consciousness functions to create and project a self-image that we want others to believe in. We consciously believe many things about ourselves that are completely at odds with how we actually behave and the goals we actually seek. So it would be surprising if we couldn't invoke these mechanisms in at least some circumstances.

Comment author: Douglas_Knight 27 October 2009 12:59:31AM 2 points

someone, I forget who, described a religious guy who when drunk confessed that he didn't really believe in god and was only acting religious for the social benefits.

generalizing from fictional evidence

Comment author: DanArmak 27 October 2009 09:12:44AM 1 point

When I wrote that I was aware that it was a fictional account deliberately made up to illustrate a point. I didn't mention that, though, so I created fictional evidence. Thanks for flagging this, and I should be more careful!

Comment author: RobinZ 27 October 2009 01:07:19AM 1 point

Worse: fictional evidence flagged as nonfictional -- like Alicorn's fictional MIT classmates that time.

Comment author: Alicorn 27 October 2009 01:10:35AM 3 points

My what now? I think that was someone else. I don't think I've been associated with MIT till now.

MIT not only didn't accept me when I applied, they didn't even reject me. I never heard back from them, yea or nay, at all.

Comment author: Yvain 27 October 2009 01:15:40AM 2 points

That was me.

Of course, irony being what it is, people will now flag the Alicorn-MIT reference as nonfictional, and be referring to Alicorn's MIT example for the rest of LW history :)

Comment author: RobinZ 27 October 2009 01:31:34AM 2 points

Attempting to analyze my own stupidity, I suspect my confusion came from (1) both Alicorn and Yvain being high-karma contributors and (2) Alicorn's handle coming more readily to mind, both because (a) I interacted more with her and (b) the pronunciation of "Alicorn" is more obvious than that of "Yvain".

In other words, I have no evidence that this was anything other than an ordinary mistake.

Comment author: Alicorn 27 October 2009 01:35:48AM 1 point

I've been imagining "Yvain" to be pronounced "ee-vane". I'd be interested in hearing a correction straight from the ee-vane's mouth if this is not right, though ;) I've heard people mispronounce "Alicorn" on multiple occasions.

Comment author: RobinZ 27 October 2009 01:18:33AM 0 points

*checks*

Yvain's fictional MIT classmates.

I swear that wasn't on purpose.

Comment author: SilasBarta 27 October 2009 02:36:53AM 0 points

What's fictional about that?

Ready to pony up money for a bet that I can't produce a warm body meeting that description?

Comment author: RobinZ 27 October 2009 02:42:47AM 0 points

I prefer not to gamble, but just to satisfy my own curiosity: what would the controls be on such a bet? Presumably you would have to prove to Knight's satisfaction that your unbelieving belief-signaler was legitimately thus.

Comment author: SilasBarta 27 October 2009 03:09:23AM 0 points

I think my evidence is strong enough that I can trust Douglas_Knight's own intellectual integrity.

Comment author: Douglas_Knight 27 October 2009 05:25:31AM 3 points

I think my evidence is strong enough I can trust Douglas_Knight's own intellectual integrity.

Huh. My last couple of interactions with you, you called me a liar.

Comment author: SilasBarta 27 October 2009 03:26:08PM 0 points

Okay, I found what I think you're referring to. Probably not my greatest moment here, but is that really something you want sympathy for? Here's the short version of what happened.

You: If you think your comment was so important, don't leave it buried deep in the discussion, where nobody can see it.

Me: But I also linked to it from a more visible place. Did you not know about that?

You: [Ignoring previous mischaracterization] Well, that doesn't solve the problem of context. I clicked on it and couldn't understand it, and it seemed boring.

Me: Wait, you claim to be interested in a solution, I post a link saying I have one, and it's too much of a bother to read the previous comments for context? That doesn't make sense. Your previous comment implies you didn't know about the higher link. Don't dig yourself deeper by covering it up.

Comment author: SilasBarta 27 October 2009 01:57:37PM -1 points

Well, what possessed you to lie to me? ;-)

j/k, j/k, you're good, you're good.

A link would be nice though.

And even taking into account any previous mistrust I might have had of you, I think my evidence is still strong enough that I can trust you to consider it conclusive.