WrongBot comments on Some Thoughts Are Too Dangerous For Brains to Think - Less Wrong

15 points | Post author: WrongBot | 13 July 2010 04:44AM


Comments (311)


Comment author: WrongBot 13 July 2010 09:09:49PM 3 points [-]

This post is seeing some pretty heavy downvoting, but the opinions I'm seeing in the comments so far seem to be more mixed; I suppose this isn't unusual.

I have a question, then, for people who downvoted this post: what specifically did you dislike about it? This is a data-gathering exercise that will hopefully allow me to identify flaws in my writing and/or thinking and then correct them. Was the argument being made just obviously wrong? Was it insufficiently justified? Did my examples suck? Were there rhetorical tactics that you particularly disliked? Was it structured badly? Are you incredibly annoyed by the formatting errors I can't figure out how to fix?

Those are broadly the sorts of answers I'm looking for. I am specifically not looking for justifications for downvotes; really, all I want is your help in becoming stronger. With luck, I will be able to waste less of your time in the future.

Thanks.

Comment author: jimrandomh 13 July 2010 10:53:29PM 9 points [-]

I think it would've been better received if some attention had been given to defense mechanisms; i.e., rather than phrasing it as some true things being unconditionally bad to know, phrase it as some true things being bad to know unless you have the appropriate prerequisites in place. For example, knowing about differences between races is bad unless you are very good at avoiding confirmation bias, and knowing how to detect errors in reasoning is bad unless you are very good at avoiding motivated cognition.

Comment author: Tyrrell_McAllister 13 July 2010 11:05:42PM *  6 points [-]

I have a question, then, for people who downvoted this post: what specifically did you dislike about it? This is a data-gathering exercise that will hopefully allow me to identify flaws in my writing and/or thinking and then correct them. Was the argument being made just obviously wrong? Was it insufficiently justified? Did my examples suck? Were there rhetorical tactics that you particularly disliked? Was it structured badly? Are you incredibly annoyed by the formatting errors I can't figure out how to fix?

I upvoted your post, because I think that you raise a possibility that we should consider. It should not be dismissed out of hand.

However, your examples do kind of suck :). As Sarah pointed out, none of us is likely to become a dictator, and dictators are probably not typical people. So the history of dictators is not great information about how we ought to tend to our epistemological garden. Your claims about how data on group differences in intelligence affect people would be strong evidence if they were backed up by more than anecdote and speculation. As it is, though, it is at least as likely that you are suffering from confirmation bias.

Comment author: WrongBot 14 July 2010 12:31:55AM 3 points [-]

Thank you. I should have held off on making the post for a few days and worked out better examples at the very least. I will do better.

Comment author: mattnewport 13 July 2010 09:40:43PM 4 points [-]

Was the argument being made just obviously wrong?

This, primarily. At least obviously wrong by my value system where believing true things is a core value. To the extent that this is also the value system of Less Wrong as a whole, the post seems contrary to the site's core values without acknowledging the conflict explicitly enough.

I didn't think the examples were very good either. I think the argument is wrong even for value systems that place a lower value on truth than mine and the examples aren't enough to persuade me otherwise.

I also found the (presumably) joke about hunting down and killing anyone who disagrees with you jarring and in rather poor taste. I'm generally in favour of tasteless and offensive jokes but this one just didn't work for me.

Comment author: Vladimir_Nesov 13 July 2010 09:43:10PM *  4 points [-]

At least obviously wrong by my value system where believing true things is a core value.

Beware identity. It seems that a hero shouldn't kill, ever, but sometimes it's the right thing to do. Unless it's your sole value, there will be situations where it should give way.

Comment author: mattnewport 13 July 2010 09:58:52PM 0 points [-]

Unless it's your sole value, there will be situations where it should give way.

This seems like it should generally be true but in practice I haven't encountered any plausible examples where I prefer ignorance. This includes a number of hypotheticals where many people claim they would prefer ignorance which leads me to believe the value I place on truth is outside the norm.

Truth / knowledge is a little paradoxical in this sense as well. I believe that killing is generally wrong but there is no paradox in killing in certain situations because it appears to be the right choice. The feedback effect of truth on your decision making / value defining apparatus makes it unlike other core values that might sometimes be abandoned.

Comment author: Vladimir_Nesov 13 July 2010 10:01:07PM 0 points [-]

This seems like it should generally be true but in practice I haven't encountered any plausible examples where I prefer ignorance. This includes a number of hypotheticals where many people claim they would prefer ignorance which leads me to believe the value I place on truth is outside the norm.

I agree with this, my objection is to the particular argument you used, not necessarily the implied conclusion.

Comment author: Tyrrell_McAllister 13 July 2010 10:59:45PM *  4 points [-]

This, primarily. At least obviously wrong by my value system where believing true things is a core value.

I really don't think that the OP can be called "obviously wrong". For example, your brain is imperfect, so it may be that believing some true things makes it less likely that you will believe other more important true things. Then, even if your core value is to believe true things, you are going to want to be careful about letting the dangerous beliefs into your head.

And the circularity that WrongBot and Vladimir Nesov have pointed out rears its head here, too. Suppose that the possibility that I pose above is true. Then, if you knew this, it might undermine the extent to which you hold believing true things to be a core value. That is precisely the kind of unwanted utility-function change that WrongBot is warning us about.

It's probably too pessimistic to say that you could never believe the dangerous true things. But it seems reasonably possible that some true beliefs are too dangerous unless you are very careful about the way in which you come to believe them. It may be unwise to just charge in and absorb true facts willy-nilly.

Here's another way to come at WrongBot's argument. It's obvious that we sometimes should keep secrets. Sometimes more harm than good would result if someone else knew something that we know. It's not obvious, but it is at least plausible, that the "harm" could be that the other person's utility function would change in a way that we don't want. At least, this is certainly not obviously wrong. The final step in the argument is then to acknowledge that the "other person" might be the part of yourself over which you do not have perfect control — which is, after all, most of you.

Comment author: mattnewport 14 July 2010 12:02:00AM *  1 point [-]

It's obvious that we sometimes should keep secrets. Sometimes more harm than good would result if someone else knew something that we know.

I believe some other people's reports that there are things they would prefer not to know, and I would be inclined to honor their preference if I knew such a secret, but I can't think of any examples of such secrets for myself. In almost all cases I can think of, I would want to be informed of any true information that was being withheld from me. The only possible exceptions are 'pleasant surprises' that are being kept secret on a strictly time-limited basis to enhance enjoyment (surprise gifts, parties, etc.), but I think these are not really what we're talking about.

I can certainly think of many examples of secrets that people keep secret out of self-interest and attempt to justify by claiming they are doing it in the best interests of the ignorant party. In most such cases the 'more harm than good' would accrue to the party requesting the keeping of the secret rather than the party from whom the secret is being withheld. Sometimes keeping such secrets might be the 'right thing' morally (the Nazi at the door looking for fugitives) but this is not because you are acting in the interests of the party from whom you are keeping information.

Comment author: Tyrrell_McAllister 14 July 2010 12:50:06AM *  5 points [-]

In almost all cases I can think of I would want to be informed of any true information that was being withheld from me.

Maybe this is an example:

I was once working hard to meet a deadline. Then I saw in my e-mail that I'd just received the referee reports for a journal article that I'd submitted. Even when a referee report recommends acceptance, it will almost always request changes, however minor. I knew that if I looked at the reports, I would feel a very strong pull to work on whatever was in them, which would probably take at least several hours. Even if I resisted this pull, resistance alone would be a major tax on my attention. My brain, of its own accord, would grab mental CPU cycles from my current project to compose responses to whatever the referees said. I decided that I couldn't spare this distraction before I met my deadline. So I left the reports unread until I'd completed my project.

In short, I kept myself ignorant because I expected that knowledge of the reports' contents would induce me to pursue the wrong actions.

Comment author: mattnewport 14 July 2010 01:05:06AM *  4 points [-]

This is an example of a pretty different kind of thing to what WrongBot is talking about. It's a hack for rationing attention or a technique for avoiding distraction and keeping focus for a period of time. You read the email once your current time-critical priority was dealt with, you didn't permanently delete it. Such tactics can be useful and I use them myself. It is quite different from permanently avoiding some information for fear of permanent corruption of your brain.

I'm a little surprised that you would have thought that this example fell into the same class of things as WrongBot or I were talking about. Perhaps we need to define what kinds of 'dangerous thought' we are talking about a little more clearly. I'm rather bemused that people are conflating this kind of avoidance of viscerally unpleasant experiences with 'dangerous thoughts' as well. It seems others are interpreting the scope of the article massively more broadly than I am.

Comment author: ABranco 19 July 2010 04:55:32AM *  3 points [-]

Or putting it differently:

  • One thing is to operationally avoid gaining certain data at a certain moment in order to function better overall, because we need to keep our attention focused.

  • Another thing is to strategically avoid gaining certain kinds of information that could possibly lead us astray.

I'd guess most people here agree with the kind of "self-deception" that the former entails. And it seems that the post argues for this kind of "self-deception" in the latter case as well, although there isn't as much consensus: some people seem to welcome any kind of truth whatsoever, at any time.

However... It seems to me now that, frankly, both cases are incredibly similar! So I may be conflating them, too.

The major difference seems to be the scale adopted: checking your email is an information hazard at that moment, and you want to postpone it for a couple of hours. Knowing about certain truths is an information hazard at this moment, and you want to postpone it for a couple of... decades. If ever. When your brain is strong enough to handle it smoothly.

It all boils down to knowing we are not robots, that our brains are a kludge, and that certain stimuli (however real or true) are undesired.

Comment author: Tyrrell_McAllister 14 July 2010 01:32:53AM *  3 points [-]

This is an example of a pretty different kind of thing to what WrongBot is talking about.

I think that you can just twiddle some parameters with my example to see something more like WrongBot's examples. My example had a known deadline, after which I knew it would be safe to read the reports. But suppose that I didn't know exactly when it would be safe to read the reports. My current project is the sort of thing where I don't currently know when I will have done enough. I don't yet know what the conditions for success are, so I don't yet know what I need to do to create safe conditions to read the reports. It is possible that it will never be safe to read the reports, that I will never be able to afford the distraction of suppressing my brain's desire to compose responses.

My understanding is that WrongBot views group-intelligence differences analogously. The argument is that it's not safe to learn such truths now, and we don't yet know what we need to do to create safe conditions for learning these truths. Maybe we will never find such conditions. At any rate, we should be very careful about exposing our brains to these truths before we've figured out the safe conditions. That is my reading of the argument.

Comment author: WrongBot 14 July 2010 03:35:46AM 4 points [-]

More or less. I'm generally sufficiently optimistic about the future that I don't think that there are kinds of true knowledge that will continue to be dangerous indefinitely; I'm just trying to highlight things I think might not be safe right now, when we're all stuck doing serious thinking with opaquely-designed sacks of meat.

Comment author: HughRistik 14 July 2010 05:06:11AM 1 point [-]

Like Matt, I don't think your example does the same thing as WrongBot's, even with your twiddling.

WrongBot doesn't want the "dangerous thoughts" to influence him to revise his beliefs and values. That wasn't the case for you: you didn't want to avoid revising your beliefs about your paper; you just didn't want to deal with the cognitive distraction of it during the short term. If you avoided reading your reports because you wanted to avoid believing that your article needed any improvement, then I think your situation would be more analogous to WrongBot's.

The argument is that it's not safe to learn such truths now, and we don't yet know what we need to do to create safe conditions for learning these truths. Maybe we will never find such conditions. At any rate, we should be very careful about exposing our brains to these truths before we've figured out the safe conditions.

But there's another difference here: when you decided to not expose yourself to that knowledge, you knew at the time when the safe conditions would occur, and that those conditions would occur very soon. That's not the case for WrongBot, who has sworn off certain kinds of knowledge indefinitely.

Putting oneself at risk of error for a short and capped time frame is much different from putting oneself at risk of error indefinitely.

Comment author: Tyrrell_McAllister 14 July 2010 06:09:31AM *  2 points [-]

WrongBot doesn't want the "dangerous thoughts" to influence him to revise his beliefs and values. That wasn't the case for you: you didn't want to avoid revising your beliefs about your paper; you just didn't want to deal with the cognitive distraction of it during the short term.

The beliefs that I didn't want to revise were my beliefs about the contents of the reports. Before I read them, my beliefs about their contents were general and vague. Were I to read the reports, I would have specific knowledge about what they said. My worry was that this would revise my values: after gaining that specific knowledge, my brain would excessively value replying to the reports over working on my current project. Despite my intention to focus solely on my current project, my brain would allocate significant resources to composing responses to what I'd read in the reports.

But there's another difference here: when you decided to not expose yourself to that knowledge, you knew at the time when the safe conditions would occur, and that those conditions would occur very soon.

But in the "twiddled" version, I don't know when the safe conditions will occur . . .

That's not the case for WrongBot, who has sworn off certain kinds of knowledge indefinitely.

To be fair, WrongBot thinks that we will be able to learn this knowledge eventually. We just shouldn't take it as obvious that we know what the safe conditions are yet.

Comment author: HughRistik 14 July 2010 06:52:45AM 1 point [-]

I still say that there is a difference between what you and WrongBot are doing, even if you're successfully shooting down my attempts to articulate it. I might need a few more tries to be able to correctly articulate that intuition.

My worry was that this would revise my values: after gaining that specific knowledge, my brain would excessively value replying to the reports over working on my current project.

These are not the same types of values. You were worried about your values about priorities changing, while under time pressure. WrongBot is worried about his moral values changing about how he treats certain groups of people.

But in the "twiddled" version, I don't know when the safe conditions will occur . . .

True, but there wasn't the same magnitude or type of uncertainty, right? You knew that you would probably be able to read your reports after your deadline...? All predictions about the future are uncertain, but not all types of uncertainty are created equal.

I would be interested to hear your opinion of a little thought experiment. What if I were a creationist, and you recommended me a book debunking creationism? I say that I won't read it because it might change my values, at least not until the conditions are safe for me. If I say that I can't read it this week because I have a deadline, but maybe next week, you'll probably give me a pass. But what if I put off reading it indefinitely? Is that rational?

It seems that since we recognize that rationalists are human, we can and should give them a pass on scrutinizing certain thoughts or investigating certain ideas when they are under time pressure or emotional pressure in the short term, like in your example. But how long can one dodge inquiry in a certain area before one's rationalist creds become suspect?

Comment author: Tyrrell_McAllister 14 July 2010 12:35:31AM 2 points [-]

I can certainly think of many examples of secrets that people keep secret out of self-interest and attempt to justify by claiming they are doing it in the best interests of the ignorant party. In most such cases the 'more harm than good' would accrue to the party requesting the keeping of the secret rather than the party from whom the secret is being withheld.

But this is the way to think of WrongBot's claim. The conscious you, the part over which you have deliberate control, is but a small part of the goal-seeking activity that goes on in your brain. Some of that goal-seeking activity is guided by interests that aren't really yours. Sometimes you ought to ignore the interests of these other agents in your brain. There is some possibility that you should sometimes do this by keeping information from reaching those other agents, even though this means keeping the information from yourself as well.

Comment author: mattnewport 14 July 2010 02:34:08AM *  10 points [-]

I've just identified something else that was nagging at me about this post: the irony of the author of this post making an argument that closely parallels an argument some thoughtful conservatives make against condoning alternative lifestyles like polyamory.

The essence of that argument is that humans are not sufficiently intelligent, rational or self-controlled to deal with the freedom to pursue their own happiness without the structure and limits imposed by evolved cultural and social norms that keep their baser instincts in check. That cultural norms exist for a reason (a kind of cultural selection for societies with norms that give them a competitive advantage) and that it is dangerous to mess with traditional norms when we don't fully understand why they exist.

I don't really subscribe to the conservative argument (though I have more sympathy for it than the argument made in this post) but it takes a similar form to this argument when it suggests that some things are too dangerous for mere humans to meddle with.

Comment author: WrongBot 14 July 2010 03:43:46AM 0 points [-]

While there are some superficial parallels, I don't think the two cases are actually very similar.

Humans don't have a polyamory-bias; if the scientific consensus on neurotransmitters like oxytocin and vasopressin is accurate, it's quite the opposite. Deliberate action in defiance of bias is not dangerous. There's no back door for evolution to exploit.

Comment author: MichaelVassar 15 July 2010 05:07:17PM 3 points [-]

This just seems unreasoned to me.

Comment author: WrongBot 15 July 2010 05:16:53PM 0 points [-]

Erm, how so?

It occurs to me that I should clarify that when I said

Deliberate action in defiance of bias is not dangerous.

I meant that it is not dangerous thinking of the sort I have attempted to describe.

Comment author: MichaelVassar 15 July 2010 06:19:32PM 6 points [-]

Maybe I just don't see the distinction or the argument that you are making, but I still don't. Do you really think that thinking about polyamory isn't likely to impact values somewhat relative to unquestioned monogamy?

Comment author: WrongBot 15 July 2010 06:45:29PM 0 points [-]

Oh, it's quite likely to impact values. But it won't impact your values without some accompanying level of conscious awareness. It's unconscious value shifts that the post is concerned about.

Comment author: [deleted] 22 February 2011 02:18:27AM 1 point [-]

How can you be so sure? As in, I disagree.

How people value different kinds of sexual behaviours seems to be very strongly influenced by the subconscious.