Tyrrell_McAllister comments on Some Thoughts Are Too Dangerous For Brains to Think - Less Wrong

Post author: WrongBot 13 July 2010 04:44AM




Comment author: Tyrrell_McAllister 13 July 2010 10:59:45PM 4 points

This, primarily. At least, it's obviously wrong by my value system, where believing true things is a core value.

I really don't think that the OP can be called "obviously wrong". For example, your brain is imperfect, so it may be that believing some true things makes it less likely that you will believe other more important true things. Then, even if your core value is to believe true things, you are going to want to be careful about letting the dangerous beliefs into your head.

And the circularity that WrongBot and Vladimir Nesov have pointed out rears its head here, too. Suppose that the possibility that I pose above is true. Then, if you knew this, it might undermine the extent to which you hold believing true things to be a core value. That is precisely the kind of unwanted utility-function change that WrongBot is warning us about.

It's probably too pessimistic to say that you could never believe the dangerous true things. But it seems reasonably possible that some true beliefs are too dangerous unless you are very careful about the way in which you come to believe them. It may be unwise to just charge in and absorb true facts willy-nilly.

Here's another way to come at WrongBot's argument. It's obvious that we sometimes should keep secrets. Sometimes more harm than good would result if someone else knew something that we know. It's not obvious, but it is at least plausible, that the "harm" could be that the other person's utility function would change in a way that we don't want. At least, this is certainly not obviously wrong. The final step in the argument is then to acknowledge that the "other person" might be the part of yourself over which you do not have perfect control — which is, after all, most of you.

Comment author: mattnewport 14 July 2010 12:02:00AM 1 point

It's obvious that we sometimes should keep secrets. Sometimes more harm than good would result if someone else knew something that we know.

I believe some other people's reports that there are things they would prefer not to know, and I would be inclined to honor that preference if I knew such a secret, but I can't think of any examples of such secrets for myself. In almost all cases I can think of, I would want to be informed of any true information that was being withheld from me. The only possible exceptions are 'pleasant surprises' that are kept secret on a strictly time-limited basis to enhance enjoyment (surprise gifts, parties, etc.), but I think these are not really what we're talking about.

I can certainly think of many examples of secrets that people keep secret out of self-interest and attempt to justify by claiming they are doing it in the best interests of the ignorant party. In most such cases the 'more harm than good' would accrue to the party requesting the keeping of the secret rather than the party from whom the secret is being withheld. Sometimes keeping such secrets might be the 'right thing' morally (the Nazi at the door looking for fugitives) but this is not because you are acting in the interests of the party from whom you are keeping information.

Comment author: Tyrrell_McAllister 14 July 2010 12:50:06AM 5 points

In almost all cases I can think of I would want to be informed of any true information that was being withheld from me.

Maybe this is an example:

I was once working hard to meet a deadline. Then I saw in my e-mail that I'd just received the referee reports for a journal article that I'd submitted. Even when a referee report recommends acceptance, it will almost always request changes, however minor. I knew that if I looked at the reports, I would feel a very strong pull to work on whatever was in them, which would probably take at least several hours. Even if I resisted this pull, resistance alone would be a major tax on my attention. My brain, of its own accord, would grab mental CPU cycles from my current project to compose responses to whatever the referees said. I decided that I couldn't spare this distraction before I met my deadline. So I left the reports unread until I'd completed my project.

In short, I kept myself ignorant because I expected that knowledge of the reports' contents would induce me to pursue the wrong actions.

Comment author: mattnewport 14 July 2010 01:05:06AM 4 points

This is an example of a pretty different kind of thing from what WrongBot is talking about. It's a hack for rationing attention, a technique for avoiding distraction and keeping focus for a period of time. You read the email once your time-critical priority was dealt with; you didn't permanently delete it. Such tactics can be useful, and I use them myself. It is quite different from permanently avoiding some information for fear of permanent corruption of your brain.

I'm a little surprised that you would have thought that this example fell into the same class of things that WrongBot or I were talking about. Perhaps we need to define what kinds of 'dangerous thought' we are talking about a little more clearly. I'm rather bemused that people are conflating this kind of avoidance of viscerally unpleasant experiences with 'dangerous thoughts' as well. It seems others are interpreting the scope of the article massively more broadly than I am.

Comment author: ABranco 19 July 2010 04:55:32AM 3 points

Or putting it differently:

  • One thing is to avoid, operationally, gaining certain data at a certain moment in order to function better overall, because we need to keep our attention focused.

  • Another thing is to avoid, strategically, gaining certain kinds of information that could possibly lead us astray.

I'd guess most people here accept the kind of "self-deception" that the former entails. And it seems that the post is arguing for this kind of "self-deception" in the latter case as well, although there isn't as much consensus — some people seem to welcome any kind of truth whatsoever, at any time.

However... It seems to me now that, frankly, both cases are incredibly similar! So I may be conflating them, too.

The major difference seems to be the time scale adopted: checking your email is an information hazard at that moment, and you want to postpone it for a couple of hours. Knowing certain truths is an information hazard at this moment, and you want to postpone it for a couple of... decades. If ever. Until your brain is strong enough to handle it smoothly.

It all boils down to knowing we are not robots, that our brains are a kludge, and that certain stimuli (however real or true) are undesired.

Comment author: Tyrrell_McAllister 14 July 2010 01:32:53AM 3 points

This is an example of a pretty different kind of thing from what WrongBot is talking about.

I think that you can just twiddle some parameters with my example to see something more like WrongBot's examples. My example had a known deadline, after which I knew it would be safe to read the reports. But suppose that I didn't know exactly when it would be safe to read the reports. My current project is the sort of thing where I don't currently know when I will have done enough. I don't yet know what the conditions for success are, so I don't yet know what I need to do to create safe conditions to read the reports. It is possible that it will never be safe to read the reports, that I will never be able to afford the distraction of suppressing my brain's desire to compose responses.

My understanding is that WrongBot views group-intelligence differences analogously. The argument is that it's not safe to learn such truths now, and we don't yet know what we need to do to create safe conditions for learning these truths. Maybe we will never find such conditions. At any rate, we should be very careful about exposing our brains to these truths before we've figured out the safe conditions. That is my reading of the argument.

Comment author: WrongBot 14 July 2010 03:35:46AM 4 points

More or less. I'm generally sufficiently optimistic about the future that I don't think that there are kinds of true knowledge that will continue to be dangerous indefinitely; I'm just trying to highlight things I think might not be safe right now, when we're all stuck doing serious thinking with opaquely-designed sacks of meat.

Comment author: HughRistik 14 July 2010 05:06:11AM 1 point

Like Matt, I don't think your example does the same thing as WrongBot's, even with your twiddling.

WrongBot doesn't want the "dangerous thoughts" to influence him to revise his beliefs and values. That wasn't the case for you: you didn't want to avoid revising your beliefs about your paper; you just didn't want to deal with the cognitive distraction of it during the short term. If you avoided reading your reports because you wanted to avoid believing that your article needed any improvement, then I think your situation would be more analogous to WrongBot's.

The argument is that it's not safe to learn such truths now, and we don't yet know what we need to do to create safe conditions for learning these truths. Maybe we will never find such conditions. At any rate, we should be very careful about exposing our brains to these truths before we've figured out the safe conditions.

But there's another difference here: when you decided to not expose yourself to that knowledge, you knew at the time when the safe conditions would occur, and that those conditions would occur very soon. That's not the case for WrongBot, who has sworn off certain kinds of knowledge indefinitely.

Putting oneself at risk of error for a short and capped time frame is much different from putting oneself at risk of error indefinitely.

Comment author: Tyrrell_McAllister 14 July 2010 06:09:31AM 2 points

WrongBot doesn't want the "dangerous thoughts" to influence him to revise his beliefs and values. That wasn't the case for you: you didn't want to avoid revising your beliefs about your paper; you just didn't want to deal with the cognitive distraction of it during the short term.

The beliefs that I didn't want to revise were my beliefs about the contents of the reports. Before I read them, my beliefs about their contents were general and vague. Were I to read the reports, I would have specific knowledge about what they said. My worry was that this would revise my values: after gaining that specific knowledge, my brain would excessively value replying to the reports over working on my current project. Despite my intention to focus solely on my current project, my brain would allocate significant resources to composing responses to what I'd read in the reports.

But there's another difference here: when you decided to not expose yourself to that knowledge, you knew at the time when the safe conditions would occur, and that those conditions would occur very soon.

But in the "twiddled" version, I don't know when the safe conditions will occur . . .

That's not the case for WrongBot, who has sworn off certain kinds of knowledge indefinitely.

To be fair, WrongBot thinks that we will be able to learn this knowledge eventually. We just shouldn't take it as obvious that we know what the safe conditions are yet.

Comment author: HughRistik 14 July 2010 06:52:45AM 1 point

I still say that there is a difference between what you and WrongBot are doing, even if you're successfully shooting down my attempts to articulate it. I might need a few more tries to be able to correctly articulate that intuition.

My worry was that this would revise my values: after gaining that specific knowledge, my brain would excessively value replying to the reports over working on my current project.

These are not the same types of values. You were worried about your values about priorities changing, while under time pressure. WrongBot is worried about his moral values changing about how he treats certain groups of people.

But in the "twiddled" version, I don't know when the safe conditions will occur . . .

True, but there wasn't the same magnitude or type of uncertainty, right? You knew that you would probably be able to read your reports after your deadline...? All predictions about the future are uncertain, but not all types of uncertainty are created equal.

I would be interested to hear your opinion of a little thought experiment. What if I were a creationist, and you recommended me a book debunking creationism? I say that I won't read it because it might change my values, at least not until the conditions are safe for me. If I say that I can't read it this week because I have a deadline, but maybe next week, you'll probably give me a pass. But what if I put off reading it indefinitely? Is that rational?

It seems that since we recognize that rationalists are human, we can and should give them a pass on scrutinizing certain thoughts or investigating certain ideas when they are under time pressure or emotional pressure in the short term, like in your example. But how long can one dodge inquiry in a certain area before one's rationalist creds become suspect?

Comment author: Tyrrell_McAllister 14 July 2010 05:54:15PM 0 points

My worry was that this would revise my values: after gaining that specific knowledge, my brain would excessively value replying to the reports over working on my current project.

These are not the same types of values. You were worried about your values about priorities changing, while under time pressure. WrongBot is worried about his moral values changing about how he treats certain groups of people.

I'm having trouble seeing this distinction. What if I had a moral obligation to do as well as possible on my current project, because people were depending on me, say? My concern would be that, if I read the reports, I would feel a pull to act immorally. I might even rationalize away the immorality under the influence of this pull. In effect, I would act according to different moral values. Would that make the situation more analogous in your view, or would something still be missing?

I'm getting the sense that the problem with my example is that it has nothing to do with political correctness. Is it key for you that WrongBot wants to keep information out of his/her brain because of political correctness specifically?

But in the "twiddled" version, I don't know when the safe conditions will occur . . .

True, but there wasn't the same magnitude or type of uncertainty, right? You knew that you would probably be able to read your reports after your deadline...? All predictions about the future are uncertain, but not all types of uncertainty are created equal.

I called it a "twiddled" version because I was thinking of the uncertainty as a continuous parameter that I could set to a wide spectrum of values. In the actual situation, the dial was pegged at "almost complete certainty". But I can imagine situations where I'm very uncertain. It looks like part of your problem with this is that such a quantitative change amounts to a qualitative change in your view. Is that right?

I would be interested to hear your opinion of a little thought experiment. What if I were a creationist, and you recommended me a book debunking creationism? I say that I won't read it because it might change my values, at least not until the conditions are safe for me. If I say that I can't read it this week because I have a deadline, but maybe next week, you'll probably give me a pass. But what if I put off reading it indefinitely? Is that rational?

I take it that your concern would be that losing creationism would change your moral values in a dangerous way. Whether you are being rational then depends on what "put off reading it indefinitely" means. I would say that you are being rational to avoid the book for now only if you are making a good-faith effort to determine rationally the conditions under which it would be safe to read the book, with the intention of reading the book once you've found sufficiently safe conditions.

Comment author: mattnewport 14 July 2010 06:54:20PM 1 point

Part of the problem I'm having with your example is my perception of the magnitude of the gap between it and WrongBot's examples. While they share certain similarities, it's roughly like comparing losing your entire life savings to the time you dropped a dime down the back of the sofa.

Sometimes a sufficiently large difference of magnitude can be treated for most purposes as a difference in kind.

Comment author: Tyrrell_McAllister 14 July 2010 12:35:31AM 2 points

I can certainly think of many examples of secrets that people keep secret out of self-interest and attempt to justify by claiming they are doing it in the best interests of the ignorant party. In most such cases the 'more harm than good' would accrue to the party requesting the keeping of the secret rather than the party from whom the secret is being withheld.

But this is the way to think of WrongBot's claim. The conscious you, the part over which you have deliberate control, is but a small part of the goal-seeking activity that goes on in your brain. Some of that goal-seeking activity is guided by interests that aren't really yours. Sometimes you ought to ignore the interests of these other agents in your brain. There is some possibility that you should sometimes do this by keeping information from reaching those other agents, even though this means keeping the information from yourself as well.