Douglas_Knight2


Eliezer tries to derive his morality from stated human values.

In theory, Eliezer's morality (at least CEV) is insensitive to errors along these lines, but when Eliezer claims "it all adds up to normality," he's making a claim that is sensitive to such an error.

Does anyone have a reputable source for Feynman's 137? Google makes it look very concentrated in this group, probably the result of a single confabulation.

Sykes's and Gleick's biographies both give 12x. Sykes quotes Feynman's sister remembering sneaking into the records as a child. This seems important to me: Feynman didn't just fabricate the 12x.

Eliezer's password is [correct answer deleted, congratulations Douglas --EY].

the dominant consensus in modern decision theory is that one should two-box...there's a common attitude that "Verbal arguments for one-boxing are easy to come by, what's hard is developing a good decision theory that one-boxes"

Those are contrary positions, right?
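For concreteness, here is a minimal expected-value sketch of the "easy" verbal (evidential) argument for one-boxing, assuming a predictor with accuracy p and the standard $1,000,000/$1,000 payoffs; the function and all numbers are illustrative, not from either quoted source:

```python
# Toy expected-value comparison for Newcomb's problem (illustrative numbers).

def expected_value(one_box: bool, p: float = 0.99) -> float:
    """Expected payoff given a predictor with accuracy p."""
    big, small = 1_000_000, 1_000
    if one_box:
        # Box B contains $1M iff the predictor foresaw one-boxing (prob. p).
        return p * big
    # Two-boxing: B holds $1M only if the predictor erred (prob. 1 - p),
    # plus the guaranteed $1,000 from the transparent box.
    return (1 - p) * big + small

print(expected_value(True))   # ~990000
print(expected_value(False))  # ~11000
```

On these assumptions one-boxing has the higher expected value whenever p exceeds roughly 0.5005; that calculation is the easy part, and the quoted consensus is about how hard it is to build a decision theory that endorses it.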


Robin Hanson:
Punishment is ordinary, but Newcomb's problem is simple! You can't have both.

The advantage of an ordinary situation like punishment is that game theorists can't deny the facts on the ground (governments exist), but they can claim it's because we're all irrational, which doesn't leave many directions to go in.

Nick Tarleton,
Yes, it is probably correct that one should devote substantial resources to low-probability events, but what are the odds that the universe is not only a simulation, but one whose containing world is much bigger? And if so, does our universe just not count, because it's so small? A bounded utility function probably reaches the opposite conclusion, that only this universe counts, and maybe we should keep our ambitions limited, out of fear of attracting attention.
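As a toy illustration of that last point, here is a minimal sketch of how a bounded utility function flips the conclusion; every number (the probability, the payoff, the bound) is invented for illustration:

```python
# Toy comparison of unbounded vs. bounded utility for a tiny-probability,
# huge-payoff hypothesis. All numbers are invented for illustration.

p_bigger_world = 1e-9      # assumed odds the containing world is vastly bigger
u_gamble = 1e15            # unbounded utility if the gamble pays off
u_here = 1.0               # utility of just optimizing this universe
u_max = 100.0              # cap imposed by a bounded utility function

# Unbounded: the huge payoff swamps the tiny probability.
ev_unbounded = p_bigger_world * u_gamble            # 1e6, dwarfs u_here

# Bounded: the payoff is capped, so the tiny probability dominates.
ev_bounded = p_bigger_world * min(u_gamble, u_max)  # 1e-7, negligible

print(ev_unbounded > u_here)  # True: chase the containing world
print(ev_bounded > u_here)    # False: only this universe counts
```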

Luis Enrique,
See above about "We Change Our Minds Less Often Than We Think"; my interpretation is that the people are trying to believe that they haven't made up their minds, but they are wrong. That is, they seem to be implementing the (first) advice you mention. Maybe one can come up with more practical advice, but these are very difficult problems to fix, even if you understand the errors. On the other hand, the main part of the post is about a successful intervention.

Science was weaker in these days

Could you elaborate on this? What do you mean by Science? (reasoning? knowledge?)

The thing whose weakness seems relevant to me is a cultural tradition of doubting religion. Also, there are prerequisites I have trouble articulating because they are so deeply buried: perhaps a changing notion of benevolence.

You probably won't go far wrong if you assume I agree with you on the points I don't respond to. I probably shouldn't have talked about them in the first place.

Overcoming heuristics:
If we know a bias is caused by a heuristic, then we should use that heuristic less. But articulating a meta-heuristic about when to use it is very different from implementing that meta-heuristic. Human minds aren't Eurisko, able to dial up the strength of a heuristic. Even if we implement a heuristic, as in Kahneman's planning anecdote, and get more accurate information, we may simply ignore it.
The basic problem is System 1 vs. System 2: when to analyze a problem and when to trust unconscious systems. Perhaps we have control over the conscious analysis (though it still has unconscious inputs). But even starting that analysis, turning on the conscious process, must be done from an unconscious start.

Tom Myers,
Systematic but unexplained: sure, most errors are probably due to heuristics, but I'm not sure that's a useful statement. A number of posts here have been so specific that they don't seem like useful starting points for searching for heuristics.

Cost:
Most people don't seem to have sufficiently clear goals to make sense of whether something benefits or costs them, let alone to balance the two.
People live normal lives by not thinking too much about things, so it shouldn't be surprising that they don't think in psych experiments, where it is often clear that analysis will help. But if one is interested in answering questions that don't come up in normal life (e.g., how much is medicine worth?), avoiding everyday heuristics is probably worth the cost. Heuristics may well be worth overcoming in everyday life as well, but I don't think any experiments I've heard about shed light on this.

Torture:
Your proposals are too detailed. I don't anticipate an opportunity to experiment enough to figure out how to structure torture, and if I do get an opportunity to experiment on government structure, torture is not going to be high on my list of variables. A government is an extremely expensive experimental apparatus. I can at least imagine how to experiment with corporal punishment, but I don't really have much of an idea of how one could compare the efficacy of different interrogation methods or the general investigative quality of, say, American and Japanese police.

I'm not inclined to find out what you mean by Saddam's people-shredders, but I imagine that one effect was to deter crime, especially crime that would get Saddam's attention. Torture, especially creative torture with vivid imagery, may well exploit salience(?) biases to be a more effective deterrent (aside from the rationally greater desire to avoid torture+death than to avoid death alone). The role of whim in one's fate may also have (irrationally) increased the deterrent effect. The vague beliefs people hold about prison rape may play a very similar role in the US system. We already have arbitrary torture in our criminal justice system.

Winston survives.

Tom Myers,
I think the convention on this blog, among the small set of people who have such a precise definition, is that not every heuristic is a bias: only heuristics whose errors are worth overcoming should be called biases. I don't like this usage. For one thing, it's really hard to know the cost of overcoming particular biases. It's easy to look at an isolated experiment and say: here's a procedure that would have been better; but that doesn't cover the cost of actually changing your behavior to look out for this kind of error.

Also, there are other people on this blog who use "bias" to refer to systematic but unexplained errors, where it's hard to call them heuristics. Without a mechanism, these errors are probably hard to overcome, although not always impossible.
