
Comment author: Zuckaschnegge 27 April 2012 07:31:36AM *  0 points [-]

What would the actual difference be? You have a subjective view of your emotions (and of everything else, for that matter), so believing you are happy would be the same as being happy, as long as you are not aware that you are only believing in your happiness.

Comment author: alex_zag_al 05 November 2017 11:13:59PM 0 points [-]

I think that someone who merely believed they were happy, and then experienced real happiness, would not want to go back.

Comment author: alex_zag_al 24 February 2017 04:32:40PM 0 points [-]

There's an important category of choices: the ones where any good choice is "acting as if" something is true.

That is, there are two possible worlds. And there's one choice best if you knew you were in world 1, and another choice best if you knew you were in world 2. And, in addition, under any probabilistic mixture of the two worlds, one of those two choices is still optimal.

The hotel example falls into this category. So, one of the important reasons to recognize this category is to avoid a half-speed response to uncertainty.
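One way to see why the half-speed response is dominated is to write the payoffs down. A minimal sketch of a hotel-style choice, where all the utility numbers are made up for illustration:

```python
# World 1: the hotel is to the left; World 2: it is to the right.
# Hypothetical utilities: committing to the correct direction pays 10,
# committing to the wrong one pays 0, and a hedged "half-speed"
# middle course pays 4 in either world.

def expected_utility(action, p_world1, payoffs):
    """Expected utility of an action, given the probability of world 1."""
    u1, u2 = payoffs[action]
    return p_world1 * u1 + (1 - p_world1) * u2

payoffs = {
    "go_left":    (10, 0),  # best if world 1
    "go_right":   (0, 10),  # best if world 2
    "half_speed": (4, 4),   # never best under any mixture
}

# At every probability, one of the two "acting as if" choices wins;
# the hedged response is never optimal.
for p in (0.2, 0.5, 0.8):
    best = max(payoffs, key=lambda a: expected_utility(a, p, payoffs))
    print(p, best)
```

The point the numbers make: even at p = 0.5, where you are maximally uncertain, one of the committed choices (expected utility 5) beats the hedge (expected utility 4).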

Many choices don't fall into this category. You can tell because, in many decision problems, gathering more information is the best choice, and gathering information is never acting as if you knew one of the possibilities for certain.

Arguably in your example, information-seeking actually was the best solution: pull over and take out a map or use a GPS.

It seems like another important category of choices is those where the best option is trying the world 1 choice for a specified amount of time and then trying the world 2 choice. Perhaps these are the choices where the best source of information is observing whether something works? Reminds me of two-armed bandit problems, where acting-as-if and investigating manifest in the same kind of choice (pulling a lever).
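A rough sketch of the "try the world 1 choice for a specified amount of time, then commit" idea, in the two-armed bandit setting. The arms, reward rates, and pull counts below are made up for illustration; this is the simple explore-then-commit strategy, not a tuned bandit algorithm:

```python
import random

def explore_then_commit(arms, explore_pulls, total_pulls, rng):
    """Pull each arm a fixed number of times, then commit to the arm
    with the better observed total for all remaining pulls."""
    rewards = []
    totals = [0.0] * len(arms)
    for i, arm in enumerate(arms):
        for _ in range(explore_pulls):
            r = arm(rng)
            totals[i] += r
            rewards.append(r)
    best = max(range(len(arms)), key=lambda i: totals[i])
    for _ in range(total_pulls - explore_pulls * len(arms)):
        rewards.append(arms[best](rng))
    return sum(rewards)

rng = random.Random(0)
# Two hypothetical levers: Bernoulli arms paying out 30% and 60% of the time.
arms = [lambda r: 1.0 if r.random() < 0.3 else 0.0,
        lambda r: 1.0 if r.random() < 0.6 else 0.0]
total = explore_then_commit(arms, explore_pulls=20, total_pulls=200, rng=rng)
print(total)
```

Here the investigation and the acting-as-if really are the same kind of act (pulling a lever): the exploration phase is information-seeking, and the commit phase is acting as if the observed better arm is the truly better one.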

Comment author: James_Miller 24 February 2017 02:39:58PM 0 points [-]

Professors notice when you consistently arrive late to class. I have started closing the class door at the exact time my class starts to signal to students that lateness is something that does bother me.

Comment author: alex_zag_al 24 February 2017 02:52:14PM *  0 points [-]

Yeah. I mean, I'm not saying you should arrive late to class.

The way to work what you're saying into the framework is:

  • The cost of consistently arriving late is high

  • The cost (in minutes spent waiting for the class to start) of avoiding consistent lateness is less high

  • Therefore, you should pay this cost in minutes spent waiting

The point is to quantify the price, not to say you shouldn't pay it.
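A toy calculation of the trade described above, with entirely made-up numbers:

```python
# Hypothetical numbers: suppose aiming to arrive exactly on time means
# arriving late on 30% of days, while building in a 10-minute buffer
# means never being late.
days = 100
buffer_minutes = 10            # minutes spent waiting per day, if early
p_late_without_buffer = 0.3

minutes_waiting = days * buffer_minutes          # price paid
expected_late_arrivals = days * p_late_without_buffer  # price avoided

print(minutes_waiting)
print(expected_late_arrivals)
```

With these numbers, avoiding roughly 30 late arrivals costs about 1000 minutes of waiting; whether that trade is worth it depends on how costly a late arrival is, which is exactly the quantity the framework asks you to estimate.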

[Link] The price you pay for arriving to class on time

0 alex_zag_al 24 February 2017 02:11PM
Comment author: Lumifer 02 December 2016 04:22:38PM *  5 points [-]

Well, the discussion of the differences between the hard and the soft sciences is a complicated topic.

But very crudely, the soft sciences have to deal with situations which never exactly repeat, so their theories and laws are always approximate and apply "more or less". In particular, this makes it hard to falsify theories, which leads to a proliferation of just plain bullshit and idiosyncratic ideas that cannot be proven wrong and so persist. Basically, you cannot expect that a social science will reliably converge on truth the way a hard science will.

So if you pick, say, an undergraduate textbook in economics, what it tells you will depend on which particular textbook you pick. Two people who read two different econ textbooks might well end up with very different ideas of how economics works, and there is no guarantee that either of them will explain the real-world data well.

Comment author: alex_zag_al 23 February 2017 07:39:32AM 0 points [-]

the soft sciences have to deal with situations which never exactly repeat

This is also true of evolutionary biology--I think it's not widely recognized that evolutionary biology is like the soft sciences in this way.

Comment author: alex_zag_al 23 February 2017 06:09:55AM 0 points [-]

iii. Emphasize all rationality use cases evenly. Cause all people to be evenly targeted by CFAR workshops.

We can’t do this one either; we are too small to pursue all opportunities without horrible dilution and failure to capitalize on the most useful opportunities.

This surprised me, since I think of rationality as the general principles of truth-finding.

What have you found about the degree to which rationality instruction needs to be tailored to a use-case?

Comment author: alex_zag_al 23 February 2017 06:09:11AM *  0 points [-]

Several of these had the form “I, too, think that AI safety is incredibly important — and that is why I think CFAR should remain cause-neutral, so it can bring in more varied participants who might be made wary by an explicit focus on AI.”

I don't think that AI safety is important, which I guess makes me one of the "more varied participants made wary by an explicit focus on AI." I'm happy you're being explicit about your goals, but I don't like them.

Comment author: Vaniver 15 December 2016 12:40:06AM *  5 points [-]

There is a third component to actually knowing a lot about AI, which is having succeeded in having learnt about AI, which is to say, having "won" in a certain sense. If rationality is winning, or knowing how to use raw intelligence effectively, a baseline level of rationality is indicated.

Have you heard the anecdote about Kahneman and the planning fallacy? It's from Thinking Fast and Slow, and deals with him creating curriculum to teach judgment and decision-making in high school. He puts together a team of experts, they meet for a year, and have a solid outline. They're talking about estimating uncertain quantities, and he gets the bright idea of having everyone estimate how long it will take them until they submit a finished draft to the Ministry of Education. He solicits everyone's probabilities using one of the approved-by-research methods they're including in the curriculum, and their guesses are tightly centered around two years (ranging from about 1.5 to 2.5).

Then he decides to employ the outside view, and asks the curriculum expert how long it took similar teams in the past. That expert realizes that, in the past, about 40% of similar teams gave up and never finished, and of those that did finish, none took less than seven years. (Kahneman tries to rescue them by asking about skills and resources, and it turns out that this team is below average, but not by much.)

We should have quit that day. None of us was willing to invest six more years of work in a project with a 40% chance of failure. Although we must have sensed that persevering was not reasonable, the warning did not provide an immediately compelling reason to quit. After a few minutes of desultory debate, we gathered ourselves together and carried on as if nothing had happened. The book was eventually completed eight(!) years later.

It seems to me that if the person who discovered the planning fallacy is unable to make basic use of the planning fallacy when plotting out projects, a general sense that experts know what they're doing and are able to use their symbolic manipulation skills on their actual lives is dangerously misplaced. If it is a bad idea to publish things about decision theory in academia (because the costs outweigh the benefits, say) then it will only be bad decision-makers who publish on decision theory!

Comment author: alex_zag_al 23 February 2017 06:07:20AM 1 point [-]

Wow, I've read the story, but I didn't quite realize the irony of it being a textbook (not a curriculum, a textbook, right?) about judgment and decision making.

Comment author: alex_zag_al 01 May 2016 01:45:41AM 0 points [-]

The alternative I would propose, in this particular case, is to debate the general rule of banning physics experiments because you cannot be absolutely certain of the arguments that say they are safe.

Giving up on debating the probability of a particular proposition, and shifting to debating the merits of a particular rule, is, I feel, one of the ideas behind frequentist statistics. Like, I'm not going to say anything about whether the true mean is in my confidence interval in this particular case. But note that using this confidence interval formula works pretty well on average.
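As a quick illustration of the "works pretty well on average" point, here is a simulation of the coverage of the standard normal-approximation interval for a mean. The sample sizes and distribution parameters are arbitrary choices for the sketch:

```python
import random
import statistics

def mean_ci(sample, z=1.96):
    """The usual normal-approximation 95% confidence interval
    for a mean: estimate +/- z * standard error."""
    m = statistics.mean(sample)
    se = statistics.stdev(sample) / len(sample) ** 0.5
    return m - z * se, m + z * se

rng = random.Random(42)
true_mean = 5.0
trials = 1000
covered = 0
for _ in range(trials):
    sample = [rng.gauss(true_mean, 2.0) for _ in range(50)]
    lo, hi = mean_ci(sample)
    # In any single trial we say nothing about whether the true mean
    # is inside; we only track the long-run frequency.
    if lo <= true_mean <= hi:
        covered += 1

print(covered / trials)  # close to 0.95
```

No single interval gets a probability statement attached to it, but the procedure traps the true mean about 95% of the time, which is exactly the rule-level guarantee.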

Comment author: alex_zag_al 23 February 2016 09:56:36PM 0 points [-]

I don't know about the role of this assumption in AI, which is what you seem to care most about. But I think I can answer about its role in philosophy.

One thing I want from epistemology is a model of ideally rational reasoning, under uncertainty. One way to eliminate a lot of candidates for such a model is to show that they make some kind of obvious mistake. In this case, the mistake is judging something as a good bet when really it is guaranteed to lose money.
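The classic form of "judging something as a good bet when really it is guaranteed to lose money" is a Dutch book against incoherent credences. A minimal sketch, with made-up numbers:

```python
# An agent whose credences violate the probability axioms -- here
# P(rain) = 0.7 and P(no rain) = 0.7, which sum to more than 1 --
# will accept a pair of bets that together lose money no matter what.
#
# Convention: an agent with credence p in an event will pay p for a
# ticket worth 1 if the event occurs and 0 otherwise.

credence_rain = 0.7
credence_no_rain = 0.7  # incoherent: 0.7 + 0.7 > 1

# The bookie sells the agent both tickets.
cost = credence_rain + credence_no_rain  # agent pays 1.4 in total

# Exactly one ticket pays out in each possible world.
profit_if_rain = 1.0 - cost
profit_if_no_rain = 1.0 - cost

print(profit_if_rain, profit_if_no_rain)  # negative in both worlds
```

Any candidate model of ideal reasoning that licenses credences like these is ruled out by exactly this kind of argument: the agent judges each ticket a fair purchase, yet the package is a sure loss.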
