[SEQ RERUN] Correspondence Bias

5 Tyrrell_McAllister 15 June 2011 09:26PM

Today's post, Correspondence Bias, was originally published on 25 June 2007. A summary (taken from the LW wiki):

Correspondence Bias is a tendency to attribute to a person a disposition to behave in a particular way, based on observing an episode in which that person behaves in that way. The data set that gets considered consists only of the observed episode, while the target model is of the person's behavior in general, in many possible episodes, in many different possible contexts that may influence the person's behavior.

Discuss the post here (rather than in the comments to the original post).

This post is part of the Rerunning the Sequences series, in which we're going through Eliezer Yudkowsky's old posts in order, so that people who are interested can (re-)read and discuss them. The previous post was Risk-Free Bonds Aren't, and you can use the sequence_reruns tag or rss feed to follow the rest of the series.

Sequence reruns are a community-driven effort. You can participate by re-reading the sequence post, discussing it here, posting the next day's sequence reruns post, or summarizing forthcoming articles on the wiki. Go here for more details, or to have meta discussions about the Rerunning the Sequences series.

[SEQ RERUN] Risk-Free Bonds Aren't

4 Tyrrell_McAllister 14 June 2011 04:14PM

Today's post, Risk-Free Bonds Aren't, was originally published on 22 June 2007. A summary (taken from the LW wiki):

There are no risk-free investments. Even U.S. Treasury bills would fail under a number of plausible "black swan" scenarios. Nassim Taleb's own investment strategy doesn't seem to take sufficient account of such possibilities. Risk management is always a good idea.

Discuss the post here (rather than in the comments to the original post).

This post is part of the Rerunning the Sequences series, in which we're going through Eliezer Yudkowsky's old posts in order, so that people who are interested can (re-)read and discuss them. The previous post was One Life Against the World (http://lesswrong.com/lw/hx/one_life_against_the_world/), and you can use the sequence_reruns tag or rss feed to follow the rest of the series.

Sequence reruns are a community-driven effort. You can participate by re-reading the sequence post, discussing it here, posting the next day's sequence reruns post, or summarizing forthcoming articles on the wiki. Go here for more details, or to have meta discussions about the Rerunning the Sequences series.

[SEQ RERUN] One Life Against the World

6 Tyrrell_McAllister 12 June 2011 01:38AM

Today's post, One Life Against the World, was originally published on 18 May 2007. A summary (taken from the LW wiki):

Saving one life and saving the whole world provide the same warm glow. But, however valuable a life is, the whole world is billions of times as valuable. The duty to save lives doesn't stop after the first saved life. Choosing to save one life when you could have saved two is as bad as murder.

Discuss the post here (rather than in the comments to the original post).

This post is part of the Rerunning the Sequences series, where we'll be going through Eliezer Yudkowsky's old posts in order so that people who are interested can (re-)read and discuss them. The previous post was Scope Insensitivity, and you can use the sequence_reruns tag or rss feed to follow the rest of the series.

Sequence reruns are a community-driven effort. You can participate by re-reading the sequence post, discussing it here, posting the next day's sequence reruns post, or summarizing forthcoming articles on the wiki. Go here for more details, or to have meta discussions about the Rerunning the Sequences series.

[SEQ RERUN] Scope Insensitivity

6 Tyrrell_McAllister 11 June 2011 01:08AM

Today's post, Scope Insensitivity, was originally published on 14 May 2007. A summary (taken from the LW wiki):

The human brain can't represent large quantities: an environmental measure that will save 200,000 birds doesn't conjure anywhere near a hundred times the emotional impact and willingness-to-pay of a measure that would save 2,000 birds.

Discuss the post here (rather than in the comments to the original post).

This post is part of the Rerunning the Sequences series, where we'll be going through Eliezer Yudkowsky's old posts in order so that people who are interested can (re-)read and discuss them. The previous post was Third Alternatives for Afterlife-ism, and you can use the sequence_reruns tag or rss feed to follow the rest of the series.

Sequence reruns are a community-driven effort. You can participate by re-reading the sequence post, discussing it here, posting the next day's sequence reruns post, or summarizing forthcoming articles on the wiki. Go here for more details, or to have meta discussions about the Rerunning the Sequences series.

[SEQ RERUN] Priors as Mathematical Objects

3 Tyrrell_McAllister 29 May 2011 03:36PM

Today's post, Priors as Mathematical Objects, was originally published on 12 April 2007. A summary (taken from the LW wiki):

As a mathematical object, a Bayesian "prior" is a probability distribution over sequences of observations. That is, the prior assigns a probability to every possible sequence of observations. In principle, you could then use the prior to compute the probability of any event by summing the probabilities of all observation-sequences in which that event occurs. Formally, the prior is just a giant look-up table. However, an actual Bayesian reasoner wouldn't literally implement a giant look-up table. Nonetheless, the formal definition of a prior is sometimes convenient. For example, if you are uncertain about which distribution to use, you can just use a weighted sum of distributions, which directly gives another distribution.
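The "giant look-up table" view can be made concrete with a minimal sketch (a toy model with hypothetical names, not anything from the post): a prior over all binary observation sequences of a fixed length, an event probability computed by summing over the sequences in which the event occurs, and a weighted sum of two priors that is itself a prior.

```python
# Sketch, assuming a toy setting: binary observations, fixed sequence length.
from itertools import product

def coin_prior(p, length):
    """Prior over all length-n binary sequences, assuming i.i.d. coin
    flips with heads-probability p (hypothetical toy model)."""
    return {seq: (p ** sum(seq)) * ((1 - p) ** (length - sum(seq)))
            for seq in product([0, 1], repeat=length)}

def event_probability(prior, event):
    """P(event) = sum of probabilities of all sequences where it occurs."""
    return sum(prob for seq, prob in prior.items() if event(seq))

def mixture(priors_with_weights):
    """A weighted sum of priors is directly another prior (look-up table)."""
    mixed = {}
    for prior, weight in priors_with_weights:
        for seq, prob in prior.items():
            mixed[seq] = mixed.get(seq, 0.0) + weight * prob
    return mixed

# Uncertain which distribution to use? Mix a fair coin and a biased coin.
fair = coin_prior(0.5, 3)
biased = coin_prior(0.9, 3)
mixed = mixture([(fair, 0.5), (biased, 0.5)])

# Probability of the event "first observation is heads" under the mixture:
p_first_heads = event_probability(mixed, lambda seq: seq[0] == 1)
```

As the summary notes, no actual reasoner would implement this table (it grows exponentially with sequence length), but the formal object makes facts like "a mixture of priors is a prior" immediate: `mixed` sums to 1 just as `fair` and `biased` do.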

Discuss the post here (rather than in the comments to the original post).

This post is part of the Rerunning the Sequences series, where we'll be going through Eliezer Yudkowsky's old posts in order so that people who are interested can (re-)read and discuss them. The previous post was Marginally Zero-Sum Efforts, and you can use the sequence_reruns tag or rss feed to follow the rest of the series.

Sequence reruns are a community-driven effort. You can participate by re-reading the sequence post, discussing it here, posting the next day's sequence reruns post, or summarizing forthcoming articles on the wiki. Go here for more details, or to have meta discussions about the Rerunning the Sequences series.

[SEQ RERUN] Superstimuli and the Collapse of Western Civilization

7 Tyrrell_McAllister 12 May 2011 04:42PM

Today's post, Superstimuli and the Collapse of Western Civilization, was originally published on 16 March 2007. A summary (taken from the LW wiki):

As a side effect of evolution, superstimuli exist; and as a result of market economics, they are getting worse and should continue to do so.

(alternate summary:)

At least three people have died from playing online games non-stop. How can a game be so enticing that, after 57 straight hours of play, a person would rather spend the next hour playing than sleeping or eating? A candy bar is a superstimulus: it matches, to an exaggerated degree, the sugar-and-fat cues that marked healthy food in the environment of evolutionary adaptedness (EEA). If people enjoy such things, the market will respond by providing as much of them as possible, even if other considerations make that undesirable.

Discuss the post here (rather than in the comments to the original post).

This post is part of the Rerunning the Sequences series, where we'll be going through Eliezer Yudkowsky's old posts in order so that people who are interested can (re-)read and discuss them. The previous post was Blue or Green on Regulation?, and you can use the sequence_reruns tag or rss feed to follow the rest of the series.

Sequence reruns are a community-driven effort. You can participate by re-reading the sequence post, discussing it here, posting the next day's sequence reruns post, or summarizing forthcoming articles on the wiki. Go here for more details, or to have meta discussions about the Rerunning the Sequences series.

[SEQ RERUN] Blue or Green on Regulation?

7 Tyrrell_McAllister 10 May 2011 06:31PM

Today's post, Blue or Green on Regulation?, was originally published on 15 March 2007. A summary (taken from the LW wiki):

Both sides are often right in describing the terrible things that will happen if we take the other side's advice; the universe is "unfair", terrible things are going to happen regardless of what we do, and it's our job to trade off for the least bad outcome.

(alternate summary:)

In a rationalist community, it should not be necessary to use the usual circumlocutions when making empirical predictions. We should know that people treat arguments as soldiers and recognize that behavior in ourselves. Looking at the actual truth values, you come to see that much of what the Greens said about the downside of the Blue policy was true: left to the mercy of the free market, many people would be crushed by powers far beyond their understanding, nor would they deserve it. And most of what the Blues said about the downside of the Green policy was also true: regulators were fallible humans with poor incentives, whacking on delicately balanced forces with a sledgehammer.

(alternate summary:)

Burch's law isn't a soldier-argument for regulation; estimating the appropriate level of regulation in each particular case is a superior third option.

Discuss the post here (rather than in the comments to the original post).

This post is part of the Rerunning the Sequences series, where we'll be going through Eliezer Yudkowsky's old posts in order so that people who are interested can (re-)read and discuss them. The previous post was The Scales of Justice, the Notebook of Rationality, and you can use the sequence_reruns tag or rss feed to follow the rest of the series.

Sequence reruns are a community-driven effort. You can participate by re-reading the sequence post, discussing it here, posting the next day's sequence reruns post, or summarizing forthcoming articles on the wiki. Go here for more details, or to have meta discussions about the Rerunning the Sequences series.

[SEQ RERUN] The Scales of Justice, the Notebook of Rationality

7 Tyrrell_McAllister 08 May 2011 05:43PM

Today's post, The Scales of Justice, the Notebook of Rationality, was originally published on 13 March 2007. Two summaries (taken from the LW wiki):

People have an irrational tendency to simplify their assessment of things into how good or bad they are without considering that the things in question may have many distinct and unrelated attributes.

(alternate summary:)

In non-binary answer spaces, you can't add up pro and con arguments along one dimension without risk of getting important factual questions wrong.

Discuss the post here (rather than in the comments to the original post).

This post is part of the Rerunning the Sequences series, where we'll be going through Eliezer Yudkowsky's old posts in order so that people who are interested can (re-)read and discuss them. The previous post was Burch's Law, and you can use the sequence_reruns tag or rss feed to follow the rest of the series.

Sequence reruns are a community-driven effort. You can participate by re-reading the sequence post, discussing it here, posting the next day's sequence reruns post, or summarizing forthcoming articles on the wiki. Go here for more details, or to have meta discussions about the Rerunning the Sequences series.

[SEQ RERUN] Burch's Law

5 Tyrrell_McAllister 06 May 2011 09:15PM

Today's post, Burch's Law, was originally published on 08 March 2007. A summary (taken from the LW wiki):

Just because your ethics require an action doesn't mean the universe will exempt you from the consequences. Cars kill an estimated 1.2 million people per year worldwide, roughly 2% of the annual planetary death rate. Not everyone who dies in an automobile accident chose to drive a car: the tally of casualties includes pedestrians, and minor children who had to be pushed screaming into the car on the way to school. And yet we still manufacture automobiles, because, well, we're in a hurry. The point is that the consequences don't change no matter how good the ethical justification sounds.

Discuss the post here (rather than in the comments to the original post).

This post is part of the Rerunning the Sequences series, where we'll be going through Eliezer Yudkowsky's old posts in order so that people who are interested can (re-)read and discuss them. The previous post was Policy Debates Should Not Appear One-Sided, and you can use the sequence_reruns tag or rss feed to follow the rest of the series.

Sequence reruns are a community-driven effort. You can participate by re-reading the sequence post, discussing it here, posting the next day's sequence reruns post, or summarizing forthcoming articles on the wiki. Go here for more details, or to have meta discussions about the Rerunning the Sequences series.

[SEQ RERUN] Policy Debates Should Not Appear One-Sided

4 Tyrrell_McAllister 03 May 2011 06:13PM

Today's post, Policy Debates Should Not Appear One-Sided, was originally published on 03 March 2007. A summary (taken from the LW wiki):

Robin Hanson proposed a "banned products shop" that would sell things the government would ordinarily ban. Eliezer responded that this would probably cause at least one stupid and innocent person to die, and was surprised when people inferred from this remark that he opposed Robin's idea. Policy questions concern complex actions with many consequences, so they should only rarely appear one-sided to an objective observer. A person's intelligence is largely a product of circumstances they cannot control. Eliezer argues for cost-benefit analysis instead of the traditional libertarian tough-mindedness under which people who do stupid things deserve the consequences.

Discuss the post here (rather than in the comments to the original post).

This post is part of the Rerunning the Sequences series, where we'll be going through Eliezer Yudkowsky's old posts in order so that people who are interested can (re-)read and discuss them. The previous post was You Are Not Hiring the Top 1%, and you can use the sequence_reruns tag or rss feed to follow the rest of the series.

Sequence reruns are a community-driven effort. You can participate by re-reading the sequence post, discussing it here, posting the next day's sequence reruns post, or summarizing forthcoming articles on the wiki. Go here for more details, or to have meta discussions about the Rerunning the Sequences series.
