
Comment author: pure-awesome 04 May 2013 11:21:48PM 2 points [-]

The problem is not that they come up with a hypothesis too early; it's that they stop too early, without testing examples that are not supposed to work. In most cases people are given as many opportunities to test as they'd like, yet they are confident in their answer after testing only one or two cases (all of which came up positive).

The trick is to come up with one or more hypotheses as soon as you can (perhaps without announcing them), to test cases that both do and don't fit them, and to be prepared to change your hypothesis if you are proven wrong.

Comment author: AndyC 09 November 2017 10:53:18AM *  0 points [-]

If it requires a round-trip of human speech through a professor (and thus commandeering the attention of the entire class), then you can hardly say they are given as many opportunities to test as they'd like. A person of functioning social intelligence certainly has no more than 20 such consecutive round-trips available, and, less conservatively, even 4 might be pushing it for many.

Give them a computer program to interact with and *then* you can say they have as many opportunities to test as they'd like.

Comment author: phob 26 July 2010 10:56:00PM 3 points [-]

Utilitarianism to the rescue, then.

Comment author: AndyC 23 April 2014 01:16:27AM *  2 points [-]

Utilitarianism is unlikely to rescue anyone from the conundrum (unless it's applied in the most mindless way -- in which case, you might as well not think about it).

There's an obvious social benefit to being secure against being randomly sacrificed for the benefit of others. You're not going to be able to quantify the utility of providing everyone in society with this security as a general social principle, and weigh the benefit of consistency on that point against the benefit of violating the principle in any given instance, any more easily than you could have decided the issue without any attempt at quantification.

Comment author: diegocaleiro 04 February 2013 04:32:14AM 2 points [-]

It is controversial whether one must signal high status from the White House.

Comment author: AndyC 22 April 2014 05:37:08PM 0 points [-]

US Presidents routinely try to signal lower class than they actually have.

Comment author: knb 11 July 2012 03:06:17PM *  10 points [-]

Using the term "over-simplified" was my attempt at generosity. As presently stated, your claim is entirely wrong. Intelligence is the single best predictor of job performance for all but the most narrowly focused manual tasks; see, for example, Ree & Earles, Current Directions in Psychological Science, Vol. 1, No. 3 (June 1992), pp. 86-89.

The strong claim you made in your original comment was entirely false, and I get the impression you were just speculating wildly about something you don't actually know much about.

Comment author: AndyC 22 April 2014 05:30:05PM *  2 points [-]

It's important to note that employers are not seeking to maximize employee performance. They're seeking to maximize the difference between the value provided by the employee and the wage paid to the employee.

In response to Circular Altruism
Comment author: mitchell_porter2 23 January 2008 10:56:35AM 3 points [-]

As was pointed out last time, if you insist that no quantity of dust-specks-in-individual-eyes is comparable to one instance of torture, then what is your boundary case? What about 'half-torture', 'quarter-torture', 'millionth-torture'? Once you posit a qualitative distinction between the badness of different classes of experience, such that no quantity of experiences in one class can possibly be worse than a single experience in the other class, then you have posited the existence of a sharp dividing line on what appears to be a continuum of possible individual experiences.

But if we adopt the converse position, and assume that all experiences are commensurable and additive aggregation of utility makes sense without exception - then we are saying that there is an exact quantity which measures precisely how much worse an instance of torture is than an instance of eye irritation. This is obscured by the original example, in which an inconceivably large number is employed to make the point that if you accept additive aggregation of utilities as a universal principle, then there must come a point when the specks are worse than the torture. But there must be a boundary case here as well: some number N such that, if there are more than N specks-in-eyes, it's worse than the torture, but if there are N or fewer, the torture wins out.

Can any advocates of additive aggregation of utility defend a particular value for N? Because if not, you're in the same boat as the incommensurabilists, unable to justify their magic dividing line.
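To make the aggregationist's commitment concrete, here is a minimal sketch. The per-speck and per-torture disutilities are invented for illustration, since nothing in the thread fixes them; the point is only that once any such numbers are chosen, a specific crossover N falls out of them.

```python
# Minimal sketch of what additive aggregation commits you to: assign any
# fixed disutilities (these values are assumed, purely for illustration),
# and a specific crossover count N falls out of them.
SPECK_DISUTILITY = 1          # assumed unit of disutility per speck
TORTURE_DISUTILITY = 10**15   # assumed: one torture = 10^15 specks

def specks_worse_than_torture(n_specks: int) -> bool:
    """True when n aggregated specks outweigh one instance of torture."""
    return n_specks * SPECK_DISUTILITY > TORTURE_DISUTILITY

N = TORTURE_DISUTILITY // SPECK_DISUTILITY
print(specks_worse_than_torture(N))      # False: at N specks, torture wins
print(specks_worse_than_torture(N + 1))  # True: one more speck flips it
```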

Comment author: AndyC 22 April 2014 12:50:56PM -2 points [-]

I'm not unable to justify the "magic dividing line."

The world with the torture gives 3^^^3 people the opportunity to lead a full, thriving life.

The world with the specks gives 3^^^3+1 people the opportunity to lead a full, thriving life.

The second one is better.

In response to comment by Yvain on Circular Altruism
Comment author: drnickbone 21 November 2012 08:33:52AM *  1 point [-]

(the theory would also imply that an infinite amount of irritation is not as bad as a tiny amount of pain, which doesn't seem to be true)

Hmm, not sure. It seems quite plausible to me that for any n, an instance of real harm to one person is worse than n instances of completely harmless irritation to n people. Especially if we consider a bounded utility function: the n instances of irritation have to flatten out at some finite level of disutility, and there is no a priori reason to rule out torture of one person having a worse disutility than that asymptote.

Having said all that, I'm not sure I buy into the concept of completely harmless irritation. I doubt we'd perceive a dust speck as a disutility at all except for the fact that it has a small probability of causing big harm (loss of life or offspring) somewhere down the line. A difficulty with the whole problem is the stipulation that the dust specks do nothing except cause slight irritation... no major harm results to any individual. However, throwing a dust speck in someone's eye would in practice carry a very small probability of very real harm: distraction while operating dangerous machinery (driving, flying, etc.), an eye infection leading to months of agony and loss of sight, a slight shock causing a stumble and broken limbs, or a bigger shock and a heart attack. Even the very mild irritation may be enough to send an irritable person "over the edge" into punching a neighbour, or a gun rampage, or a borderline suicidal person into suicide. All these are spectacularly unlikely for each individual, but if you multiply by 3^^^3 people you still get on the order of 3^^^3 instances of major harm.
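A back-of-the-envelope sketch of that last multiplication. The per-speck harm probability is assumed, and a merely astronomical stand-in replaces 3^^^3, which no machine can represent:

```python
# Sketch of the expected-harm arithmetic above: a spectacularly small
# per-person probability of real harm still yields an enormous expected
# number of harms once multiplied across enough people. The probability
# is assumed, and 10**30 is a vanishingly small stand-in for 3^^^3.
P_MAJOR_HARM = 10**-12   # assumed chance one speck leads to major harm
POPULATION = 10**30      # stand-in population (3^^^3 is unrepresentable)

expected_major_harms = P_MAJOR_HARM * POPULATION
print(f"{expected_major_harms:.1e}")  # 1.0e+18 expected major harms
```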

Comment author: AndyC 22 April 2014 12:34:54PM *  2 points [-]

With that many instances, it's even highly likely that at least one of the specks in the eye will offer a rare opportunity for some poor prisoner to escape his captors, who had intended to subject him to 50 years of torture.

In response to comment by Hul-Gil on Circular Altruism
Comment author: Yvain 20 November 2012 10:43:48PM *  6 points [-]

Thank you for trying to address this problem, as it's important and still bothers me.

But I don't find your idea of two different scales convincing. Consider electric shocks. We start with an imperceptibly low voltage and turn up the dial until the first level at which the victim is able to perceive slight discomfort (let's say one volt). Suppose we survey people and find that a one volt shock is about as unpleasant as a dust speck in the eye, and most people are indifferent between them.

Then we turn the dial up further, and by some level, let's say two hundred volts, the victim is in excruciating pain. We can survey people and find that a two hundred volt shock is equivalent to whatever kind of torture was being used in the original problem.

So one volt is equivalent to a dust speck (and so on the "trivial scale"), but two hundred volts is equivalent to torture (and so on the "nontrivial scale"). But this implies either that triviality exists only in degree (which ruins the entire argument, since enough triviality aggregated equals nontriviality) or that there must be a sharp discontinuity somewhere (e.g., a 21.32-volt shock is trivial, but a 21.33-volt shock is nontrivial). But the latter is absurd. Therefore there should not be separate trivial and nontrivial utility scales.

In response to comment by Yvain on Circular Altruism
Comment author: AndyC 22 April 2014 12:25:49PM *  3 points [-]

First of all, you might benefit from looking up the beard fallacy (also known as the continuum fallacy).

To address the issue at hand directly, though:

Of course there are sharp discontinuities. Not just one sharp discontinuity, but countless. However, there is no particular voltage at which a discontinuity occurs. Rather, increasing the voltage increases the probability of a discontinuity (see the simulation sketch at the end of this comment).

I will list a few discontinuities established by torture.

  1. Nightmares. As the electric-shock experience becomes more severe, the probability that it will result in a nightmare increases. After 50 years of high voltage, hundreds or even thousands of such nightmares are likely to have occurred. However, 1 second of 1V is unlikely to result in even a single nightmare. The first nightmare is a sharp discontinuity, but furthermore, each additional nightmare is another sharp discontinuity.

  2. Stress responses to associational triggers. The first such stress response is a sharp discontinuity, and so is every one that follows in your life: each is its own discontinuity. So, if you will experience 10,500 stress responses, that is 10,500 discontinuities. It's impossible to say beforehand what voltage or how many seconds will make the difference between 10,499 and 10,500, but in theory a probability could be assigned. I think there are already actual studies that have measured the increased stress response after electric shock, over short periods.

  3. Flashbacks. Again, the first flashback is a discontinuity; as is every other flashback. Every time you start crying during a flashback is another discontinuity.

  4. Social problems. The first relationship that fails (e.g., first woman that leaves you) because of the social ramifications of damage to your psyche is a discontinuity. Every time you flee from a social event: another discontinuity. Every fight that you have with your parents as a result of your torture (and the fact that you have become unrecognizable to them) is a discontinuity. Every time you fail to make eye contact is a discontinuity. If not for the torture, you would have made the eye contact, and every failure represents a forked path in your entire future social life.

I could go on, but you can look up the symptoms of PTSD yourself. I hope, however, that I have impressed upon you the fact that life constitutes a series of discrete events, not a continuous plane of quantifiable and summable utility lines. It's "sharp discontinuities" all the way down to elementary particles. Be careful with mathematical models involving a continuum.

Please note that flashbacks, nightmares, stress responses to triggers, and social problems do not result from specks of dust in the eye.
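A toy simulation of the model sketched above, in which a continuously varying voltage controls only the probability of discrete harm events. The dose-response curve here is invented purely for illustration, not taken from any study:

```python
import random

# Toy model of "continuous cause, discrete effects": voltage varies
# smoothly, but what it changes is the probability of discrete events
# (here, nightmares). The dose-response curve is invented, not measured.

def p_nightmare_per_night(volts: float) -> float:
    """Assumed probability that a given night produces a nightmare."""
    return min(1.0, volts / 10_000)

def count_nightmares(volts: float, nights: int, seed: int = 0) -> int:
    """Count discrete nightmare events over many nights."""
    rng = random.Random(seed)
    return sum(rng.random() < p_nightmare_per_night(volts)
               for _ in range(nights))

# ~50 years of nights: 1 V leaves a handful of nightmares at most,
# 200 V leaves hundreds -- and each one is its own discontinuity.
print(count_nightmares(1.0, 18_250))    # roughly 0-4
print(count_nightmares(200.0, 18_250))  # roughly 365 (2% of nights)
```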

In response to comment by Hul-Gil on Circular Altruism
Comment author: OnTheOtherHandle 20 August 2012 12:57:02AM *  -1 points [-]

Another thing that seems to be a factor, at least for me, is that there's a term in my utility function for "fairness," which usually translates to something roughly similar to "sharing of burdens." (I also have a term for "freedom," which is in conflict with fairness but is on the same scale and can be traded off against it.)

Why wouldn't this be a situation in which "the complexity of human value" comes into play? Why is it wrong to think something along the lines of, "I would be willing to make everyone a tiny bit worse off so that no one person has to suffer obscenely"? It's the rationale behind taxation, and while it's up for debate, many Less Wrongers support moderate taxation if it would help a few people a lot while hurting a bunch of people a little bit.

Think about it: the exact number of dollars taken from people in taxes doesn't go directly toward feeding the hungry. Some of it gets eaten up in bureaucratic inefficiencies, some of it goes to bribery and embezzlement, some of it goes to the military. This means that if you taxed 1,000,000 well-off people $1 each, but only ended up giving 100 hungry people $1,000 each to stave off a painful death from starvation, we as utilitarians would be absolutely, 100% obligated to oppose this taxation system, not because it's inefficient, but because doing nothing would be better. There is to be no room for debate; it's $100,000 - $1,000,000 = net loss; let the 100 starving peasants die.
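For concreteness, here is the dollar-summed bookkeeping that this reading of utilitarianism is being saddled with (figures from the example above; treating dollars as interchangeable with utility is exactly the assumption at issue):

```python
# The naive dollar-summed calculation described above, which treats a
# dollar of tax and a dollar of famine relief as identical utility:
taxed = 1_000_000 * 1     # $1 each from a million well-off people
delivered = 100 * 1_000   # $1,000 each to 100 starving people
net = delivered - taxed
print(net)                # -900000: a "net loss", so oppose the tax
```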

Note that you may be a libertarian and oppose taxation on other grounds, but most libertarians wouldn't say you are literally doing morality wrong if you think it's better to take $1 each from a million people, even if only $100,000 of it gets used to help the poor.

I could easily be finding ways to rationalize my own faulty intuitions - but I managed to change my mind about Newcomb's problem and about the first example given in the above post despite powerful initial intuitions, and I managed to work the latter out for myself. So I think, if I'm expected to change my mind here, I'm justified in holding out for an explanation or formulation that clicks with me.

Comment author: AndyC 22 April 2014 12:07:25PM *  2 points [-]

That makes no sense. Just because one thing costs $1 and another costs $1,000 does not mean that the first thing happening 1,001 times is better than the second happening once.

Preferences logically precede prices. If they didn't, nobody would be able to decide what they were willing to spend on anything in the first place. If utilitarianism requires that you decide the value of things based on their prices, then utilitarians are conformists without values of their own, who derive all of their value judgments from non-utilitarian market participants who actually have values.

(Besides, money that is spent on "overhead" does not magically disappear from the economy. Someone is still being paid to do something with that money, who in turn buys things with the money, and so on. And even if the money does disappear -- say, dollar bills are burnt in a furnace -- it still would not represent a loss of productive capacity in the economy. Taxing money and then completely destroying the money (shrinking the money supply) is sound monetary policy, and it occurs on a regular (cyclical) basis. Your whole argument here is a complete non-starter.)

Comment author: phob 04 January 2011 05:53:47PM 9 points [-]

Would you pay one cent to prevent one googolplex of people from having a momentary eye irritation?

Torture can be put on a money scale as well: many, many countries use torture in war, but we don't spend huge amounts of money publicizing and shaming these people (which would reduce the amount of torture in the world).

In order to maximize the benefit of spending money, you must weigh the sacred against the unsacred.

In response to comment by phob on Circular Altruism
Comment author: AndyC 22 April 2014 11:52:29AM 2 points [-]

There's an interesting paper on microtransactions and how human rationality can't really handle decisions about amounts below a certain threshold: the cognitive effort of making the decision outweighs any possible benefit of making it.

How much time would you spend making a decision about how to spend a penny? You can't make a decision in zero time; it's not physically possible. Rationally, you have to round off the penny, and the speck of dust.
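A hedged sketch of that decision-cost argument; both constants are assumptions chosen purely for illustration:

```python
# Sketch of the "round off the penny" point: if merely making a decision
# costs more (time valued at a wage) than the amount at stake, the
# rational move is not to deliberate at all. Both constants are assumed.
HOURLY_VALUE_OF_TIME = 20.0   # assumed $/hour
MIN_DECISION_SECONDS = 5.0    # assumed floor: no real decision is faster

decision_cost = HOURLY_VALUE_OF_TIME * MIN_DECISION_SECONDS / 3600
stake = 0.01                  # one penny
print(f"{decision_cost:.3f}") # 0.028: deciding already costs ~3 cents
print("deliberate" if stake > decision_cost else "round it off")
```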

Comment author: Kingreaper 22 July 2010 12:05:39AM 13 points [-]

A dust speck takes a finite time, not an instant. Unless I'm misunderstanding you, this makes them lines, not points.

Comment author: AndyC 22 April 2014 11:36:40AM *  0 points [-]

You're misunderstanding. It has nothing to do with time -- it's not a timeline. It means the dust motes are infinitesimal, while the torture is finite. A finite sum of infinitesimals is always infinitesimal.
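Read in nonstandard-analysis terms (an assumed formalization; the comment itself fixes no notation), the claim goes through like this:

```latex
% Assumed nonstandard-analysis reading of the claim above:
% epsilon is infinitesimal iff |epsilon| < 1/m for every standard m.
\[
  \text{Let } \varepsilon \text{ be infinitesimal and } n
  \text{ a standard finite integer.}
\]
\[
  \text{For any standard } k, \text{ take } m = nk:\quad
  |n\varepsilon| < \frac{n}{nk} = \frac{1}{k},
\]
\[
  \text{so } n\varepsilon \text{ is itself infinitesimal, and in particular }
  n\varepsilon < T \text{ for any non-infinitesimal torture disutility } T > 0.
\]
```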

Not that you really need to use a math analogy here. The point is just that there is a qualitative difference between specks of dust and torture. They're incommensurable. You cannot divide torture by specks of dust, because neither one is a number to start with.
