SimonJester23


I'm going to agree with the observation that "make food production more efficient by making only one type of food" isn't a likely winner for restaurants.

If you're really trying to optimize for the economic efficiency with which food is produced, you don't make hot fresh-cooked food in the first place; you make food on assembly lines and package it. At that point the incremental cost of using preservation techniques and selling it on supermarket shelves is minimal, and all you're sacrificing is flavor and the healthfulness of the food... and people who are trying to optimize for 'cheapness' in their food tend not to care about that.

This is not a likable conclusion, perhaps. But it's definitely the one supported by the evidence of what market economies with plenty of access to information for all parties actually DO.

Now... yes, in an imaginary world where handwavium drone-robots make it possible to deliver anything you want for free, logistical implausibilities notwithstanding (i.e. the world of Yudkowsky's "dath ilan"), a situation where you order three different foods from three different vendors who each specialize in that exact food might work.*

In a world where you have to go TO the location of your food, or where there is ANY significant extra cost associated with making three smaller transactions over one big one, it's a non-starter.


*Although even then you still need room for customization: a pizza place that literally refuses to make pizzas with more than one topping combination will usually lose out to a pizza place that lets you pick your toppings.

Speculatively, people who have never suffered a serious setback at the hands of others may be biased in favor of thinking they are the only ones who exercise control over their own outcomes. That might well explain why they self-report as happier: they're less likely to be inconvenienced!

It sounds like a virtually impossible task to disentangle internal and external factors to see which plays a bigger role, honestly. Any given incident may have causes from both camps, and any given event can easily be interpreted either way. As in: were you turned down at the job interview because the HR guy dislikes candidates wearing blue ties, or because you made a poor fashion decision and wore a blue tie the HR guy didn't like?

The latter attitude of internalizing everything sounds suspiciously like treating everyone else as an NPC...

Does anyone know if chronic narcissists or psychopaths are on average likely to self-report as happier than regular people?

It occurred to me to add something to my previous comments about the idea of harm being nonlinear, or something that we compute in multiple dimensions that are not commensurate.

One is that any deontological system of ethics automatically has at least two dimensions. One for general-purpose "utilons," and one for... call them "red flags." As soon as you accumulate even one red flag you are doing something capital-w Wrong in that system of ethics, regardless of the number of utilons you've accumulated.
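A minimal sketch of that two-dimensional structure, assuming the "red flags" dimension dominates lexicographically (the class name Verdict and all the numbers below are illustrative, not anything from the original discussion):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Verdict:
    """Hypothetical two-dimensional moral score: red flags dominate utilons."""
    utilons: float
    red_flags: int

    def better_than(self, other: "Verdict") -> bool:
        # Lexicographic comparison: fewer red flags always wins;
        # utilons only break ties between equally-flagged options.
        if self.red_flags != other.red_flags:
            return self.red_flags < other.red_flags
        return self.utilons > other.utilons

# The "clever" plan: an astronomical pile of utilons, but it tortures one scapegoat.
clever_plan = Verdict(utilons=1e100, red_flags=1)
mundane_plan = Verdict(utilons=10.0, red_flags=0)

print(mundane_plan.better_than(clever_plan))  # True: one red flag outweighs any utilon total
```

The point of the lexicographic comparison is that no finite pile of utilons can buy off even a single red flag.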

The main argument justifying this is, of course, that you may think you have found a clever way to accumulate 3^^^3 utilons in exchange for a trivial amount of harm (torture ONLY one scapegoat!)... but the overall weighted average of all human moral reasoning suggests that people who think they've done this are usually wrong. Therefore, best to red-flag such methods, because they usually only sound clever.

Obviously, one may need to take this argument with a grain of salt, or 3^^^3 grains of salt. It depends on how strongly you feel bound to honor conclusions drawn by looking at the weighted average of past human decision-making.


The other observation that occurred to me is a separate one. It is about the idea of harm being nonlinear, which, as I noted above, is just plain not enough to invalidate the torture/specks argument by itself, because you can keep thwacking a nonlinear relationship with bigger numbers until it collapses.

Take as a thought-experiment an alternate Earth where, in the year 1000, the population has stabilized at an equilibrium level, and will rise back to that equilibrium level in response to any sudden population decrease. The equilibrium level is assumed to be stable in and of itself.

Imagine aliens arriving and killing 50% of all humans, chosen apparently at random. Then they wait until the population has returned to equilibrium (say, 150 years) and do it again. Then they repeat the process twice more.

The world population circa 1000 was roughly 300 million, so we estimate that this process would kill about 600 million people in total: four cullings of 150 million each.

Now consider, as an alternative, said aliens simply killing everyone, all at once: 300 million dead.
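For concreteness, here is the arithmetic behind those two totals as a small sketch, with the recovery-to-equilibrium assumption built in and the rough population figure from above:

```python
equilibrium_population = 300_000_000  # rough world population circa 1000

# Scenario A: four 50% cullings, with the population assumed to return
# to equilibrium before each new culling.
deaths_repeated_cullings = 4 * (equilibrium_population // 2)  # 600 million

# Scenario B: a single total extermination.
deaths_all_at_once = equilibrium_population  # 300 million

print(deaths_repeated_cullings, deaths_all_at_once)
```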

Which outcome is worse?

If harm is strictly linear, we would expect that one death plus one death is exactly as bad as two deaths. By the same logic, 300 megadeaths is only half as bad as 600 megadeaths, and if we inoculate ourselves against hyperbolic discounting...

Well, the "linear harm" theory smacks into a wall. Because it is very credible to claim that the extinction of the human species is much worse than merely twice as bad as the extinction of exactly half the human species. Many arguments can be presented, and no doubt have been presented on this very site. The first that comes to mind is that human extinction means the loss of all potential future value associated with humans, not just the loss of present value, or even the loss of some portion of the potential future.

We are forced to conclude that there is a "total extinction" term in our calculation of harm, one that rises very rapidly, in an 'inflationary' way, as the destruction wrought upon humanity reaches and passes the level beyond which the species could not recover. The aliens killing all humans except one is not noticeably better than killing all of them, nor is sparing any population smaller than a complete breeding population; but once a breeding population is spared, there is a fairly sudden drop in the total quantity of harm.
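One way to picture that term is a harm measure that is roughly linear in deaths until the survivors fall below a minimum viable breeding population, at which point a large extinction penalty switches on. This is only a toy sketch; the threshold and penalty values below are made up for illustration:

```python
def harm(deaths: int, population: int,
         min_breeding_population: int = 5_000,
         extinction_penalty: float = 1e12) -> float:
    """Toy harm measure: linear in deaths, plus a large extra term
    once too few survivors remain for the species to recover."""
    survivors = population - deaths
    base = float(deaths)  # one "unit" of harm per death
    if survivors < min_breeding_population:
        base += extinction_penalty  # killing all-but-one is ~as bad as killing all
    return base

pop = 300_000_000
# Four separate 50% cullings (population recovers in between) vs. one extermination:
print(4 * harm(pop // 2, pop) < harm(pop, pop))  # True
```

Under a measure like this, the four cullings sum to far less harm than the single total extermination, even though they kill twice as many people, which is exactly the intuition the thought experiment is pointing at.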

Now, again, in itself this does not strictly invalidate the Torture/Specks argument. Assuming that the harm associated with human extinction (or torturing one person) is any finite amount that could conceivably be equalled by adding up a finite number of specks in eyes, then by definition there is some "big enough" number of specks that the aliens would rationally decide to wipe out humanity rather than accept that many specks in that many eyes.

But I can't recall a similar argument for nonlinear harm measurement being presented in any of the comments I've sampled, and it seemed interesting, so I wanted to mention it.

There's the question of linearity, but if you use big enough numbers you can brute-force any nonlinear relationship, as Yudkowsky correctly pointed out some years ago. Take Kindly's statement:

"There is some pair (N,T) such that (N people tortured for T seconds) is worse than (10^100 N people tortured for T-1 seconds), but I don't know the exact values of N and T"

We can imagine a world where this statement is true (probably for a value of T really close to 1). And we can imagine knowing the correct values of N and T in that world. But even then, if a critical condition is met, it will be true that

"For all values of N, and for all T>1, there exists a value of A such that torturing N people for T seconds is better than torturing A*N people for T-1 seconds."

Sure, the value of A may be larger than 10^100... But then, 3^^^3 is already vastly larger than 10^100. And if it weren't big enough we could just throw a bigger number at the problem; there is no upper bound on the size of conceivable real numbers. So if we grant the critical condition in question, as Yudkowsky does/did in the original post...
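To make that concrete under the critical condition discussed below (a single real-valued harm scale): if per-person harm is some function h(T) of torture duration with h(T-1) > 0, total harm for N people is N*h(T), and any multiplier A greater than h(T)/h(T-1) makes the (A*N, T-1) option worse, no matter how steeply nonlinear h is. A sketch with an arbitrarily chosen h, purely for illustration:

```python
import math

def h(seconds: float) -> float:
    """Arbitrary, steeply nonlinear per-person harm from a given duration of torture."""
    return seconds ** 10

def total_harm(people: float, seconds: float) -> float:
    # Critical condition: all harm lives on one real-valued, commensurate scale.
    return people * h(seconds)

N = 1
T = 50 * 365 * 24 * 3600                 # ~half a century, in seconds
A = math.ceil(h(T) / h(T - 1)) + 1       # such an A exists whenever h(T - 1) > 0

print(total_harm(A * N, T - 1) > total_harm(N, T))  # True: a big enough A always flips it
```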

Well, you basically have to concede that "torture" wins the argument, because even if you say that [hugenumber] of dust specks does not equate to a half-century of torture, that is NOT you winning the argument. That is just you trying to bid up the price of half a century of torture.

The critical condition that must be met here is simple, and it is an underlying assumption of Yudkowsky's original post: all forms of suffering and inconvenience are represented by some real-number quantity, in units commensurate with those of every other form of suffering and inconvenience.

In other words, "torture one person rather than allow 3^^^3 dust specks" wins, quite predictably, if and only if the 'pain' component of the utility function is measured in one and only one dimension.

So the question is, basically, do you measure your utility function in terms of a single input variable?

If you do, then either you bury your head in the sand and develop a severe case of scope insensitivity... or you conclude that there has to be some number of dust specks worse than a single lifetime of torture.

If you don't, it raises a large complex of additional questions, but so far as I know, there may well be space to construct coherent, rational systems of ethics in that realm of ideas.