Comment author: [deleted] 26 March 2015 11:31:26PM 0 points [-]

You're right, I don't. And I do not really need it in this case.

What I need is a cost function C(e,n) - e is some event and n is the number of people subjected to said event, i.e. everyone gets their own - where for ε > 0: C(e,n+m) > C(e,n) + ε for some m. I guess we can limit e to "torture for 50 years" and "dust specks" so that this makes sense at all.
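
Making the quantifiers explicit (my reading, not the commenter's; the original leaves the placement of ε ambiguous, which matters for the replies below), the strong version of the requested property is

$$\forall n\;\forall \varepsilon > 0\;\exists m:\; C(e, n+m) > C(e, n) + \varepsilon,$$

which is equivalent to C(e, ·) being unbounded. The weak reading, where ε may depend on n, merely says that C keeps strictly increasing, and is satisfied even by bounded functions.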

The reason why I would want to have such a cost function is because I believe that it should be more than infinitesimally worse for 3^^^^3 people to suffer than for 3^^^3 people to suffer. I don't think there should ever be a point where you can go "Meh, not much of a big deal, no matter how many more people suffer."

If, however, the number of possible distinct people is finite - even after taking into account Level II and Level III multiverses - due to the discreteness of space and of the permitted physical constants, then yes, this is all null and void. But I currently have no particular reason to believe there is such a bound, while I do have reason to believe that the permitted physical constants come from a non-discrete set.

In response to comment by [deleted] on Torture vs. Dust Specks
Comment author: private_messaging 27 March 2015 12:22:10PM *  -1 points [-]

Well, within the 3^^^3 people you have every single possible brain replicated a gazillion times already (there are only so many ways you can arrange the atoms in the volume of a human head so as to compute something subjectively different, after all, and the number of such arrangements is unimaginably smaller than 3^^^3).

I don't think that, e.g., I must massively prioritize the happiness of a brain upload of me running on multiply redundant hardware. It subjectively feels the same as if it were running as one instance; it doesn't feel any stronger because there are more 'copies' of it running in perfect unison - it can't even tell the difference. The subjective experience is unaffected even if the CPUs running the same computation are slightly physically different.

edit: also, again, pseudomath: you could have C(dustspeck, n) = 1 - 1/(n+1). Your property holds (at least on the weak reading), but the function is bounded, so if C(torture, 1) = 2 then you'll never exceed it with dust specks.

Seriously, you people (the LW crowd in general) need to take more calculus or something before your mathematical intuitions become in any way relevant to anything whatsoever. It feels intuitive that with your epsilon the function is going to keep growing without limit, but that's simply not true.
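
A minimal numeric sketch of this counterexample (mine, not part of the original comment):

```python
from fractions import Fraction

def c_dustspeck(n: int) -> Fraction:
    """Bounded but strictly increasing 'cost': 1 - 1/(n+1)."""
    return 1 - Fraction(1, n + 1)

C_TORTURE = 2  # C(torture, 1) from the comment above

for n in (1, 10**6, 10**100):
    print(n, float(c_dustspeck(n)), c_dustspeck(n) < C_TORTURE)
# prints True every time: monotone growth, but never past the bound
```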

Comment author: Kindly 26 March 2015 10:30:03PM 1 point [-]

It's not a continuum fallacy because I would accept "There is some pair (N,T) such that (N people tortured for T seconds) is worse than (10^100 N people tortured for T-1 seconds), but I don't know the exact values of N and T" as an answer. If, on the other hand, the comparison goes the other way for any values of N and T, then you have to accept the transitive closure of those comparisons as well.

Also, why are you so sure that the number of people increases suffering in a linear way for even very large numbers? What is a number of people anyway?

I'm not sure what you mean by this. I don't believe in linearity of suffering: that would be the claim that 2 people tortured for 1 day is the same as 1 person tortured for 2 days, and that's ridiculous. I believe in comparability of suffering: the claim that for some value of N, N people tortured for 1 day is worse than 1 person tortured for 2 days.
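
To formalize the distinction (my notation, a sketch of Kindly's claim, not Kindly's own): write B(N, T) for the badness of N people tortured for duration T. Then

$$\text{linearity:}\quad B(N, T) = f(N \cdot T), \qquad\qquad \text{comparability:}\quad \exists N:\ B(N, 1\ \text{day}) > B(1, 2\ \text{days}).$$

Comparability is far weaker: it is compatible with badness growing sublinearly in both N and T.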

Regarding anaesthetics: I would prefer a memory inhibitor for a painful surgery to the absence of one, but I would still strongly prefer to feel less pain during the surgery even if I know I will not remember it one way or the other. Is this preference unusual?

Comment author: private_messaging 26 March 2015 11:01:46PM *  0 points [-]

don't know the exact values of N and T

For one thing, N=1, T=1 trivially satisfies your condition...

I'm not sure what you mean by this.

I mean, suppose that you got yourself a function that takes in a description of what's going on in a region of spacetime and spits out a real number saying how bad it is.

Now, that function can do all sorts of perfectly reasonable things that could make it asymptotic for large numbers of people. For example, it could count distinct subjective experiences in the region (otherwise a mind upload running on massively redundant hardware is a utility monster, despite having a subjective experience identical to that of the same upload running once - which is much sillier than the usual utility monster, which at least feels much stronger feelings). This would impose a finite limit (for brains of finite complexity).

One thing that function can't do is have the general property f(a ∪ b) = f(a) + f(b), because then we could just subdivide our space into individual atoms, none of which is feeling anything.
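
A toy sketch of such a function (my construction, not private_messaging's): badness that counts distinct subjective experiences rather than summing per copy, so exact duplicates add nothing and the value saturates once every possible distinct brain state is represented.

```python
def badness(region: list[str], per_experience_cost: float = 1.0) -> float:
    """region: list of (coarse-grained) brain-state descriptions."""
    # deduplicate: identical subjective experiences count once
    return per_experience_cost * len(set(region))

one_copy = ["brain-state-A"]
many_copies = ["brain-state-A"] * 10**6   # a massively redundant upload
print(badness(one_copy), badness(many_copies))  # 1.0 1.0 - no utility monster

# Note what this function is NOT: additive over unions.
# badness(a + b) != badness(a) + badness(b) when a and b overlap,
# which is exactly the point about f(a ∪ b) = f(a) + f(b) above.
```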

In response to comment by [deleted] on Torture vs. Dust Specks
Comment author: [deleted] 25 March 2015 09:17:50PM 2 points [-]

It's not (necessarily) about dust specks accidentally leading to major accidents. But if you think that having a dust speck in your eye is even slightly annoying (whether you consciously notice it or not), the cost you incur from having it fly into your eye is not zero.

Now, something nonzero multiplied by a sufficiently large number will necessarily be larger than the cost of one human life spent in torture.
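
This is just the Archimedean property of the reals (assuming dust-speck costs add linearly across people, which is exactly the premise private_messaging disputes in the reply below):

$$\forall c > 0,\ \forall K,\ \exists n \in \mathbb{N}:\ n \cdot c > K.$$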

In response to comment by [deleted] on Torture vs. Dust Specks
Comment author: private_messaging 26 March 2015 10:28:25PM *  0 points [-]

Now, do you have any actual argument as to why the 'badness' function computed over a box containing two persons with a dust speck is exactly twice the badness of a box containing one person with a dust speck, all the way up to very large numbers (when you may even have exhausted the number of possible distinct people)?

I don't think you do. This is why this stuff strikes me as pseudomath: you don't even state your premises, let alone justify them.

Comment author: Kindly 26 March 2015 01:44:36PM 3 points [-]

We could go from a day to a minute more slowly; for example, by increasing the number of people by a factor of a googolplex every time the torture time decreases by 1 second.

I absolutely agree that the length of torture increases how bad it is in nonlinear ways, but this doesn't mean we can't find exponential factors that dominate it at every point, at least along the "less than 50 years" range.

Comment author: private_messaging 26 March 2015 08:21:38PM *  1 point [-]

That strikes me as a deliberate set up for a continuum fallacy.

Also, why are you so sure that the number of people increases suffering in a linear way for even very large numbers? What is a number of people anyway?

I'd much prefer to have a [large number of exact copies of me] experience 1 second of headache than have one me suffer it for a whole day, because those copies don't have any mechanism which could compound their suffering. They aren't even different subjectivities. I don't see any reason why a hypothetical mind upload of me running on multiply redundant hardware should be a utility monster if it can't even tell subjectively how redundant its hardware is.

Some anaesthetics do something similar, preventing any new long-term memories; people have no problem taking those for surgery. Something is still experiencing pain, but it's not compounding into anything really bad (unless the drugs fail to work, or some form of long-term memory still functions). That's a real example of a very strong preference for N independent experiences of 30 seconds of pain over 1 experience of 30·N seconds of pain.
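
One toy way to cash out the compounding intuition (my model, not private_messaging's): suppose suffering compounds superlinearly in continuous duration, $s(T) = T^{\alpha}$ with $\alpha > 1$, while independent episodes merely add. Then

$$N \cdot s(30) = N \cdot 30^{\alpha} \;\ll\; (30N)^{\alpha} = s(30N),$$

so N separate 30-second episodes beat one 30N-second episode by a factor of $N^{\alpha - 1}$.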

In response to comment by [deleted] on Torture vs. Dust Specks
Comment author: Roho 25 March 2015 02:34:39PM *  3 points [-]

Okeymaker, I think the argument is this:

Torturing one person for 50 years is better than torturing 10 persons for 40 years.

Torturing 10 persons for 40 years is better than torturing 1000 persons for 10 years.

Torturing 1000 persons for 10 years is better than torturing 10^6 persons for 1 year.

Torturing 10^6 persons for 1 year is better than torturing 10^9 persons for 1 month.

Torturing 10^9 persons for 1 month is better than torturing 10^12 persons for 1 week.

Torturing 10^12 persons for 1 week is better than torturing 10^15 persons for 1 day.

Torturing 10^15 persons for 1 day is better than torturing 10^18 persons for 1 hour.

Torturing 10^18 persons for 1 hour is better than torturing 10^21 persons for 1 minute.

Torturing 10^21 persons for 1 minute is better than torturing 10^30 persons for 1 second.

Torturing 10^30 persons for 1 second is better than torturing 10^100 persons for 1 millisecond.

Torturing for 1 millisecond is exactly what a dust speck does.

And if you disagree with the numbers, you can multiply in a few more millions. There is still plenty of room between 10^100 and 3^^^3.
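
A quick sanity check on the chain's bookkeeping (my code, not Roho's):

```python
import math

# number of people at each step of the chain above
people = [1, 10, 1000, 10**6, 10**9, 10**12, 10**15, 10**18, 10**21, 10**30, 10**100]
multipliers = [b // a for a, b in zip(people, people[1:])]
print(multipliers)                        # per-step factors; the largest is 10^70
print(math.prod(multipliers) == 10**100)  # True: the whole chain uses up only 10^100
print(3**(3**3))  # 3^^3 = 7625597484987; 3^^^3 is a tower of 3s that many levels high
```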

Comment author: private_messaging 26 March 2015 08:05:19AM *  4 points [-]

Torturing a person for 1 millisecond is not necessarily even a possibility. It doesn't make any sense whatsoever: in 1 millisecond, no interesting feedback loop can even close.

If we accept that torture is some class of computational processes that we wish to avoid, then the badness could definitely be eating up your 3^^^3s in one way or another. We have absolutely zero reason to expect linearity when some (however unknown) properties of a set of computations are involved. And computational processes are not infinitely divisible into smaller lengths of time.

In response to comment by [deleted] on Torture vs. Dust Specks
Comment author: Quill_McGee 25 March 2015 09:43:26PM 4 points [-]

In the Least Convenient Possible World of this hypothetical, every dust speck causes a constant small amount of harm, with no knock-on effects (no avoiding buses, no crashing cars...).

Comment author: private_messaging 26 March 2015 08:02:14AM *  0 points [-]

I thought the original point was to focus just on the inconvenience of the dust, rather than simply positing that, out of 3^^^3 people who were dust-specked, one person would have gotten something worse than 50 years of torture as a consequence of the dust speck. The latter is not even an ethical dilemma; it's merely an (entirely baseless but somewhat plausible) assertion about the consequences of dust specks in eyes.

Comment author: gwern 25 February 2015 05:29:02PM 0 points [-]

For example, evolving smaller animals from larger animals (by a given factor) is an order of magnitude faster process than evolving larger animals from smaller animals ( http://news.ucsc.edu/2012/01/body-size.html ). I think you wouldn't disagree that it would be far quicker to breed a 50-point IQ drop than a 50-point IQ rise?

But what does that have to do with breeding for our purposes? It may be easier to destroy functionality than create it, but evolution is creating functionality for living in the wild and doing something like hunting mice, while we're interested in creating functionality to do something like understand human social cues, trading off against things like aggression and hostility towards the unknown. In both cases, functionality is being created and traded off against something else, and there's no reason to expect the change in one case to be beneficial for the other. Border collies may be geniuses at memorizing words and herding sheep, and both of these feats required intense selection, but both skills are worse than useless for surviving in the wild as a wolf...

I guess you're referring to those studies on intelligence genes which flood the popular media, which tend to have small effect sizes and are of exactly the kind that is very prone to spurious results.

The original studies, yes - the ones like candidate-gene studies where n is rarely more than a few hundred - but the ones using proper sample sizes (n > 50,000) and genome-wide significance levels seem trustworthy to me. They seem to be replicating.

Comment author: private_messaging 20 March 2015 11:18:06PM *  0 points [-]

Well, my point was that you can't expect the same rate of advance from some IQ breeding programme that we get when breeding for traits that arise via loss-of-function mutations.

They seem to be replicating.

They don't seem to be replicating very well...

http://arstechnica.com/science/2014/09/researchers-search-for-genes-behind-intelligence-find-almost-nothing/

Sure, there's a huge genetic component, but almost none of it is "easily identified".

Generally you can expect that a parameter such as, e.g., initial receptor density at a specific kind of synapse would be influenced by multiple genes and have an optimum, where either a higher or a lower value is sub-optimal. So you can easily get one of the shapes from the bottom row of

http://en.wikipedia.org/wiki/Correlation_and_dependence#/media/File:Correlation_examples2.svg

i.e. little or no correlation between IQ and that parameter (and little or no correlation between IQ and any one of the many genes influencing said parameter).

edit: that is to say, for example, if we have an allele which slightly increases the number of receptors on a synapse between some neuron type A and some neuron type B, that can either increase or decrease intelligence depending on whether the activation of Bs by As would otherwise be too low or too high (as determined by all the other genes). So this allele affects intelligence, sure, but not in a simple, easy-to-detect way.
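
A small simulation of that point (my sketch; the parameter, loci count, and effect sizes are made up for illustration): a trait fully determined by many alleles, with IQ peaking at an interior optimum, yields near-zero linear correlation both for the trait and for any single allele.

```python
import numpy as np

rng = np.random.default_rng(0)
n_people, n_loci = 100_000, 50
alleles = rng.integers(0, 2, size=(n_people, n_loci))  # 0/1 alleles at 50 loci
param = alleles.sum(axis=1)           # e.g. receptor density, purely polygenic
opt = n_loci / 2                      # interior optimum: more is not better
iq = 100 - 0.5 * (param - opt) ** 2 + rng.normal(0, 5, n_people)

print(np.corrcoef(param, iq)[0, 1])          # ~0, despite param fully driving IQ
print(np.corrcoef(alleles[:, 0], iq)[0, 1])  # ~0 for any single allele, too
```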

Comment author: gwern 25 February 2015 05:34:06PM *  1 point [-]

The crazier things in scientology are also believed in by only a small fraction of the followers, yet they're a big deal inasmuch as this small fraction is the people who run that religion.

The crazier things are only exposed to a small fraction, which undermines the point. Do the Scientologists not believe in Xenu because they've seen all the Scientology teachings and reject them, or because they've been diligent and have never heard of them? If they've never heard of Xenu, their lack of belief says little about whether the laity differs from the priesthood... In contrast, everyone's heard of and rejected the Basilisk, and it's not clear that there's any fraction of 'people who run LW' which believes in the Basilisk comparable to the fraction of 'people who run that religion' who believe in Xenu. (At this point, isn't it literally exactly one person, Eliezer?)

edit: Nobody's making a claim that visitors to a scientology website believe in xenu

Visiting a Scientology website is not like taking the LW annual poll as I've suggested, and if there were somehow a lot of random visitors to LW taking the poll, they can be easily cut out by using the questions about time on site / duration of community involvement / karma. So the poll would still be very useful for demonstrating that the Basilisk is a highly non-central and peripheral topic.

Comment author: private_messaging 20 March 2015 10:39:09PM *  0 points [-]

Well, mostly everyone has heard of Xenu, for some value of "heard of", so I'm not sure what your point is.

So the poll would still be very useful for demonstrating that the Basilisk is a highly non-central and peripheral topic.

Yeah. So far, though, it is so highly non-central and so peripheral that you can't even add a poll question about it.

edit:

(At this point, isn't it literally exactly one person, Eliezer?)

Roko, plus someone who claimed to have had nightmares about it... who knows if they still believe, and who else believes? Scientology is far older (and far bigger), and there have been a lot of insider leaks, which is where we know the juicy stuff from.

As for how many people believe in the "Basilisk": given various "hint hint, there's a much more valid version out there but I won't tell it to you" type statements, and repeated objections along the lines of "that's not a fair description of the Basilisk, it makes a lot more sense than you make it out to be", it's a bit slippery what we even mean by "Basilisk".

Comment author: PhilGoetz 09 February 2015 08:59:56PM *  0 points [-]

The actual reality does not have high level objects such as nematodes or humans.

Um... yes, it does. "Reality" doesn't conceptualize them, but I, the agent analyzing the situation, do. I will have some function that looks at the underlying reality and partitions it into objects, and some other function that computes utility over those objects. These functions could be composed to give one big function from physics to utility. But that would be, epistemologically, backwards.
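
A minimal sketch of that composition (toy stand-ins of my own; none of this is PhilGoetz's notation):

```python
from typing import Callable

PhysicalState = dict   # toy stand-in for "the underlying reality"
Ontology = list        # toy stand-in for the objects the agent parses out

def parse(state: PhysicalState) -> Ontology:
    # the agent's function that partitions raw physics into objects (toy rule)
    return [k for k, v in state.items() if v > 0]

def value(objects: Ontology) -> float:
    # the agent's utility over its own ontology (toy rule)
    return float(len(objects))

# Composing the two gives "one big function from physics to utility",
# but the factoring - parse first, then value - carries the epistemology.
utility: Callable[[PhysicalState], float] = lambda s: value(parse(s))

print(utility({"nematode": 1, "rock": 0}))  # 1.0
```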

Before one could even consider the utility of a human (or a nematode)'s existence, one has got to have a function that would somehow process a bunch of laws of physics and the state of a region of space, and tell us how happy/unhappy that region of space feels, what its value is, and so on.

No. Utility is a thing agents have. "Utility theory" is a thing you use to compute an agent's desired action; it is therefore a thing that only intelligent agents have. Space doesn't have utility. To quote (perhaps unfortunately) Žižek, space is literally the stupidest thing there is.

Comment author: private_messaging 14 February 2015 11:50:45PM *  1 point [-]

Before one could even consider the utility of a human (or a nematode)'s existence

No. Utility is a thing agents have.

'one' in that case refers to an agent who's trying to value feelings that physical systems have.

I think there's some linguistic confusion here. As an agent valuing that there's no enormous torture camp set up in a region of space, I'd need to have a utility function over space, which gives the utility of that space.

Comment author: Nornagest 03 February 2015 04:25:56AM *  0 points [-]

Okay, they joke about it. Just not the kind of joke that draws attention to the thing they're worried about; it'd be too close to home, like making a dead-baby joke at a funeral. Jokes minimizing or exaggerating the situation - a type of deflection - are more likely; Kool-Aid jokes wouldn't be out of the question, for example.

Why the ellipsis?

Comment author: private_messaging 03 February 2015 05:11:18AM *  0 points [-]

Well, presumably one who's joining a doomsday cult is most worried about the doomsday (and would be relieved if it were just a bullshit doomsday cult). So wouldn't that be a case of jokes minimizing the situation as it exists in the speaker's mind? The reason that NORAD joke of yours is funny to either of us is that we both believe it can actually cause an extreme catastrophe, which is uncomfortable for us. Why wouldn't a similar joke referencing a false doomsday be funny to someone who believes in said false doomsday as strongly as we believe in nuclear weapons?

Why the ellipsis?

To indicate that a part was omitted.
