In response to comment by [deleted] on Torture vs. Dust Specks
Comment author: Roho 25 March 2015 02:34:39PM *  3 points

Okeymaker, I think the argument is this:

Torturing one person for 50 years is better than torturing 10 persons for 40 years.

Torturing 10 persons for 40 years is better than torturing 1000 persons for 10 years.

Torturing 1000 persons for 10 years is better than torturing 1000000 persons for 1 year.

Torturing 10^6 persons for 1 year is better than torturing 10^9 persons for 1 month.

Torturing 10^9 persons for 1 month is better than torturing 10^12 persons for 1 week.

Torturing 10^12 persons for 1 week is better than torturing 10^15 persons for 1 day.

Torturing 10^15 persons for 1 day is better than torturing 10^18 persons for 1 hour.

Torturing 10^18 persons for 1 hour is better than torturing 10^21 persons for 1 minute.

Torturing 10^21 persons for 1 minute is better than torturing 10^30 persons for 1 second.

Torturing 10^30 persons for 1 second is better than torturing 10^100 persons for 1 millisecond.

Torturing for 1 millisecond is exactly what a dust speck does.

And if you disagree with the numbers, you can add a few more factors of a million. There is still plenty of space between 10^100 and 3^^^3.
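The arithmetic behind the chain can be checked mechanically. Under the (contested) utilitarian assumption that badness aggregates linearly as persons × duration, each step trades a shorter duration for vastly more total person-time. A rough sketch, with the step values taken from the list above and durations converted to seconds:

```python
# Each step of the argument: (number of persons, duration in seconds).
# Durations are approximate (1 year ~ 3.156e7 seconds).
YEAR = 3.156e7
steps = [
    (1, 50 * YEAR),          # 1 person, 50 years
    (10, 40 * YEAR),         # 10 persons, 40 years
    (10**3, 10 * YEAR),      # 1000 persons, 10 years
    (10**6, 1 * YEAR),       # 10^6 persons, 1 year
    (10**9, YEAR / 12),      # 10^9 persons, 1 month
    (10**12, 7 * 86400),     # 10^12 persons, 1 week
    (10**15, 86400),         # 10^15 persons, 1 day
    (10**18, 3600),          # 10^18 persons, 1 hour
    (10**21, 60),            # 10^21 persons, 1 minute
    (10**30, 1),             # 10^30 persons, 1 second
    (10**100, 1e-3),         # 10^100 persons, 1 millisecond
]

# Under a naive linear aggregation (persons * seconds), total
# person-time grows at every step, so whoever accepts each pairwise
# trade accepts ever more aggregate suffering.
totals = [n * t for n, t in steps]
assert all(a < b for a, b in zip(totals, totals[1:]))
```

Whether badness really aggregates linearly is, of course, exactly what the objection below disputes.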

Comment author: private_messaging 26 March 2015 08:05:19AM *  4 points

Torturing a person for 1 millisecond is not necessarily even a possibility. It doesn't make any sense whatsoever; in 1 millisecond no interesting feedback loops can even close.

If we accept that torture is some class of computational processes that we wish to avoid, the badness could well be eating up your 3^^^3s in one way or the other. We have absolutely zero reason to expect linearity when some (however unknown) properties of a set of computations are involved. And the computational processes are not infinitely divisible into smaller lengths of time.

In response to comment by [deleted] on Torture vs. Dust Specks
Comment author: Quill_McGee 25 March 2015 09:43:26PM 4 points

In the Least Convenient Possible World of this hypothetical, every dust speck causes a constant small amount of harm with no knock-on effects (no avoiding buses, no crashing cars...)

Comment author: private_messaging 26 March 2015 08:02:14AM *  0 points

I thought the original point was to focus just on the inconvenience of the dust, rather than simply positing that out of 3^^^3 people who were dustspecked, one person would've gotten something worse than 50 years of torture as a consequence of the dust speck. The latter is not even an ethical dilemma; it's merely an (entirely baseless but somewhat plausible) assertion about the consequences of dust specks in the eyes.

Comment author: gwern 25 February 2015 05:29:02PM 0 points

For example, evolving smaller animals from larger animals (by a given factor) is an order of magnitude faster process than evolving larger animals from smaller animals ( http://news.ucsc.edu/2012/01/body-size.html ). I think you wouldn't disagree that it would be far quicker to breed a 50 point IQ drop than a 50 point IQ rise?

But what does that have to do with breeding for our objective purpose? It may be easier to destroy functionality than create it, but evolution is creating functionality for living in the wild and doing something like hunting mice, while we're interested in creating functionality to do something like understand human social cues, trading off against things like aggression and hostility towards the unknown. In both cases, functionality is being created and traded off against something else, and there's no reason to expect the change for one case to be beneficial for the other. Border collies may be geniuses at memorizing words and herding sheep, and both of these feats required intense selection, but both skills are worse than useless for surviving in the wild as a wolf...

I guess you refer to those studies on intelligence genes which flood the popular media, which tend to have small effect sizes and are of exactly the kind that is very prone to spurious results.

The original studies, yes: the ones like candidate-gene studies where n is rarely more than a few hundred. But the ones using proper sample sizes like n>50000 and genome-wide significance levels seem trustworthy to me. They seem to be replicating.

Comment author: private_messaging 20 March 2015 11:18:06PM *  0 points

Well, my point was that you can't expect the same rate of advances from some IQ breeding programme that we get when breeding traits arising via loss-of-function mutations.

They seem to be replicating.

They don't seem to be replicating very well...

http://arstechnica.com/science/2014/09/researchers-search-for-genes-behind-intelligence-find-almost-nothing/

Sure, there's a huge genetic component, but almost none of it is "easily identified".

Generally you can expect that parameters such as the initial receptor density at a specific kind of synapse would be influenced by multiple genes and have an optimum, where either a higher or a lower value is sub-optimal. So you can easily get one of the shapes from the bottom row in

http://en.wikipedia.org/wiki/Correlation_and_dependence#/media/File:Correlation_examples2.svg

i.e. little or no correlation between IQ and that parameter (and little or no correlation between IQ and any one of the many genes influencing said parameter).

edit: that is to say, for example, if we have an allele which slightly increases the number of receptors on a synapse between some neuron type A and some neuron type B, that can either increase or decrease intelligence depending on whether the activation of Bs by As would otherwise be too low or too high (as determined by all the other genes). So this allele affects intelligence, sure, but not in a simple, easy-to-detect way.
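This "interior optimum" point is easy to demonstrate with a toy simulation (the quadratic fitness curve and all variable names are illustrative assumptions, not a biological model): a parameter that strongly determines a trait can still show near-zero linear correlation with it.

```python
import math
import random

random.seed(0)

# Toy model: a "receptor density" parameter with an interior optimum.
# The IQ-like trait peaks at density = 0 and falls off quadratically
# on either side, so the relationship is deterministic but non-monotonic.
density = [random.gauss(0.0, 1.0) for _ in range(100_000)]
iq = [-d * d for d in density]

def pearson(xs, ys):
    """Sample Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Despite the trait being fully determined by the parameter, the linear
# correlation is ~0: gains on one side of the optimum cancel losses on
# the other, as in the bottom row of the Wikipedia figure linked above.
r = pearson(density, iq)
assert abs(r) < 0.05
```

A GWAS hunting for linear associations would see almost nothing here, which is the claimed mechanism for "huge genetic component, almost none of it easily identified".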

Comment author: gwern 25 February 2015 05:34:06PM *  1 point

The crazier things in scientology are also believed in by only a small fraction of the followers, yet they're a big deal insomuch as this small fraction is the people who run that religion.

The crazier things are only exposed to a small fraction, which undermines the point. Do the Scientologists not believe in Xenu because they've seen all the Scientology teachings and reject them, or because they've been diligent and have never heard of them? If they've never heard of Xenu, their lack of belief says little about whether the laity differs from the priesthood... In contrast, everyone's heard of and rejected the Basilisk, and it's not clear that there's any fraction of 'people who run LW' which believes in the Basilisk comparable to the fraction of 'people who run that religion' who believe in Xenu. (At this point, isn't it literally exactly one person, Eliezer?)

edit: Nobody's making a claim that visitors to a scientology website believe in xenu

Visiting a Scientology website is not like taking the LW annual poll as I've suggested, and if there were somehow a lot of random visitors to LW taking the poll, they can be easily cut out by using the questions about time on site / duration of community involvement / karma. So the poll would still be very useful for demonstrating that the Basilisk is a highly non-central and peripheral topic.

Comment author: private_messaging 20 March 2015 10:39:09PM *  0 points

Well, mostly everyone has heard of Xenu, for some value of "heard of", so I'm not sure what your point is.

So the poll would still be very useful for demonstrating that the Basilisk is a highly non-central and peripheral topic.

Yeah. So far, though, it is so highly non-central and so peripheral that you can't even add a poll question about it.

edit:

(At this point, isn't it literally exactly one person, Eliezer?)

Roko, and someone claimed to have had nightmares about it... who knows if they still believe, and whoever else believes? Scientology is far older (and far bigger), and there have been a lot of insider leaks, which is where we know the juicy stuff from.

As for how many people believe in the "Basilisk", given various "hint hint there's a much more valid version out there but I won't tell it to you" type statements and repeated objections along the lines of "that's not a fair description of the Basilisk, it makes a lot more sense than you make it out to be", it's a bit slippery what we even mean by "Basilisk".

Comment author: PhilGoetz 09 February 2015 08:59:56PM *  0 points

The actual reality does not have high level objects such as nematodes or humans.

Um... yes, it does. "Reality" doesn't conceptualize them, but I, the agent analyzing the situation, do. I will have some function that looks at the underlying reality and partitions it into objects, and some other function that computes utility over those objects. These functions could be composed to give one big function from physics to utility. But that would be, epistemologically, backwards.

Before one could even consider a utility of a human's (or a nematode's) existence, one has got to have a function that would somehow process a bunch of laws of physics and the state of a region of space, and tell us how happy/unhappy that region of space feels, what its value is, and so on.

No. Utility is a thing agents have. "Utility theory" is a thing you use to compute an agent's desired action; it is therefore a thing that only intelligent agents have. Space doesn't have utility. To quote (perhaps unfortunately) Žižek, space is literally the stupidest thing there is.
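The two-function picture described above (partition reality into objects, then score the objects) can be sketched as a composition; all names and representations here are illustrative assumptions, not anything from the thread:

```python
from typing import Callable, Dict, List

# Illustrative stand-ins: "physics" is a raw state, "objects" a parsed
# high-level representation of that state.
Physics = Dict[str, List[str]]
Objects = List[str]

def partition(state: Physics) -> Objects:
    # One function looks at the underlying reality and carves it
    # into high-level objects.
    return state.get("objects", [])

def utility_over_objects(objects: Objects) -> float:
    # Another function computes utility over those objects;
    # here, trivially, the count of "human" objects.
    return float(sum(1 for o in objects if o == "human"))

def utility(state: Physics) -> float:
    # The composition gives one big function from physics to utility,
    # but the two stages remain conceptually distinct.
    return utility_over_objects(partition(state))

assert utility({"objects": ["human", "nematode", "human"]}) == 2.0
```

The disagreement in the thread is about which of the two stages is doing the philosophical work, not about the composition itself.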

Comment author: private_messaging 14 February 2015 11:50:45PM *  1 point

Before one could even consider a utility of a human's (or a nematode's) existence

No. Utility is a thing agents have.

'one' in that case refers to an agent who's trying to value feelings that physical systems have.

I think there's some linguistic confusion here. As an agent valuing that there's no enormous torture camp set up in a region of space, I'd need to have a utility function over space, which gives the utility of that space.

Comment author: Nornagest 03 February 2015 04:25:56AM *  0 points

Okay, they joke about it. Just not the kind of joke that draws attention to the thing they're worried about; it'd be too close to home, like making a dead baby joke at a funeral. Jokes minimizing or exaggerating the situation -- a type of deflection -- are more likely; Kool-Aid jokes wouldn't be out of the question, for example.

Why the ellipsis?

Comment author: private_messaging 03 February 2015 05:11:18AM *  0 points

Well, presumably one who's joining a doomsday cult is most worried about the doomsday (and would be relieved if it were just a bullshit doomsday cult). So wouldn't that be a case of jokes minimizing the situation as it exists in the speaker's mind? The reason that NORAD joke of yours is funny to either of us is that we both believe it can actually cause an extreme catastrophe, which is uncomfortable for us. Why wouldn't a similar joke referencing a false doomsday be funny to one who believes in said false doomsday as strongly as we believe in nuclear weapons?

Why the ellipsis?

To indicate that a part was omitted.

Comment author: Nornagest 03 February 2015 04:13:18AM *  1 point

Yes, I understand the statistics you're trying to point to. I just don't think it's as simple as narrowing down the reference class. I expect material differences in behavior between the cases "joining a doomsday cult or something that could reasonably be mistaken for one" and "joining something that kinda looks enough like a doomsday cult that jokes about it are funny, but which isn't", and those differences mean that this can't be solved by a single application of Bayes' Rule.

Maybe your probability estimate ends up higher by epsilon or so. That depends on all sorts of fuzzy readings of context and estimations of the speaker's character, far too fuzzy for me to do actual math to it. But I feel fairly confident in saying that it shouldn't adjust that estimate enough to justify taking any sort of action, which is what actually matters here.

Comment author: private_messaging 03 February 2015 04:47:30AM 1 point

Well, a doomsday cult is not only a doomsday cult but also kinda looks enough like a doomsday cult. Of people joining something that kinda looks enough like a doomsday cult, some are joining an actual doomsday cult. Do those people, in your model, know that they're joining a doomsday cult, so they can avoid joking about it?

Comment author: Nornagest 03 February 2015 04:02:41AM *  0 points

It's when it is a probable doomsday cult that you try to argue it isn't by hoping that others laugh along with you.

Not in my experience. If people are scared that they're doing something potentially life-ruining like joining a cult -- and my first college roommate did drop out to join an ashram, so I know whereof I speak -- they don't draw attention to it by joking about it. They argue, or they deflect, or they clam up.

I'd expect the number of people who joined doomsday cults and made jokes like Alicorn's to be approximately zero.

Comment author: private_messaging 03 February 2015 04:22:45AM *  1 point

If people are scared that they're doing something potentially life-ruining

...

I'd expect the number of people who joined doomsday cults and made jokes like Alicorn's to be approximately zero.

I would be very surprised if this were true. My experience mirrors what Jiro said: people tend to joke about things that scare them. Of course, some would clam up (keep in mind that a clammed-up individual may have joked about it before and had the joke fall flat, or may be better able to evaluate the lack of humour in such jokes).

Comment author: Nornagest 03 February 2015 03:55:19AM *  0 points

There's a lot of doomsdays out there. My first assumption, if I was talking to someone outside core rationalist demographics, would probably be climate change advocacy or something along those lines -- though I'd probably find it funnier if they were joining NORAD.

Comment author: private_messaging 03 February 2015 04:08:07AM 0 points

Well, you start with a set containing Google, McDonalds, and all other organizations one could be joining, inclusive of all doomsday cults, and then you end up with a much smaller set of organizations, still inclusive of all doomsday cults. Which ought to boost the probability of them joining an actual doomsday cult, even if said probability would arguably remain below 0.5 or 0.9 or whatever threshold of credence.
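The update being gestured at is just conditioning on a narrower reference class; a toy calculation with made-up counts (none of these numbers come from the thread):

```python
# Hypothetical counts, purely for illustration.
all_orgs = 1_000_000        # every organization one could join
cult_orgs = 10              # actual doomsday cults (all of them)
lookalike_orgs = 1_000      # orgs that "kinda look like" doomsday
                            # cults; assumed to include all real ones

# Prior: joining some organization at random.
p_prior = cult_orgs / all_orgs            # 0.00001

# Posterior: we learn the org is in the lookalike set, which still
# contains every actual doomsday cult.
p_posterior = cult_orgs / lookalike_orgs  # 0.01

# Narrowing the reference class boosts the probability a
# thousandfold, even though it stays well below 0.5.
assert p_posterior > p_prior
assert p_posterior < 0.5
```

The disagreement above is about whether this raw boost survives the behavioral evidence (the joking), not about the conditioning step itself.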

Comment author: Nornagest 03 February 2015 03:30:40AM *  1 point

In those words? Yes. You may note that those are different words than Alicorn's, or any of mine.

ETA: Wow, got seriously ninjaed there. I'll expand. It's not the "I don't consider this a cult" part of the message that'd make me update away from the surface meaning so much as the "...and I expect you to get the joke" part. That trades on information, even if you don't know it, that the speaker expects you to know. The speaker believes not only that they're not joining a cult but that it's obvious they're not, or at most clear after a moment's thought; otherwise it wouldn't be funny.

Comment author: private_messaging 03 February 2015 03:57:54AM *  0 points

That trades on information, even if you don't know it, that the speaker expects you to know. The speaker believes not only that they're not joining a cult but that it's obvious they're not, or at most clear after a moment's thought; otherwise it wouldn't be funny.

Well, if the speaker got a job at Google or McDonalds, it would be far more obvious that they're not joining a doomsday cult... yet it seems to me that they wouldn't be joking that it's a doomsday cult out of the blue then. It's when it is a probable doomsday cult that you try to argue it isn't by hoping that others laugh along with you.
