
Comment author: Stuart_Armstrong 07 April 2014 06:31:38PM 1 point [-]

Is the new one more acceptable?

Comment author: MugaSofer 08 April 2014 10:45:43AM 2 points [-]

See, now I'm curious about the old image...

Comment author: Jayson_Virissimo 02 April 2014 09:49:31PM 2 points [-]

Dogs know how to swim, but it’s unlikely they know any truths describing their activities.

-- Richard Fumerton, Epistemology

Comment author: MugaSofer 08 April 2014 08:50:29AM 3 points [-]

Really? So, say, if I put a bone on the other side of the river, the dog doesn't know that it can swim across?

Comment author: army1987 30 March 2014 07:19:10AM -1 points [-]

Well, yeah: if an authority says that the Hubble constant is 68 km/s/Mpc, and you interpret that in such a way that the authority is wrong if the constant is actually 67 km/s/Mpc or 69 km/s/Mpc, then you'd better assume the authority is wrong. That's not a very informative assumption, though, and in any case real-world authorities usually make vaguer claims than that (i.e. claims whose negation is less vague).

Comment author: MugaSofer 31 March 2014 06:29:21PM *  2 points [-]

Well, sure, you have a relatively high opinion of authority. But even a vague claim needs the expert to be significantly better than chance to be promoted to your attention.

(And, well, this guy isn't a physicist. He's involved in medicine. There's less likely to be clear-cut experimental proof the expert can simply point out in order to overcome your prior. Instead, you get a huge, enormous pile of famous cases where The Experts Screwed Up.)

Comment author: DanielLC 25 March 2014 09:49:39PM 0 points [-]

How did they know they considered leaving a thank-you note? Did they confess afterwards?

Comment author: MugaSofer 27 March 2014 03:47:37PM *  4 points [-]

Yes.

Well. Sort of. Not exactly.

They were stealing government files on domestic surveillance in order to leak them - the quote comes from the journalist who was their press contact (see the link beneath the quote).

Comment author: army1987 09 March 2014 01:43:02PM 5 points [-]

You are less likely to do the wrong thing if you believe that ‘the authorities are always wrong,’

Reverse stupidity is not intelligence.

Comment author: MugaSofer 27 March 2014 03:34:56PM *  0 points [-]

It's not reversed stupidity. It's an antiprediction.


... which does not make it correct, of course.

Comment author: Manfred 26 March 2014 04:06:27AM 1 point [-]

"Just looking at the data like a scientist" does not give you magic scientist powers. Models of the world are what allow you to predict it, without need for magic scientist vision.

Comment author: MugaSofer 27 March 2014 02:48:49PM -1 points [-]

I ... think he's talking about basic correlation, statistical analysis, that sort of thing?

(I enjoy Scott's writing, but I didn't upvote the grandparent.)

Comment author: army1987 22 February 2014 08:45:50AM 3 points [-]

What I'm saying is that to argue that our ancestors were sexual omnivores is no more a criticism of monogamy than to argue that our ancestors were dietary omnivores is a criticism of vegetarianism.

-- Christopher Ryan

Comment author: MugaSofer 26 February 2014 09:53:08AM *  1 point [-]

Because our ancestors were only omnivores on those relatively rare occasions when they could pull it off, and had to be able to function without it, since it was often impossible or very hard?

Oh. Huh.

Yeah, I can see how that might be roughly analogous. It didn't sound like it, just based on the quote ...

DRAFT: Ethical Zombies - A Post On Reality-Fluid

1 MugaSofer 09 January 2013 01:38PM

I came up with this after watching a science fiction film, which shall remain nameless due to spoilers, where the protagonist is briefly in a situation similar to the scenario at the end of this post. I'm not sure how original it is, but I certainly don't recall seeing anything like it before.


Imagine, for simplicity, a purely selfish agent. Call it Alice. Alice is an expected utility maximizer, and she gains utility from eating cakes. Omega appears and offers her a deal - they will flip a fair coin, and give Alice three cakes if it comes up heads. If it comes up tails, they will take one cake from her stockpile. Alice runs the numbers, determines that the expected utility is positive, and accepts the deal. Just another day in the life of a perfectly truthful superintelligence offering inexplicable choices.
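
A minimal sketch of the arithmetic Alice presumably runs, assuming - as the post implies but never states - that her utility is linear in cakes:

```python
# Alice's expected utility for Omega's coin-flip deal, assuming one cake
# is worth exactly one unit of utility to her (an assumption of this
# sketch, not something stated in the post).

p_heads = 0.5        # Omega's coin is fair
gain_if_heads = 3    # three cakes
loss_if_tails = -1   # one cake taken from her stockpile

expected_utility = p_heads * gain_if_heads + (1 - p_heads) * loss_if_tails
print(expected_utility)  # 1.0 > 0, so Alice accepts
```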


The next day, Omega returns. This time, they offer a slightly different deal - instead of flipping a coin, they will perfectly simulate Alice once. This copy will live out her life just as she would have done in reality - except that she will be given three cakes. The original Alice, however, receives nothing. She reasons that this is equivalent to the last deal, and accepts.

 

(If you disagree, consider the time between Omega starting the simulation and providing the cake. What subjective odds should she give for receiving cake?)


Imagine a second agent, Bob, who gets utility from Alice getting utility. One day, Omega shows up and offers to flip a fair coin. If it comes up heads, they will give Alice - who knows nothing of this - three cakes. If it comes up tails, they will take one cake from her stockpile. He reasons as Alice did and accepts.


Guess what? The next day, Omega returns, offering to simulate Alice and give her you-know-what (hint: it's cakes). Bob reasons just as Alice did in the second scenario and accepts the bargain.


Humans value each other's utility. Most notably, we value our lives, and we value each other not being tortured. If we simulate someone a billion times, and switch off one simulation, this is equivalent to risking their life at odds of 1:1,000,000,000. If we simulate someone and torture one of the simulations, this is equivalent to risking a one-in-a-billion chance of them being tortured. Such risks are often acceptable, if enough utility is gained by success. We often risk our own lives at worse odds.
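
A minimal sketch of the equivalence being claimed here, assuming every copy carries equal subjective weight ("reality-fluid") and that switching a copy off counts the same as death:

```python
# One simulation switched off out of a billion identical copies, assuming
# each copy carries equal weight - an assumption of this sketch.

copies = 10**9       # number of simulations
terminated = 1       # copies switched off

subjective_risk = terminated / copies
print(subjective_risk)  # 1e-09, i.e. odds of 1 in 1,000,000,000
```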


If we simulate an entire society a trillion times, or 3^^^^^^3 times, or some similarly vast number, and then simulate something horrific - an individual's private harem or torture chamber or hunting ground - then the people in this simulation *are not real*. Their needs and desires are worth, not nothing, but far less than the merest whims of those who are Really Real. They are, in effect, zombies - not quite p-zombies, since they are conscious, but e-zombies - reasoning, intelligent beings that can talk and scream and beg for mercy but *do not matter*.


My mind rebels at the notion that such a thing might exist, even in theory, and yet ... if it were a similarly tiny *chance*, for similar reward, I would shut up and multiply and take it. This could be simply scope insensitivity, or some instinctual dislike of tribe members declaring themselves superior.


Well, there it is! The weirdest of Weirdtopias, I should think. Have I missed some obvious flaw? Have I made some sort of technical error? This is a draft, so criticisms will likely be incorporated into the final product (if indeed someone doesn't disprove it entirely).

 

[LINK] AI-boxing Is News, Somehow

5 MugaSofer 19 October 2012 12:34PM

Tech News Daily has published an article advocating AI-boxing, which namechecks Eliezer's AI-Box Experiment. It seems to claim that AI boxes are a revolutionary new idea, is covered in pictures of fictional evil AIs, and worries that a superintelligent AI might develop psychic powers.

Seems like a good reminder of the state of reporting on such matters.

Comment author: ArisKatsaris 19 October 2012 09:40:29AM *  1 point [-]

If Parfit's hitchhiker "updates" on the fact that he's now reached the city and therefore doesn't need to pay the driver, and furthermore if Parfit's hitchhiker knows in advance that he'll update on that fact in that manner, then he'll die.

If right now we had mindscanners/simulators that could perform such counterfactual experiments on our minds, and if this sort of bet could therefore become part of everyday existence, then being the sort of person who pays the counterfactual mugger would eventually be seen by all to be positive-utility -- because such people would eventually be offered the winning side of that bet (free money worth ten times your cost).

While the sort of person that wouldn't be paying the counterfactual mugger would never be given such free money at all.
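
A minimal sketch of the bet being described, assuming Omega's coin is fair and the winning side pays ten times the cost (the "tenfold" figure above); the $100 stake is an illustrative number, not from the comment:

```python
# Expected value per encounter for a payer vs. a refuser, assuming a fair
# coin and a payout of ten times the cost. The $100 stake is illustrative.

cost = 100

# Someone who pays the counterfactual mugger: loses `cost` on tails,
# collects ten times `cost` on heads.
ev_payer = 0.5 * (10 * cost) + 0.5 * (-cost)

# Someone who refuses: never pays, but is never offered the winning side.
ev_refuser = 0.0

print(ev_payer, ev_refuser)  # 450.0 vs 0.0
```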

Comment author: MugaSofer 19 October 2012 10:29:51AM 0 points [-]

If, and only if, you regularly encounter such bets.
