Comment author: DanielH 17 October 2013 05:47:28AM 1 point [-]

The aliens with star communication weren't destroyed. They were close enough to "human" that they were uploaded or ignored. What's more, CelestAI would probably satisfy (most of) the values of these aliens, who probably find "friendship" just as approximately-neutral as they and we find "ponies".

Comment author: Philip_W 18 August 2015 07:51:05PM 3 points [-]

Read it more carefully. One or several paragraphs before the designated-human aliens, it is mentioned that CelestAI found many sources of complex radio waves which weren't deemed "human".

In response to comment by [deleted] on Rationality Quotes Thread May 2015
Comment author: [deleted] 27 May 2015 02:33:41PM *  3 points [-]

This... really shows how wide the Atlantic is. I know many Europeans who identify as Christians, but in every case it is just a way to show their national loyalty, their national identity, their conservatism or their opposition to modern culture; a preference for a higher value system that does not worship money, business and consumption but has a more human-faced, soul-oriented, "deeper" approach. This is how they are Christians. Nobody, literally nobody I know, has literal faith, the kind of faith people would pray with. The closest to it is a belief that Christian values are useful for human growth, because they remind people that money and consumption are not everything.

So it is always surprising to me that America has pockets where faith is still alive pretty much in the old, pre-1800 sense, as if Voltaire, Hegel, Feuerbach or Marx never happened. Where it is not a culture or identity or values, but literally faith.

Or maybe these pockets exist here too, but the newspapers are not writing about them and I have no idea where they are.

In response to comment by [deleted] on Rationality Quotes Thread May 2015
Comment author: Philip_W 28 July 2015 09:45:03AM 0 points [-]

From your username it looks like you're Dutch (it is literally "the flying Dutchman" in Dutch), so I'm surprised you've never heard of the Dutch bible belt and their favourite political party, the SGP. They get about 1.5% of the vote in the national elections and seem pretty legit. And those are just the Christians fervent enough to oppose women's suffrage. The other two Christian parties have around 15% of the vote, and may contain proper believers as well.

Comment author: MathieuRoy 05 February 2014 12:07:27PM *  1 point [-]

Do you mean "I cooperate with the Paperclipper if AND ONLY IF I think it will one-box on Newcomb's Problem with myself as Omega AND I think it thinks I'm Omega AND I think it thinks I think it thinks I'm Omega, etc." ? This seems to require an infinite amount of knowledge, no?

Edit: and you said "We have never interacted with the paperclip maximizer before", so do you think it would one-box?

Comment author: Philip_W 25 June 2015 09:32:16AM 0 points [-]

I think he means "I cooperate with the Paperclipper IFF it would one-box on Newcomb's problem with myself (with my present knowledge) playing the role of Omega, where I get sent to rationality hell if I guess wrong". In other words: if Eliezer believes that, were Eliezer and Clippy in that situation (with Eliezer preparing to one-box if he expected Clippy to one-box and to two-box if he expected Clippy to two-box), Clippy would one-box, then Eliezer will cooperate with Clippy. Or in other words still: if Eliezer believes Clippy is ignorant and rational enough that it can't predict Eliezer's actions but uses game theory at the same level as him, then Eliezer will cooperate.

In the one-shot prisoner's dilemma, there is no evidence, so it comes down to priors. If all players are rational mutual one-boxers, and all players are blind except for knowing they're all mutual one-boxers, then they should expect everyone to make the same choice. If you just decide that you'll defect (two-box) to outsmart the others, you should expect everyone else to do so too, so you'll be worse off than if you had decided not to defect (in which case nobody else would rationally do so either). Even if you decide to defect based on a true random number generator, then for the payoff matrix

            Cooperate   Defect
Cooperate    (2,2)      (0,3)
Defect       (3,0)      (1,1)

the best option is still to cooperate 100% of the time.

If there are less-than-rational agents afoot, the game changes. Let r be the fraction of agents who are rational, d the fraction expected to always defect, (1-d-r) the fraction who will always cooperate, and x the probability with which you (and by extension every other rational agent) cooperate. Writing A = xr+(1-d-r) for the probability that your opponent cooperates, the expected reward for cooperating is 2A and the expected reward for defecting is 3A+d+(1-x)r = 3A+(1-A) = 1+2A. Optimising over x in the overall expectation 2xA+(1-x)(1+2A) = 1-x+2A = x(2r-1)+3-2d-2r, the coefficient of x is (2r-1); so you should cooperate 100% of the time if the fraction of rational agents r > 0.5, and defect 100% of the time if r < 0.5.
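The one-shot calculation above can be checked numerically. This is a minimal sketch (function and variable names are my own, not from the comment), assuming the payoff matrix given earlier:

```python
# r: fraction of rational agents (who all cooperate with probability x),
# d: fraction who always defect, 1 - d - r: fraction who always cooperate.
# Payoffs: (C,C) -> 2, (C,D) -> 0, (D,C) -> 3, (D,D) -> 1.

def expected_value(x, r, d):
    """Expected payoff for a rational agent whose choice (cooperate with
    probability x) is mirrored by every other rational agent."""
    a = x * r + (1 - d - r)          # probability the opponent cooperates
    ev_cooperate = 2 * a             # 2 against cooperators, 0 against defectors
    ev_defect = 3 * a + (1 - a)      # 3 against cooperators, 1 in mutual defection
    return x * ev_cooperate + (1 - x) * ev_defect

# Rational majority (r > 0.5): always cooperating beats always defecting.
assert expected_value(1.0, r=0.6, d=0.2) > expected_value(0.0, r=0.6, d=0.2)
# Rational minority (r < 0.5): always defecting wins.
assert expected_value(0.0, r=0.4, d=0.3) > expected_value(1.0, r=0.4, d=0.3)
```

The asserts reproduce the r > 0.5 threshold derived above: the coefficient of x in the expectation is (2r-1), so the sign of r-0.5 decides the optimal pure strategy.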

In the iterated prisoner's dilemma, this becomes more algebraically complicated, since cooperating is itself evidence of being a cooperator. Qualitatively, though: superintelligences which have managed to open bridges between universes are probably (hopefully, P > 0.5) rational, so they should cooperate on the last round, and by extension on every round before that. If someone defects, that's strong evidence that they are not rational or have bad priors, and if your estimate of the probability that they are rational drops below 0.5, you should switch to defecting. I'm not sure whether you should cooperate if your opponent cooperates after defecting on the first round. Common sense says to give them another chance, but that may be anthropomorphising the opponent.
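The switch-when-belief-drops rule can be sketched as a Bayesian update on "the opponent is rational". Everything here (the prior of 0.8 and the two defection likelihoods) is an illustrative assumption of mine, not something from the comment:

```python
def posterior_rational(prior, p_defect_if_rational, p_defect_if_irrational, defected):
    """One step of Bayes' rule on 'the opponent is rational' after seeing a move."""
    if defected:
        likely_r, likely_i = p_defect_if_rational, p_defect_if_irrational
    else:
        likely_r, likely_i = 1 - p_defect_if_rational, 1 - p_defect_if_irrational
    num = prior * likely_r
    return num / (num + (1 - prior) * likely_i)

belief = 0.8                       # illustrative prior that the opponent is rational
for move in ["C", "D", "D"]:       # observed opponent moves
    belief = posterior_rational(belief, 0.05, 0.6, move == "D")
    action = "cooperate" if belief > 0.5 else "defect"
```

With these made-up likelihoods, a single observed defection is enough to push the posterior below 0.5 and trigger the switch to defecting that the comment describes.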

If the prior probability that inter-universal traders like Clippy and thought experiment::Eliezer are rational is r > 0.5, and thought experiment::Eliezer has managed not to make his mental makeup knowable to Clippy and vice versa, then both Eliezer and Clippy ought to expect r > 0.5, so they should both decide to cooperate. If Eliezer suspects that Clippy knows Eliezer well enough to predict his actions, then for Eliezer 'd' becomes large (Eliezer suspects Clippy will defect if Eliezer decides to cooperate). Eliezer unfortunately can't let himself be convinced that Clippy would cooperate at this point, because if Clippy knows Eliezer, then Clippy can fake that evidence. This means both players also have a strong motive not to arouse suspicion in the other player: knowing the other player would still mean you lose, if the other player finds out you know. Still, if it saves a billion people, both players would want to investigate the other to take victory in the final iteration of the prisoner's dilemma (using methods which provide as little evidence of the investigation as possible; the appropriate response to catching spies of any sort is defection).

Comment author: stcredzero 29 May 2010 11:29:33PM 83 points [-]

I suspect that the True Prisoner's Dilemma played itself out in the Portuguese and Spanish conquest of Mesoamerica. Some natives were said to ask, "Do they eat gold?" They couldn't comprehend why someone would want a shiny decorative material so badly that they'd kill for it. The Spanish were Shiny Decorative Material maximizers.

Comment author: Philip_W 25 June 2015 06:35:33AM 0 points [-]

In a sense they did eat gold, like we eat stacks of printed paper, or perhaps nowadays little numbers on computer screens.

Comment author: Eliezer_Yudkowsky 17 September 2014 06:11:39PM 13 points [-]

Rational agents cannot be successfully blackmailed by other agents that simulate them accurately, and especially not by figments of their own imagination.

Comment author: Philip_W 16 June 2015 05:51:07AM 1 point [-]

That doesn't seem true. How can the victim know for sure that the blackmailer is simulating them accurately or being rational?

Suppose you get mugged in an alley by random thugs. Which of these outcomes seems most likely:

  1. You give them the money, they leave.

  2. You lecture them about counterfactual reasoning, they leave.

  3. You lecture them about counterfactual reasoning, they stab you.

Any agent capable of appearing irrational to a rational agent can blackmail that rational agent. This decreases the probability of agents which appear irrational being irrational, but not necessarily to the point that you can dismiss them.

Comment author: Kaj_Sotala 28 November 2014 10:23:58AM 15 points [-]

I agree with the general gist of the post, but I would point out that different groups consider different things weird, and have differing opinions about what weirdness is a bad thing.

To use your "a guy wearing a dress in public" example - I do this occasionally, and gauging from the reactions I've seen so far, it seems to earn me points among the liberal, socially progressive crowd. My general opinions and values are such that this is the group that would already be the most likely to listen to me, while the people who are turned off by such a thing would be disinclined to listen to me anyway.

I would thus suggest, not trying to limit your weirdness, but rather choosing a target audience and only limiting the kind of weirdness that this group would consider freakish or negative, while being less concerned by the kind of weirdness that your target audience considers positive. Weirdness that's considered positive by your target audience may even help your case.

Comment author: Philip_W 09 February 2015 12:24:15AM 3 points [-]

I think I might have been a datapoint in your assessment here, so I feel the need to share my thoughts on this. I would consider myself socially progressive and liberal, and I would hate not being included in your target audience, but for me your wearing cat ears to the CFAR workshop cost you weirdness points that you later earned back by appearing smart and sane in conversations, by acceptance by the peer group, acclimatisation, etc.

I responded positively because it fell within the 'quirky and interesting' range, but I don't think I would have taken you as seriously on subjectively weird political or social opinions. It is true that the cat ears are probably a lot less expensive for me than cultural/political out-group weirdness signals, like a military haircut. It might be a good way to buy other points, so positive overall, but that depends on the circumstances.

Comment author: ike 05 January 2015 04:53:01PM 1 point [-]

Exactly, so only people who aren't aborted count as born, in which case the birth rate is 80%.

Comment author: Philip_W 06 January 2015 01:30:34PM 0 points [-]

Ah, "actual" threw me off. So you mean something close to "The lifetime projected probability of being born(/dying) for people who came into existence during the last year".

In response to comment by Philip_W on Tell Culture
Comment author: Vaniver 05 January 2015 10:23:07AM *  0 points [-]

If you hit the "show help" button to the bottom right, there's a link to polls help.

In response to comment by Vaniver on Tell Culture
Comment author: Philip_W 05 January 2015 12:01:25PM 0 points [-]

Thanks, edited.

In response to comment by SaidAchmiz on Tell Culture
Comment author: Philip_W 05 January 2015 08:32:45AM *  1 point [-]

I'm on the autism spectrum (PDD-NOS), and Tell culture sounds like a good idea to me.

In response to comment by Philip_W on Tell Culture
Comment author: Philip_W 05 January 2015 08:33:35AM 0 points [-]

Karma sink.

In response to comment by Philip_W on Tell Culture
Comment author: Philip_W 05 January 2015 08:33:26AM 0 points [-]

If you're on the autism spectrum and think Tell culture is a bad idea, upvote this comment.
