Comment author: 20 July 2014 08:48:53PM 4 points [-]

Assume that each player's hand may tremble with a small non-zero probability p, then take the limit as p approaches zero from above.

Comment author: 20 July 2014 09:35:50PM *  10 points [-]

... Let's do that!

Simple model: A plays A, B and C with probabilities a, b, and c, with the constraint that each must be above the trembling probability t (=p/3 using the p above). (Two doesn't tremble for simplicity's sake)

Two picks X with probability x and Y with probability (1-x).

So their expected utilities are:

One: 3a + 2b + 6c(1-x)

Two: 2b(1-x) + cx = 2b + (c - 2b)x

It seems pretty clear that One wants b to be as low as possible (either a or c will always be better), so we can set b=t.

So One's utility is (constant) - 3c + 6c - 6cx = (constant) + 3c(1 - 2x)

So One wants c to maximize (1-2x)c, and Two wants x to maximize (c-2t)x

The Nash equilibrium is at 1-2x=0 and c-2t=0, so c=2t and x=0.5

So in other words, if One's hand can tremble then he should also sometimes deliberately pick C, to make it twice as likely as B, and Two should flip a coin.

(and as t converges towards 0, we do indeed get One always picking A)
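(A quick numerical sanity check of the above, as a sketch: the payoff functions are the ones from this comment, and t = 0.01 is an arbitrary small tremble floor.)

```python
# Sanity check of the equilibrium above: at c = 2t and x = 1/2,
# each player is indifferent, so neither can gain by deviating.
# Payoffs as in the comment; t = 0.01 is an arbitrary tremble floor.

t = 0.01

def u_one(c, x):
    a = 1 - t - c                    # b is pinned at the floor t
    return 3*a + 2*t + 6*c*(1 - x)   # One: 3a + 2b + 6c(1-x)

def u_two(c, x):
    return 2*t*(1 - x) + c*x         # Two: 2b(1-x) + cx, with b = t

# At x = 1/2, One's utility no longer depends on c ...
assert abs(u_one(t, 0.5) - u_one(3*t, 0.5)) < 1e-9
# ... and at c = 2t, Two's utility no longer depends on x.
assert abs(u_two(2*t, 0.0) - u_two(2*t, 1.0)) < 1e-9
print("equilibrium checks pass")
```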

Comment author: 18 July 2014 10:27:19AM *  5 points [-]

Eliezer Yudkowsky's reasons for banning Roko's post have always been somewhat vague. But I don't think he did it solely because it could cause some people nightmares.

(1) In one of his original replies to Roko’s post (please read the full comment, it is highly ambiguous) he states his reasons for banning Roko’s post, and for writing his comment (emphasis mine):

I’m banning this post so that it doesn’t (a) give people horrible nightmares and (b) give distant superintelligences a motive to follow through on blackmail against people dumb enough to think about them in sufficient detail, though, thankfully, I doubt anyone dumb enough to do this knows the sufficient detail. (I’m not sure I know the sufficient detail.)

…and further…

For those who have no idea why I’m using capital letters for something that just sounds like a random crazy idea, and worry that it means I’m as crazy as Roko, the gist of it was that he just did something that potentially gives superintelligences an increased motive to do extremely evil things in an attempt to blackmail us. It is the sort of thing you want to be EXTREMELY CONSERVATIVE about NOT DOING.

His comment indicates that he doesn't believe this could currently work. Yet he also does not seem to dismiss the danger entirely, whether present or future. Why didn't he clearly state that there is nothing to worry about?

(2) The following comment by Mitchell Porter, to which Yudkowsky replies “This part is all correct AFAICT.”:

It’s clear that the basilisk was censored, not just to save unlucky susceptible people from the trauma of imagining that they were being acausally blackmailed, but because Eliezer judged that acausal blackmail might actually be possible. The thinking was: maybe it’s possible, maybe it’s not, but it’s bad enough and possible enough that the idea should be squelched, lest some of the readers actually stumble into an abusive acausal relationship with a distant evil AI.

If Yudkowsky really thought it was irrational to worry about any part of it, why didn't he allow people to discuss it on LessWrong, where he and others could debunk it?

Comment author: 18 July 2014 11:53:02AM 3 points [-]

There were several possible fairly-good reasons for deleting that post, and also fairly good reasons for giving Eliezer some discretion as to what kind of stuff he can ban. Going over those reasons (again) is probably a waste of everybody's time. Who cares whether a decision taken years ago was sensible, or slightly-wrong-but-within-reason, or wrong-but-only-in-hindsight, etc.?

Comment author: 18 July 2014 04:18:43AM 1 point [-]

I think your thought process is indicative of our modern age. It is hard to conceive of nations in an online environment that, at least for the time being, remains borderless. Nationalism has produced two world wars and several smaller conflicts, and given its impetus in 1648 as a way of allowing kingdoms and countries to decide their religion without outside influence, I think the 21st century will see fewer borders rather than more. That being said, a quick read of any news site would make that statement hard to believe, as nations get more nationalistic. However, advances in communication, transportation, and the movement of goods, services, capital, and people will make borders mean less and less.

What is a nation? What is a geographical area? What is a political border? Our economy and our way of working is reducing that meaning. Thirty years ago your country was the sum total of your upbringing, language, culture, shared values, and chances in life. Movement was long and hard and fraught with difficulty. That is becoming far less true in our modern age. I think we all will feel tied to where we were born and grew up because it's comfortable, it's homey, we know the back routes, we know the cultural rules, and, tribal as humans are, we generally fit into that tribe. Those are all important things.

Your personal identity is not, nor has it ever been, tied to a passport. That is just a little book that tells other countries where you are coming from and some information about you. It is not you; it is a part of you that is on paper for movement purposes. The lesson in life that you are more often than not wrong is nothing to be afraid of or anything to cause you distress; rather, it means you are finding the juicy part of the human existence.

Comment author: 18 July 2014 08:30:26AM 3 points [-]

That being said a quick read of any news site would make that statement hard to believe as nations get more nationalistic.

News sites don't provide much evidence of an increase in nationalist sentiment, unless you count the usual spin any time something bad is reported: "This bad thing happened! Things are getting worse!"

Comment author: 18 July 2014 06:50:22AM 10 points [-]

Looks like a fairly standard parable about how we should laugh at academic theorists and eggheads because of all those wacky things they think. If only Less Wrong members had the common sense of the average Salon reader, then they would instantly see through such silly dilemmas.

Giving people the chance to show up and explain that this community is Obviously Wrong And Here's Why is a pretty good way to start conversations, human nature being what it is. An opportunity to have some interesting dialogues about the broader corpus.

That said, I am in the camp that finds the referenced 'memetic hazard' to be silly. If you are the sort of person who takes it seriously, this precise form of publicity might be more troubling for the obvious 'hazard' reasons. Out of curiosity, what is the fraction of LW posters that believes this is a genuine risk?

Comment author: 18 July 2014 07:30:14AM 14 points [-]

Out of curiosity, what is the fraction of LW posters that believes this is a genuine risk?

Vanishingly small - the post was deleted by Eliezer (was that what, a year ago? two?) because it gave some people he knew nightmares, but I don't remember anybody actually complaining about it. Most of the ensuing drama was about whether Eliezer was right in deleting it. The whole thing has been a waste of everybody's time and attention (as community drama over moderation almost always is).

Comment author: 17 July 2014 08:09:49AM *  1 point [-]

As a little side project, I entertain myself with the idea of writing fiction that blends fantasy and mega-structure engineering.
The first step will be to devise a consistent magic system, but of course, to make the story interesting, I'll have to come up with compelling characters and their conflicts. Do you know of any good stories, long or short, that revolve around mega-structures or have them as a backdrop, that I could draw inspiration from? Fantasy or extreme science fiction would be best.

Comment author: 17 July 2014 08:21:49AM 2 points [-]

Rendezvous with Rama?

Comment author: 16 July 2014 09:45:04AM *  5 points [-]

"Identity for NPCs" sounds a bit dismissive for a mechanism that works pretty well (to me it sounds close to "cooperating is for suckers, rationalists should defect").

Oh. Uhm, if we put it in Prisoner's Dilemma language, I'd rather say -- rational people can analyze the situation and choose with whom to cooperate even if the other person is different, but stupid people need some simple and safe algorithm, such as: "cooperate with identical copies of myself, defect against everyone else".

Which works decently, if you have many identical copies at the same place, interacting mostly with each other. You cannot be exploited by someone using a smart algorithm. That's a relatively impressive outcome for such a simple algorithm.

The disadvantage is that you can't cooperate even with people using functionally identical algorithms with different group markers (e.g. using a different language, or dressing differently). And you have to suppress your deviations from the official norm (e.g. a minority sexual orientation). And you have a barrier to self-improvement, because the improved version is by definition different than the original one. -- On the other hand, if you manage to somehow artificially impose some positive change on everyone at the same time (or very slowly during a long time period), then the positive change will be preserved. Unfortunately, it works the same way with negative changes.
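For concreteness, the "cooperate only with identical copies" rule can be sketched in a few lines (the Prisoner's Dilemma payoffs T=5, R=3, P=1, S=0 are the standard assumed ones, and the "marker" names are made up):

```python
# A minimal sketch of "cooperate with identical copies of myself,
# defect against everyone else", using standard (assumed) one-shot
# Prisoner's Dilemma payoffs: T=5, R=3, P=1, S=0.

PAYOFF = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
          ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}

def clone_rule(my_marker, their_marker):
    """Cooperate only with agents carrying an identical group marker."""
    return 'C' if my_marker == their_marker else 'D'

def play(m1, m2):
    a1, a2 = clone_rule(m1, m2), clone_rule(m2, m1)
    return PAYOFF[(a1, a2)]

print(play('red', 'red'))    # two copies cooperate: (3, 3)
print(play('red', 'blue'))   # different markers: mutual defection, (1, 1)
# The rule never plays C against a defector, so it never receives the
# sucker's payoff -- "you cannot be exploited" -- at the cost of failing
# to cooperate with functionally identical agents under other markers.
```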

Comment author: 16 July 2014 12:11:00PM *  1 point [-]

Oh. Uhm, if we put it in Prisoner's Dilemma language, I'd rather say -- rational people can analyze the situation and choose with whom to cooperate even if the other person is different, but stupid people need some simple and safe algorithm, such as: "cooperate with identical copies of myself, defect against everyone else".

It's not clear that someone analyzing the situation will choose to cooperate - plenty of smart people have argued that the rational behavior in prisoner's dilemma is to defect.

And I would argue that even for smart people, a simple algorithm is more likely to get everyone on board; a complicated but "better" solution which no one else follows (and that would only count as "better" if everybody was following it) is not worth following.

Comment author: 15 July 2014 08:19:47AM *  4 points [-]

Now you made me want to make a rational argument for nationalism. Uhm... let's try this:

Imagine that some cultures do things that you consider horrible (e.g. genital mutilation, "honor" killing, killing people for blasphemy or sexual orientation, etc.). Your culture doesn't do that. However, for some reasons, the populations in the other cultures are growing, the population in your culture is not, people immigrate to your country and bring their horrible behavior with them. You are afraid that unless something is done against this, the repulsive behavior will become a norm in your country, too. Maybe not the majority norm, but still something that is more or less tolerated, because no mainstream politician would risk making too many voters angry. How to stop this?

Appealing to national feelings may be a realistic strategy. (The point is not to invent something that intellectuals would agree with, but something that has a realistic chance of getting popular support.) You can try to make people more proud of your local culture, and emphasise that not doing X is an important part of what makes this country great. Thus you get a strong political force against X.

Related: "Use Your Identity Carefully". -- My analogy is that nationalism is identity on the mass level (identity for NPCs?). It is a tool to preserve a group of memes, both good and bad. Instead of throwing away the tool, you can try to increase the proportion of the good memes in the mix.

Comment author: 16 July 2014 08:05:43AM 5 points [-]

Having a strong national identity, and norms around self-sacrifice in favor of "The Fatherland" seems like pretty good instrumental rationality, in terms of coordination, enforcing cooperation, economies of scale, etc. "Identity for NPCs" sounds a bit dismissive for a mechanism that works pretty well (to me it sounds close to "cooperating is for suckers, rationalists should defect").

I don't think "rejecting weird foreign norms" has much to do with the historical causes of nationalist sentiment; I'd say it had more to do with enforcing strong norms over weak norms (for example, speaking the same language, not engaging in nepotism, obeying the law, sending your children to school, taking up arms to defend the country), and putting the primary focus of loyalty on the country (and not the king, the church, your family or your village). Foreigners from far away with different norms are a more recent phenomenon.

Comment author: 14 July 2014 11:48:31PM *  5 points [-]

The term "nationalism" is used in at least two very different ways. The particularist use, more accurately termed "national chauvinism" and usually but not always ethnically based, is the idea that one's own nation is in some way better than all the others, and that the interests of its people should be accorded disproportionate weight. Note that this kind of nationalist doesn't necessarily care about political organization outside of his own country; he has an ideology about his nation, not necessarily about nations in general.

I would agree that used in this sense, "nationalism" is basically indefensible.

There is a different, generalist use of the term "nationalism," however, which traces academically to people like Ernest Gellner, and philosophically, arguably, back to people like Friedrich List. Nationalism in this sense is merely the proposition, "National boundaries should coincide with state boundaries." Importantly, it doesn't require ethnically-defined nations, merely people who self-identify as being part of a common national community, whether that be based on blood, culture, or something else. A natural corollary of this view of nations and nationalism is that, at least in the world as it actually exists now, everyone is either a nationalist or an imperialist (one could carve out a small exception for anarchists).

In this generalist sense of "nationalism," which makes claims not about "my nation" but about "all nations," I think there are tradeoffs on both sides. I identify as a somewhat ambivalent nationalist. But unlike with the first sense, I don't think you can argue that the nationalist position is prima facie inferior from a consequentialist standpoint.

Comment author: 15 July 2014 08:25:19AM *  11 points [-]

The particularist use, more accurately termed "national chauvinism" and usually but not always ethnically based, is the idea that one's own nation is in some way better than all the others, and that the interests of its people should be accorded disproportionate weight.

The "in some way better than all the others" bit isn't a very charitable reading of that position; if a Frenchman wants the French government to further the interests of France and Frenchmen (even at the expense of other countries), then it's a form of nationalism, but it doesn't include a belief that "France is better than the rest"; it's only that he cares more about France than about the rest.

Having diminishing "circles of empathy" for others depending on whether they're in your family, your city, your country, your religion or race etc. is pretty normal, but there's variance about what levels are considered as more important; (pretty much) everybody cares more about their family, but some may see religion or political affiliation or their city as a "more important" identity than one's country (this is assuming country = nation, which as you say is usually the case in the West now); "national chauvinists" would be the ones who put their country above other identities.

Comment author: 10 July 2014 03:50:50PM 1 point [-]

(Yes this is a fully general argument in favor of the climatic status quo)

This actually looks like a fully general argument in favor of any status quo.

Comment author: 10 July 2014 04:44:50PM 1 point [-]

Are you saying my argument proves too much?

I agree that it's an argument that can be used in favor of the status quo in a lot of situations (it's similar to Chesterton's Fence), but it won't always apply as strongly; the argument mostly requires that changing the status quo disrupts systems that:

• Impact human welfare a lot, and
• Are slow to re-stabilize

... so a good argument can be made that climate does that, but the effect is less strong for national politics, and even less strong for things like corporate policies, roles inside a family, etc.

Comment author: 08 July 2014 04:18:30PM 9 points [-]

I think that the Earth will get warmer in the same way that I think exercise is good for longevity. I'm not competent to evaluate the arguments or even evaluate the people who evaluate the arguments, but there seems to be enough evidence from different sources to make me reasonably confident.

I have no idea what climate change's impact will be on people. Are current global temperatures optimized for human welfare? Probably not. Will making temperatures warmer be an overall gain or loss to human welfare? I have no clue. Is the cheapest way to avoid harms reducing CO2 emissions, or building levees, or moving everyone 10 miles inland, or putting mirrors in the stratosphere, or something else? Again, no clue.

I am encouraged that people are taking the welfare of humans hundreds of years in the future seriously. I am discouraged that the discussion of climate change is dominated by whether it is real, not by what its effects will be and how we can maximize the upside and minimize the downside.

Comment author: 10 July 2014 03:43:39PM 1 point [-]

Are current global temperatures optimized for human welfare?

A lot of systems are optimized for current temperatures, including ecosystems, agriculture, the economy, coastal habitations, etc. - and degradation of those systems is bad for human welfare; they would eventually re-balance, but at a cost.

(Yes this is a fully general argument in favor of the climatic status quo)
