
International cooperation vs. AI arms race

15 Brian_Tomasik 05 December 2013 01:09AM

Summary

I think there's a decent chance that governments will be the first to build artificial general intelligence (AI). International hostility, especially an AI arms race, could exacerbate risk-taking, hostile motivations, and errors of judgment when creating AI. If so, then international cooperation could be an important factor to consider when evaluating the flow-through effects of charities. That said, we may not want to popularize the arms-race consideration too openly lest we accelerate the race.

Will governments build AI first?

AI poses a national-security threat, and unless the militaries of powerful countries are very naive, it seems to me unlikely they'd allow AI research to proceed in private indefinitely. At some point the US military would confiscate the project from Google or Goldman Sachs, if the US military isn't already ahead of them in secret by that point. (DARPA already funds a lot of public AI research.)

There are some scenarios in which private AI research wouldn't be nationalized:

  • An unexpected AI foom occurs before anyone realizes what is coming.
  • The private developers stay underground for long enough not to be caught. This becomes less likely the more government surveillance improves (see "Arms Control and Intelligence Explosions").
  • AI developers move to a "safe haven" country where they can't be taken over. (It seems like the international community might prevent this, however, in the same way it now seeks to suppress terrorism in other countries.)
Each of these scenarios could happen, but it seems most likely to me that governments would ultimately control AI development.

AI arms races

Government AI development could go wrong in several ways. Probably most on LW expect the most likely failure mode to be that governments would botch the process by not appreciating the risks at hand. It's also possible that governments would use the AI for malevolent, totalitarian purposes.

It seems that both of these bad scenarios would be exacerbated by international conflict. Greater hostility means countries are more inclined to use AI as a weapon. Indeed, whoever builds the first AI can take over the world, which makes building AI the ultimate arms race. A USA-China race is one reasonable possibility.

Arms races encourage risk-taking -- being willing to skimp on safety measures to improve your odds of winning ("Racing to the Precipice"). In addition, the weaponization of AI could lead to worse expected outcomes in general. CEV seems to have less hope of success in a Cold War scenario. ("What? You want to include the evil Chinese in your CEV??") (ETA: With a pure CEV, presumably it would eventually count Chinese values even if it started with just Americans, because people would become more enlightened during the process. However, when we imagine more crude democratic decision outcomes, this becomes less likely.)

Ways to avoid an arms race

Averting an AI arms race seems to be an important topic for research. It could be partly informed by the Cold War and other nuclear arms races, as well as by efforts to prevent the proliferation of chemical and biological weapons.

Apart from more robust arms control, other factors might help:

  • Improved international institutions like the UN, allowing for better enforcement against defection by one state.
  • In the long run, a scenario of global governance (i.e., a Leviathan or singleton) would likely be ideal for strengthening international cooperation, just like nation states reduce intra-state violence.
  • Better construction and enforcement of nonproliferation treaties.
  • Improved game theory and international-relations scholarship on the causes of arms races and how to avert them. (For instance, arms races have sometimes been modeled as iterated prisoner's dilemmas with imperfect information; a toy version is sketched just after this list.)
  • Improved verification, which has historically been a weak point for nuclear arms control. (The concern is that if you haven't verified well enough, the other side might be arming while you're not.)
  • Moral tolerance and multicultural perspective, aiming to reduce people's sense of nationalism. (In the limit where neither Americans nor Chinese cared which government won the race, there would be no point in having the race.)
  • Improved trade, democracy, and other forces that historically have reduced the likelihood of war.
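
The game-theory item above mentions modeling arms races as iterated prisoner's dilemmas with imperfect information; here is the toy sketch promised there. The strategies, payoff numbers, and noisy-observation setup are my own illustrative assumptions, not taken from any cited source: each round two states choose between restraint and arming, and each observes the other's previous move only through a noisy channel standing in for imperfect verification.

```python
import random

# Payoffs for one round, (my move, their move) -> my payoff, with the usual
# prisoner's-dilemma ordering. The numbers are illustrative only.
PAYOFFS = {
    ("RESTRAIN", "RESTRAIN"): 3,
    ("RESTRAIN", "ARM"): 0,
    ("ARM", "RESTRAIN"): 5,
    ("ARM", "ARM"): 1,
}

def noisy(move, error_rate):
    """Report the opponent's move through an unreliable channel (failed verification)."""
    if random.random() < error_rate:
        return "ARM" if move == "RESTRAIN" else "RESTRAIN"
    return move

def tit_for_tat(observed_history):
    """Restrain first; afterwards copy whatever the other side appeared to do last."""
    return "RESTRAIN" if not observed_history else observed_history[-1]

def simulate(rounds=10000, error_rate=0.05):
    seen_by_a, seen_by_b = [], []   # each side's (noisy) view of the other's moves
    total_a = total_b = 0
    for _ in range(rounds):
        move_a = tit_for_tat(seen_by_a)
        move_b = tit_for_tat(seen_by_b)
        total_a += PAYOFFS[(move_a, move_b)]
        total_b += PAYOFFS[(move_b, move_a)]
        seen_by_a.append(noisy(move_b, error_rate))
        seen_by_b.append(noisy(move_a, error_rate))
    return total_a / rounds, total_b / rounds

if __name__ == "__main__":
    for err in (0.0, 0.05, 0.2):
        print(f"observation error {err}: average payoffs {simulate(error_rate=err)}")
```

With perfect observation the two reciprocators sustain mutual restraint; as the error rate rises, misread moves trigger spirals of retaliation and average payoffs fall toward mutual arming, which is one way to see why the verification point above matters.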

Are these efforts cost-effective?

World peace is hardly a goal unique to effective altruists (EAs), so we shouldn't necessarily expect low-hanging fruit. On the other hand, projects like nuclear nonproliferation seem relatively underfunded even compared with anti-poverty charities.

I suspect more direct MIRI-type research has higher expected value, but among EAs who don't want to fund MIRI specifically, encouraging donations toward international cooperation could be valuable, since it's certainly a more mainstream cause. I wonder if GiveWell would consider studying global cooperation specifically beyond its indirect relationship with catastrophic risks.

Should we publicize AI arms races?

When I mentioned this topic to a friend, he pointed out that we might not want the idea of AI arms races too widely known, because then governments might take the concern more seriously and therefore start the race earlier -- giving us less time to prepare and less time to work on FAI in the meanwhile. From David Chalmers, "The Singularity: A Philosophical Analysis" (footnote 14):

When I discussed these issues with cadets and staff at the West Point Military Academy, the question arose as to whether the US military or other branches of the government might attempt to prevent the creation of AI or AI+, due to the risks of an intelligence explosion. The consensus was that they would not, as such prevention would only increase the chances that AI or AI+ would first be created by a foreign power. One might even expect an AI arms race at some point, once the potential consequences of an intelligence explosion are registered. According to this reasoning, although AI+ would have risks from the standpoint of the US government, the risks of Chinese AI+ (say) would be far greater.

We should take this information-hazard concern seriously and remember the unilateralist's curse. If it proves fatal to explicit discussion of AI arms races, we might instead encourage international cooperation without explaining why. Fortunately, it wouldn't be hard to encourage international cooperation on grounds other than AI arms races if we wanted to do so.

ETA: Also note that a government-level arms race might be preferable to a Wild West race among a dozen private AI developers where coordination and compromise would be not just difficult but potentially impossible.

Robust Cooperation in the Prisoner's Dilemma

69 orthonormal 07 June 2013 08:30AM

I'm proud to announce the preprint of Robust Cooperation in the Prisoner's Dilemma: Program Equilibrium via Provability Logic, a joint paper with Mihaly Barasz, Paul Christiano, Benja Fallenstein, Marcello Herreshoff, Patrick LaVictoire (me), and Eliezer Yudkowsky.

This paper was one of three projects to come out of the 2nd MIRI Workshop on Probability and Reflection in April 2013, and had its genesis in ideas about formalizations of decision theory that have appeared on LessWrong. (At the end of this post, I'll include links for further reading.)

Below, I'll briefly outline the problem we considered, the results we proved, and the (many) open questions that remain. Thanks in advance for your thoughts and suggestions!

Background: Writing programs to play the PD with source code swap

(If you're not familiar with the Prisoner's Dilemma, see here.)

The paper concerns the following setup, which has come up in academic research on game theory: say that you have the chance to write a computer program X, which takes in one input and returns either Cooperate or Defect. This program will face off against some other computer program Y, but with a twist: X will receive the source code of Y as input, and Y will receive the source code of X as input. And you will be given your program's winnings, so you should think carefully about what sort of program you'd write!

Of course, you could simply write a program that defects regardless of its input; we call this program DefectBot, and call the program that cooperates on all inputs CooperateBot. But with the wealth of information afforded by the setup, you might wonder whether some program could achieve mutual cooperation in situations where DefectBot achieves mutual defection, without thereby risking a sucker's payoff. (Douglas Hofstadter would call this a perfect opportunity for superrationality...)
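
To make the setup concrete, here is a toy harness. The interface and names are mine, not the paper's: a "program" is just the source text of a function move(own_source, opponent_source) returning "C" or "D", and passing own_source in explicitly sidesteps the quine trick by which a real program would know its own code.

```python
# A "program" is the source of a function move(own_source, opponent_source) -> "C" or "D".
DEFECTBOT = 'def move(own_source, opponent_source):\n    return "D"\n'
COOPERATEBOT = 'def move(own_source, opponent_source):\n    return "C"\n'

def run(program, opponent):
    """Execute a program, handing it its own source and its opponent's source."""
    namespace = {}
    exec(program, namespace)
    return namespace["move"](program, opponent)

def play(program_x, program_y):
    """Source-code swap: each program sees the other's source, then both move at once."""
    return run(program_x, program_y), run(program_y, program_x)

if __name__ == "__main__":
    print(play(DEFECTBOT, COOPERATEBOT))   # ('D', 'C'): DefectBot exploits CooperateBot
```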

Previously known: CliqueBot and FairBot

And indeed, there's a way to do this that's been known since at least the 1980s. You can write a computer program that knows its own source code, compares it to the input, and returns C if and only if the two are identical (and D otherwise). Thus it achieves mutual cooperation in one important case where it intuitively ought to: when playing against itself! We call this program CliqueBot, since it cooperates only with the "clique" of agents identical to itself.
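
In that toy interface, CliqueBot is a one-line string comparison (again a sketch in my own notation, not the paper's):

```python
def cliquebot(own_source, opponent_source):
    # Cooperate iff the opponent's code is character-for-character identical to mine;
    # defect against everything else.
    return "C" if opponent_source == own_source else "D"
```

Against an exact copy of itself it gets mutual cooperation; against anything syntactically different, even a functionally identical rewrite, it defects, which is exactly the fragility discussed next.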

There's one particularly irksome issue with CliqueBot, and that's the fragility of its cooperation. If two people write functionally analogous but syntactically different versions of it, those programs will defect against one another! This problem can be patched somewhat, but not fully fixed. Moreover, mutual cooperation might be the best strategy against some agents that are not even functionally identical, and extending this approach requires you to explicitly delineate the list of programs that you're willing to cooperate with. Is there a more flexible and robust kind of program you could write instead?

As it turns out, there is: in a 2010 post on LessWrong, cousin_it introduced an algorithm that we now call FairBot. Given the source code of Y, FairBot searches for a proof (of less than some large fixed length) that Y returns C when given the source code of FairBot, and then returns C if and only if it discovers such a proof (otherwise it returns D). Clearly, if our proof system is consistent, FairBot only cooperates when that cooperation will be mutual. But the really fascinating thing is what happens when you play two versions of FairBot against each other. Intuitively, it seems that either mutual cooperation or mutual defection would be stable outcomes, but it turns out that if their limits on proof lengths are sufficiently high, they will achieve mutual cooperation!
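
A schematic rendering of FairBot in the same toy interface. The bounded proof search is the whole substance of the construction and is only stubbed out below; provable_within is a placeholder of my own, not a real library call, and the sentence encoding is purely illustrative.

```python
def provable_within(sentence, max_length):
    """Placeholder for bounded proof search in a fixed formal system (e.g. PA).

    A real FairBot would enumerate every proof of length at most max_length and
    return True iff one of them proves `sentence`; that search is computable but
    astronomically expensive, so it is left unimplemented in this sketch.
    """
    raise NotImplementedError("thought experiment only")

def fairbot(own_source, opponent_source, max_length=10**6):
    # Cooperate iff there is a bounded-length proof that the opponent cooperates
    # when given FairBot's own source code; otherwise defect.
    sentence = f"run({opponent_source!r}, {own_source!r}) == 'C'"
    return "C" if provable_within(sentence, max_length) else "D"
```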

The proof that they mutually cooperate follows from a bounded version of Löb's Theorem from mathematical logic. (If you're not familiar with this result, you might enjoy Eliezer's Cartoon Guide to Löb's Theorem, which is a correct formal proof written in much more intuitive notation.) Essentially, the asymmetry comes from the fact that both programs are searching for the same outcome, so that a short proof that one of them cooperates leads to a short proof that the other cooperates, and vice versa. (The opposite is not true, because the formal system can't know it won't find a contradiction. This is a subtle but essential feature of mathematical logic!)
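
For reference, here is the statement being invoked; the paper uses a version with explicit proof-length bounds, which I omit.

```latex
% Löb's Theorem, written as the characteristic axiom of the provability logic GL:
\Box(\Box P \rightarrow P) \rightarrow \Box P
% In words: if it is provable that "provability of P implies P", then P is provable.
%
% Sketch of the application: let A = "FairBot_1 cooperates" and B = "FairBot_2
% cooperates". By construction A <-> []B and B <-> []A, all provably. Taking
% P = A /\ B, we get []P -> ([]A /\ []B) -> (B /\ A) = P, so "[]P -> P" is provable,
% and Löb's Theorem then makes P itself provable: both bots find their proofs
% and cooperate.
```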

Generalization: Modal Agents

Unfortunately, FairBot isn't what I'd consider an ideal program to write: it happily cooperates with CooperateBot, when it could do better by defecting. This is problematic because in real life, the world isn't separated into agents and non-agents, and any natural phenomenon that doesn't predict your actions can be thought of as a CooperateBot (or a DefectBot). You don't want your agent to be making concessions to rocks that happened not to fall on it. (There's an important caveat: some things have utility functions that you care about, but don't have sufficient ability to predicate their actions on yours. In that case, though, it wouldn't be a true Prisoner's Dilemma if your values actually prefer the outcome (C,C) to (D,C).)

However, FairBot belongs to a promising class of algorithms: those that decide on their action by looking for short proofs of logical statements that concern their opponent's actions. In fact, there's a really convenient mathematical structure that's analogous to the class of such algorithms: the modal logic of provability (known as GL, for Gödel-Löb).
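
Schematically (my own rendering of the idea, not notation taken from the paper): a modal agent's action is defined by a formula of GL in which the opponent's action appears under the provability operator. For FairBot, writing a for "FairBot cooperates against Y" and b for "Y cooperates against FairBot":

```latex
a \;\leftrightarrow\; \Box\, b
% "Cooperate exactly when it is provable that the opponent cooperates back."
% Because GL is decidable, how such agents behave against one another can in
% principle be settled by a finite computation rather than open-ended proof search.
```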

So that's the subject of this preprint: what can we achieve in decision theory by considering agents defined by formulas of provability logic?

continue reading »

Details of Taskforces; or, Cooperate Now

15 paulfchristiano 05 April 2011 05:16PM

Recently I've spent a lot of time thinking about what exactly I should be doing with my life. I'm lucky enough to be in an environment where I can occasionally have productive conversations about the question with smart peers, but I suspect I would think much faster if I spent more of my time with a community grappling with the same issues. Moreover, I expect I could be more productive if I spent time with others trying to get similar things done, not to mention the benefits of explicit collaboration.

I would like to organize a nonstandard sort of meetup: regular gatherings with people who are dealing with the question "How do I do the most good in the world?" focused explicitly on answering the question and acting on the answer. If I could find a group with which I am socially compatible, I might spend a large part of my time working with them. I am going to use the term "taskforce" because I don't know of a better one. It is vaguely related to but quite different from the potential taskforces Eliezer discusses.

Starting such a taskforce requires making many decisions.

Size:

I believe that even two people who think through issues together and hold each other accountable are significantly more effective than two people working independently. At the other limit, eventually the addition of individuals doesn't increase the effectiveness of the group and increases coordination costs. Based on a purely intuitive feeling for group dynamics, I would feel most comfortable with a group of 5-6 until I knew of a better scheme for productively organizing large groups of rationalists (at which point I would want to grow as large as that scheme could support). I suspect in practice there will be huge constraints based on interest and commitment; I don't think this is a terminal problem, because there are probably significant gains even for 2-4 people, and I don't think it's a permanent one, because I am optimistic about our ability as a community to grow rapidly.

Frequency:

Where I am right now in life, I believe that thinking about this question and gathering relevant evidence is the most important thing for me to be doing. I would be comfortable spending several hours several times a week working with a group I got along with. In practice, then, I would like to invest as much time as schedules and interests allow. I think the best plan is to allow and expect self-modification: make the choice of time-commitment an explicit decision controlled by the group. Meeting once a week seems like a fair default which can be supported by most schedules.

Concreteness:

There are three levels of concreteness I can imagine for the initial goals of a taskforce:

  • The taskforce is created with a particular project or a small collection of possible projects in mind. Although the possibility of abandoning a project is available (like all other changes), having a strong concrete focus may help a great deal with maintaining initial enthusiasm, attracting people, and fostering a sense of having a real effect on the world rather than empty theorizing. The risk is that, while I suspect many of us have good ideas, deciding which projects are best is itself a large part of why I care about interacting with other people. Just starting something may be the quickest way to get a sense of what is most important, but it may also slow progress down significantly.
  • The taskforce is created with the goal of converging to a practical project quickly. The discussion is of the form "How should we be doing the most good right now: what project are we equipped to take on given our current resources?" While not quite as focused as the first possibility, it does at least keep the conversation grounded.
  • The taskforce is created with the most open-ended possible goal. Helping its members decide how to spend their time in the coming years is just as important as coming up with a project to work on next week. A particular project is adopted only if the value of that project exceeds the value of further deliberation, or if working on a project is a good way to gather evidence or develop important skills.

I am inclined towards the most abstract level if it is possible to get enough support, since it is always capable of descending to either of the others. I think the most important question is how much confidence you have in a group of rationalists to understand the effectiveness of their own collective behavior and modify appropriately. I have a great deal, especially when the same group meets repeatedly and individuals have time to think carefully in between meetings.

Metaness:

A group may spend a long time discussing efficient structures for organizing, communicating, gathering information, making decisions, etc. Alternatively, a group may avoid these issues in favor of actually doing things--even if by doing things we only mean discussing the issues the group was created to discuss. Most groups I have been a part of have very much tried to do things instead of refining their own processes.

My best plan is to begin by working on non-meta issues. However, the ability of groups of rationalists to efficiently deliberate is an important one to develop, so it is worth paying a lot of attention to anything that reduces effectiveness. In particular, I would support very long digressions to deal with very minor problems as long as they are actually problems. Our experiences can be shared, any question answered definitively remains answered definitively, and any evidence gathered is there for anyone else who wants to see it. A procedural digression should end when it is no longer the best use of time--not because of a desire to keep getting things done for the sake of getting things done. Improving our rationality as individuals should be treated similarly; I am no longer interested in setting out to improve my rationality for the sake of becoming more rational, but I am interested in looking very carefully for failures of rationality that actually impact my effectiveness.

I can see how this approach might be dangerous; but it has the great advantage of being able to rescue itself from failure, by correctly noticing that entertaining procedural digressions is counter-productive. In some sense this is universally true: a system which does not limit self-examination can at least in principle recover from arbitrary failures. Moreover, it offers the prospect of refining the rationality of the group, which in turn improves the group's ability to select and implement efficient structures, which closes a feedback loop whose limit may be an unusually effective group.

Homogeneity:

A homogeneous taskforce is composed of members who face similar questions in their own lives, and who are more likely to agree about which issues require discussion and which projects they could work profitably on. An inhomogeneous taskforce is composed of members with a greater variety of perspectives, who are more likely to have complementary information and to avoid failures. In general, I believe that working for the common good involves enough questions of general importance (i.e., of importance to people in very different positions) that the benefits of inhomogeneity seem greater than the costs.

In practice, this issue is probably forced for now. Whoever is interested enough to participate will participate (and should be encouraged to participate), until there is enough interest that groups can form selectively.

Atmosphere:

In principle the atmosphere of a community is difficult to control. But the content of discussion and the structure of expectations prior to the first meeting have a significant effect on the atmosphere. Intuitively, I expect there is a significant risk of a group falling apart immediately for a variety of reasons: social incompatibility, apparent uselessness, inability to maintain initial enthusiasm based on unrealistic expectations, etc. Forcing even a tiny community into existence is hard (though I suspect not impossible).

I think the most important part of the atmosphere of a community is its support for criticism, and willingness to submit beliefs to criticism. There is a sense (articulated by Orson Scott Card somewhere at some point) that you maintain status by never showing your full hand; by never admitting "That's it. That's all I have. Now you can help me decide whether I am right or wrong." This attitude is very dangerous when coupled with normal status-seeking, because it's not clear to me that it is possible to recover from it. I don't believe that having rational members is enough to avoid this failure.

I don't have any other observations, except that factors controlling atmosphere should be noted when trying to understand the effectiveness of particular efforts to start communities of any sort, even though such factors are difficult to measure or describe.

Finding People:

The internet is a good place to find people, but there is only a weak sense of personal responsibility throughout much of it, and committing to dealing with people you don't know well is hard/unwise. The real world is a much harder place to find people, but conversations in person quickly establish a sense of personal responsibility and can be used to easily estimate social compatibility. Most people are strangers, and the set of people who could possibly be convinced to work with a taskforce is extremely sparse. On the other hand, your chances of convincing an acquaintance to engage in an involved project with you seem to be way higher.

My hope is that LW is large enough, and unusual enough, that it may be possible to start something just by exchanging cheap talk here. At least, I think this is possible and therefore worth acting on, since alternative states of the world will require more time to get something like this rolling. Another approach is to use the internet to orchestrate low-key meetings, and then bootstrap up from modest personal engagement to something more involved. Another is to try and use the internet to develop a community which can better support/encourage the desired behavior. Of course there are approaches that don't go through the internet, but those approaches will be much more difficult and I would like to explore easy possibilities first.

Recovery from Failure:

I can basically guarantee that if anything comes of my desire, it will include at least one failure. The real cost of failure is extremely small. My fear, based on experience, is that every time an effort at social organization fails it significantly decreases enthusiasm for similar efforts in the future. My only response to this fear is: don't be too optimistic, and don't be too pessimistic. Don't stake too much of your hope on the next try, but don't assume the next try will fail just because the last one did. In short: be rational.


Conclusion:

There are more logistical issues, many reasons a taskforce might fail, and many reasons it might not be worth the effort. But I believe I can do much more good in the future than I have done in the past, and that part of that will involve more effectively exploiting the fact that I am not alone as a rationalist. Even if the only conclusion of a taskforce is to disband itself, I would like to give it a shot.

As groups succeed or fail, different answers to these questions can be tested. My initial impulse in favor of starting abstractly and self-modifying towards concreteness can be replaced by emulating the success of other groups. Of course, this is an optimistic vision: for now, I am focused on getting one group to work once.

I welcome thoughts on other high-level issues, criticism of my beliefs,  or (optimistically) discussions/prioritization of particular logistical issues. But right now I would mostly like to gauge interest. What arguments could convince you that such a taskforce would be useful / what uncertainties would have to be resolved? What arguments could convince you to participate? Under what conditions would you be likely to participate? Where do you live?

I am in Cambridge, am willing to travel anywhere in the Boston area, need no additional arguments to convince me that such a taskforce would be useful, and would participate in any group I thought had a reasonable chance of moderate success.

February 27 2011 Southern California Meetup

7 JenniferRM 24 February 2011 05:05AM

Procedural Knowledge Gaps

126 Alicorn 08 February 2011 03:17AM

I am beginning to suspect that it is surprisingly common for intelligent, competent adults to somehow make it through the world for a few decades while missing some ordinary skill, like mailing a physical letter, folding a fitted sheet, depositing a check, or reading a bus schedule.  Since these tasks are often presented atomically - or, worse, embedded implicitly into other instructions - and it is often possible to get around the need for them, this ignorance is not self-correcting.  One can Google "how to deposit a check" and similar phrases, but the sorts of instructions that crop up are often misleading, rely on entangled and potentially similarly-deficient knowledge to be understandable, or are not so much instructions as they are tips and tricks and warnings for people who already know the basic procedure.  Asking other people is more effective because they can respond to requests for clarification (and physically pointing at stuff is useful too), but embarrassing, since lacking these skills as an adult is stigmatized.  (They are rarely even considered skills by people who have had them for a while.)

This seems like a bad situation.  And - if I am correct and gaps like these are common - then it is something of a collective action problem to handle gap-filling without undue social drama.  Supposedly, we're good at collective action problems, us rationalists, right?  So I propose a thread for the purpose here, with the stipulation that all replies to gap announcements are to be constructive attempts at conveying the relevant procedural knowledge.  No asking "how did you manage to be X years old without knowing that?" - if the gap-haver wishes to volunteer the information, that is fine, but asking is to be considered poor form.

(And yes, I have one.  It's this: how in the world do people go about the supposedly atomic action of investing in the stock market?  Here I am, sitting at my computer, and suppose I want a share of Apple - there isn't a button that says "Buy Our Stock" on their website.  There goes my one idea.  Where do I go and what do I do there?)

January 2011 Southern California Meetup

8 JenniferRM 18 January 2011 04:50AM

There will be a meetup for Southern California this Sunday, January 23, 2011, starting at 4PM and running for three to five hours.  The meetup is happening at Marco's Trattoria.  The address is:

8200 Santa Monica Blvd
West Hollywood, CA 90046

If all the people (including guests and high end group estimates) show up we'll be at the limit of the space with 24 attendees.  Previous meetups had room for walk-ins and future meetups should as well, but this one is full.  If you didn't RSVP in time for this one but want to get an email reminder when the February meetup is scheduled send me a PM with contact info.

continue reading »

Levels of communication

52 Kaj_Sotala 23 March 2010 09:32PM

Communication fails when the participants in a conversation aren't talking about the same thing. This can be something as subtle as having slightly differing mappings of verbal space to conceptual space, or it can be a question of being on entirely different levels of conversation. There are at least four such levels: the level of facts, the level of status, the level of values, and the level of socialization. I suspect that many people with rationalist tendencies tend to operate primarily on the fact level and assume others to be doing so as well, which might lead to plenty of frustration.

The level of facts. This is the most straightforward one. When everyone is operating on the level of facts, they are detachedly trying to discover the truth about a certain subject. Pretty much nothing else than the facts matter.

The level of status. Probably the best way of explaining what happens when everyone is operating on the level of status is the following passage, originally found in Keith Johnstone's Impro

continue reading »

Blackmail, Nukes and the Prisoner's Dilemma

20 Stuart_Armstrong 10 March 2010 02:58PM

This example (and the whole method for modelling blackmail) is due to Eliezer. I have just recast it in my own words.

We join our friends, the Countess of Rectitude and Baron Chastity, in bed together. Having surmounted their recent difficulties (she paid him, by the way), they decide to relax with a good old game of prisoner's dilemma. The payoff matrix is as usual:

(Payoffs listed as (Baron, Countess).)

                      Countess: Cooperate    Countess: Defect
Baron: Cooperate      (3,3)                  (0,5)
Baron: Defect         (5,0)                  (1,1)

Were they both standard game theorists, they would both defect, and the payoff would be (1,1). But recall that the baron occupies an epistemic vantage over the countess. While the countess only gets to choose her own action, he can choose from among four more general tactics:

  1. (Countess C, Countess D)→(Baron D, Baron C)   "contrarian" : do the opposite of what she does
  2. (Countess C, Countess D)→(Baron C, Baron C)   "trusting soul" : always cooperate
  3. (Countess C, Countess D)→(Baron D, Baron D)   "bastard" : always defect
  4. (Countess C, Countess D)→(Baron C, Baron D)   "copycat" : do whatever she does

Recall that he counterfactually considers what the countess would do in each case, while assuming that the countess considers his decision a fixed fact about the universe. Were he to adopt the contrarian tactic, she would maximise her utility by defecting, giving a payoff of (0,5). Similarly, she would defect in both trusting soul and bastard, giving payoffs of (0,5) and (1,1) respectively. If he goes for copycat, on the other hand, she will cooperate, giving a payoff of (3,3).
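
The reasoning in the previous paragraph can be replayed mechanically. A small sketch (names and layout are mine; the payoffs are the matrix above): for each of the Baron's four tactics, find the Countess's best response treating the tactic as a fixed fact, then read off the payoffs.

```python
# Payoffs (Baron, Countess) for the ordinary prisoner's dilemma above.
PAYOFF = {
    ("C", "C"): (3, 3),
    ("C", "D"): (0, 5),
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),
}

# The Baron's four tactics: a map from the Countess's move to his move.
TACTICS = {
    "contrarian":    {"C": "D", "D": "C"},
    "trusting soul": {"C": "C", "D": "C"},
    "bastard":       {"C": "D", "D": "D"},
    "copycat":       {"C": "C", "D": "D"},
}

def outcome(tactic):
    """The Countess picks the move maximizing her own payoff, taking the tactic as fixed."""
    her_move = max("CD", key=lambda her: PAYOFF[(tactic[her], her)][1])
    return her_move, PAYOFF[(tactic[her_move], her_move)]

for name, tactic in TACTICS.items():
    her_move, payoffs = outcome(tactic)
    print(f"{name:13s} -> Countess plays {her_move}, payoff (Baron, Countess) = {payoffs}")
```

Only copycat makes cooperation the Countess's best response, reproducing the (3,3) outcome described above.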

Thus when one player occupies a superior epistemic vantage over the other, they can do better than standard game theorists, and manage to both cooperate.

"Isn't it wonderful," gushed the Countess, pocketing her 3 utilitons and lighting a cigarette, "how we can do such marvellously unexpected things when your position is over mine?"

continue reading »

The Blackmail Equation

13 Stuart_Armstrong 10 March 2010 02:46PM

This is Eliezer's model of blackmail in decision theory, presented at the recent workshop at SIAI and filtered through my own understanding. Eliezer's help and advice were much appreciated; any errors herein are my own.

The mysterious stranger blackmailing the Countess of Rectitude over her extra-marital affair with Baron Chastity doesn't have to run a complicated algorithm. He simply has to credibly commit to the course of action Z:

"If you don't give me money, I will reveal your affair."

And then, generally, the Countess forks over the cash. Which means the blackmailer never does reveal the details of the affair, so that threat remains entirely counterfactual/hypothetical. Even if the blackmailer is Baron Chastity, and the revelation would be devastating for him as well, this makes no difference at all, as long as he can credibly commit to Z. In the world of perfect decision makers, there is no risk to doing so, because the Countess will hand over the money, so the Baron will not take the hit from the revelation.

Indeed, the baron could replace "I will reveal our affair" with Z="I will reveal our affair, then sell my children into slavery, kill my dogs, burn my palace, and donate my organs to medical science while boiling myself in burning tar" or even "I will reveal our affair, then turn on an unfriendly AI", and it would only matter insofar as it affected his ability to pre-commit to Z. If the Baron can commit to counterfactually doing Z, then he never has to do Z (as the countess will pay him the hush money), so it doesn't matter how horrible the consequences of Z are to himself.

To get some numbers in this model, assume the countess can either pay up or not do so, and the baron can reveal the affair or keep silent. The payoff matrix could look something like this:

(Payoffs listed as (Baron, Countess).)

                  Countess: Pay      Countess: Not pay
Baron: Reveal     (-90,-110)         (-100,-100)
Baron: Silent     (10,-10)           (0,0)
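
With these numbers the commitment logic reduces to a single comparison (illustrative arithmetic only):

```latex
% Suppose the Baron credibly commits to the conditional tactic "Reveal iff Not pay".
% Taking that commitment as a fixed fact, the Countess compares her two options:
u_C(\text{Pay}) = -10 \;>\; u_C(\text{Not pay}) = -100
% so she pays; the realized outcome is (Silent, Pay) with payoffs (10, -10),
% and the threatened revelation never actually happens.
```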

continue reading »

A Suite of Pragmatic Considerations in Favor of Niceness

82 Alicorn 05 January 2010 09:32PM

tl;dr: Sometimes, people don't try as hard as they could to be nice.  If being nice is not a terminal value for you, here are some other things to think about which might induce you to be nice anyway.

There is a prevailing ethos in communities similar to ours - atheistic, intellectual groupings, who congregate around a topic rather than simply to congregate - and this ethos says that it is not necessary to be nice.  I'm drawing on a commonsense notion of "niceness" here, which I hope won't confuse anyone (another feature of communities like this is that it's very easy to find people who claim to be confused by monosyllables).  I do not merely mean "polite", which can resemble niceness while the person to whom the politeness is directed is in earshot, but tends to be far more superficial.  I claim that this ethos is mistaken and harmful.  In so claiming, I do not also claim that I am always perfectly nice; I claim merely that I and others have good reasons to try to be.

The dispensing with niceness probably springs in large part from an extreme rejection of the ad hominem fallacy and of emotionally-based reasoning.  Of course someone may be entirely miserable company and still have brilliant, cogent ideas; to reject communication with someone who just happens to be miserable company, in spite of their brilliant, cogent ideas, is to miss out on the (valuable) latter because of a silly emotional reaction to the (irrelevant) former.  Since the point of the community is ideas; and the person's ideas are good; and how much fun they are to be around is irrelevant - well, bringing up that they are just terribly mean seems trivial at best, and perhaps an invocation of the aforementioned fallacy.  We are here to talk about ideas!  (Interestingly, this same courtesy is rarely extended to appalling spelling.)

The ad hominem fallacy is a fallacy, so this is a useful norm up to a point, but not up to the point where people who are perfectly capable of being nice, or learning to be nice, neglect to do so because it's apparently been rendered locally worthless.  I submit that there are still good, pragmatic reasons to be nice, as follows.  (These are claims about how to behave around real human-type persons.  Many of them would likely be obsolete if we were all perfect Bayesians.)

continue reading »
