Comment author: RowanE 16 July 2015 10:30:35AM 2 points [-]

I think the word "reasonable" is used enough as an applause light rather than an actual descriptor that it should probably be put in "scare quotes" to defuse it through most of this essay.

Comment author: Satoshi_Nakamoto 16 July 2015 12:13:10PM 0 points [-]

Done. Thanks for the suggestion.

Comment author: abramdemski 15 July 2015 07:34:59PM *  1 point [-]

In hindsight, writing a post about Rational vs Reasonable has the unfortunate effect of causing people to ask which is better and how to choose between them, as well as risking that people will accuse one another of being reasonable rather than rational, and things of that nature.

These are not good outcomes.

There's a very general issue with "X vs Y" posts, which is that they make the distinction look contentious rather than merely useful. Brienne wrote about this in connection with her post on Ask Culture vs Guess Culture. A similar failure mode occurs when people debate epistemic vs instrumental rationality.

As nyralech replied, the answer is to use what best serves your goals. The two are not opposed; nor are they allied; nor is it a balancing act between them. Where being reasonable does not serve rationality, the Way opposes your reasonableness; where being reasonable does serve rationality the Way opposes your unreasonableness. "The primary thing when you take a sword in your hands is your intention to cut the enemy, whatever the means." etc.

Comment author: Satoshi_Nakamoto 16 July 2015 03:25:21AM 1 point [-]

I wrote a post based on this; see The Just-Be-Reasonable Predicament. The just-be-reasonable predicament occurs when, in order to be seen as reasonable, you must do something irrational or non-optimal.

The Just-Be-Reasonable Predicament

5 Satoshi_Nakamoto 16 July 2015 03:17AM

If people don't see you as being “reasonable”, then you are likely to have troublesome interactions with them. Therefore, it is often valuable to be seen as “reasonable”. Reasonableness is a general perception that is determined by the social context and norms. It includes, but is not limited to, being seen as fair, sensible and socially cooperative. In summary, we can describe it as being noticeably rational in socially acceptable ways. What is “reasonable” and what is rational often converge, but it is important to note that they can also diverge. For example, it was deemed “unreasonable” to free African-Americans from slavery because slavery was deemed necessary for the economy of the South.

 

The just-be-reasonable predicament occurs when you are chastised for doing something that you believe to be more rational and/or optimal than the norm or than what is expected or desired. The chastiser has not considered, cannot fathom, or does not care that what you are doing or want to do might be more rational and/or optimal than the default course of action. The predicament is similar to the one described in lonely dissent, in that you must choose between taking what you believe to be the most rational and/or optimal course of action and taking the one that will be met with the least social disapproval.

 

An example of this predicament is when you are playing a game with a scrub (a player who is handicapped by self-imposed rules that the game knows nothing about). The scrub might criticise you for continuing to use the best strategy that you are aware of, but that they think is cheap. If you try to argue that a strategy is a strategy, then the argument is likely to end with the scrub getting angry and saying the equivalent of “just be reasonable”, which basically means: “why can’t you just follow what I see as the rules and the way things should be done?” When you encounter this predicament, you need to weigh up the costs of leaving the Way, or of choosing a non-optimal action, against the costs of facing potential social disapproval. The Way opposes being “reasonable” when it is not aligned with being rational. In the scrub situation, the main benefit of being “reasonable” is that you are less likely to annoy the scrub; the main cost is that you are giving up a way for both you and the scrub to improve. The scrub will never learn how to counter the “cheap” strategy, and you won’t be looking for other strategies, since you know you can always fall back on the “cheap” strategy if you want to win.

 

In general, you have three choices for how to deal with this predicament: you can be “reasonable”, explain yourself, or try to ignore it. Ignoring it means that you continue or go ahead with the rational/optimal course of action that you had planned, and that you also try to change the conversation or situation so that you don't keep getting chastised. Which choice you should make depends on the corrigibility and state of mind of the person that you need to explain yourself to, as well as on how much being “reasonable” differs from being rational. If we reconsider the scrub situation, then we can think of times when you should, or at least most people would, avoid the so-called “cheap” strategy. Maybe it is a bug in the game, or it’s overpowered, or your goal is fun rather than becoming better at the game. (Note, though, that becoming better at a game often makes it more fun.)

 

The just-be-reasonable predicament is especially troubling because, as with the counter man syndrome, repeated erroneous thinking can become embedded into how you reason. In this case, repeated acquiescence can embed irrational and/or non-optimal ways of thinking into your thought processes.

 

If you continually encounter the just-be-reasonable predicament, then it indicates that your values are out of alignment with those of the person that you are dealing with. That is, they don’t value rationality, but just want you to do things in the way that they expect and want. Getting them to adopt a more rational way of doing things will often be hard, because it involves convincing them that the paradigm from which they derive their beliefs about what is “reasonable” is non-optimal.


Situations involving this predicament come in four main varieties:

  • You actually should just be “reasonable” – this occurs when you are being un-“reasonable” not because the most rational or optimal thing is opposed to what is currently considered “reasonable”, but because you are being irrational. If this is the case, then make sure that you don’t try to rationalize; instead just be “reasonable”, or try to ignore the situation so that you can think about it later when you are in a better state of mind.
  • Someone wants you to be “reasonable”, but hasn’t really thought about, or doesn’t care about, whether this is rational – this might occur when someone is angry at you because you are not following what they think is the right way to do things. It is important in this situation not to use the predicament as a way of avoiding thoughts about how you might be wrong or what the situation might look like from the other person’s perspective. This is important because, ultimately, you want to change the other person’s opinion about what is “reasonable” so that it matches up more with what is rational. To do this well you often need to be empathetic, understanding and strategic. You need to be strategic because sometimes you may need to ignore the situation, or do what they think is “reasonable”, so that you can reapproach the topic later without it being contaminated with negative valence. A good idea, if you want to avoid making the other person feel like you are imposing, is to get them to agree to try out your more rational method on a trial basis. This is also useful for two other reasons: what you think is more rational may turn out not to be, and the “reasonable” way of doing things may, on reflection, turn out to be more rational than you thought. Something additional to consider is that everyone has different dispositions, propensities and tendencies, and what might be the most optimal strategy for you might not be for someone else. If this is the case, then don’t try to change their strategy; just try to explain why you want to use yours.
  • Someone is telling you to be “reasonable” as a power play or as a method of control – this situation happens when someone is using their power to make you follow their way of doing things. This situation requires a different tack than the last one, because your strategies to explain yourself probably won’t work. This is because being told to “just be ‘reasonable’” is a method that they are using to put you in your place. The other person is not interested in whether the “reasonable” thing is actually rational. They just want you to do something that benefits them. This kind of situation is tough to deal with. You may need to ignore and avoid them, or, if you do try to explain yourself, make sure that you get the support of others first.
  • You don’t want to explain yourself – sometimes we notice that what people think is “reasonable” is not actually rational, but we do the “reasonable” thing anyway because the effort or potential cost involved in explaining ourselves is judged to be too high. In this case, you either have to be “reasonable” or try to avoid the issue. Note that this solution is not optimal: avoiding something when you have no evidence that it will go away is a choice to face the same or a worse situation again in the future, and accepting an unsavoury situation in resignation is letting fear control and limit you.

If you encounter the just-be-reasonable predicament, I recommend running through the below process:  

 

Some other types of this predicament would be “just do as you’re told”, “why can’t you just conform to my belief of what is the best course of action for you here”, and any other type of social disapproval, implicit or explicit, that you get for doing what is rational or optimal rather than what is expected or the default.

Comment author: Satoshi_Nakamoto 13 July 2015 10:27:00AM *  1 point [-]

Is this a decent summary of what you mean by 'reasonable': noticeably rational in socially acceptable ways, i.e. you use reasons and arguments that are in accordance with group norms?

A reasonable person:

  • can explain their reasoning
  • is seen as someone who will update their beliefs based on socially acceptable evidence
  • is seen to act in accordance with social norms even when the norms are irrational. This means that their behaviour and reasoning are seen as socially acceptable and/or praiseworthy
Comment author: turchin 13 June 2015 10:03:08AM 0 points [-]

Sent 150 USD to Against Malaria foundation.

The idea of dumbing people down is also present in the Bad plans section, under "limitation of human or collective intelligence"... But the main idea of preventing human extinction is, by definition, to ensure that at least several examples of Homo sapiens are still alive at any given point in time. It is not the best possible definition. It should also include posthumans if they are based on humans and share a lot of their properties (and, as Bostrom said, could realise the full human potential). In fact, we can't say what is really good before we solve the Friendly AI problem. And if we knew what is good, we could also say what the worst outcome is, and so what constitutes an existential catastrophe. But a real catastrophe that could happen in the 21st century is far from such sophisticated problems as determining ultimate good, human nature and full human potential. It is a clearly visible physical process of destruction.

There are some ideas for solving the problems of control from the bottom up, like the idea of a transparent society by David Brin, where vigilantes would scan the web and video sensors searching for terrorists. So it would not be hierarchical control but net-based, or peer-to-peer.

I like the two extra boxes, but for now I have already spent my prize budget twice, which unexpectedly puts me in a conflicted situation: as author of the map I want to make the best and most inclusive map, but as owner of the prize fund (which I pay from personal money earned selling art) I feel more screwy :)

Comment author: Satoshi_Nakamoto 14 June 2015 07:35:33AM 0 points [-]

Don’t worry about the money. Just like the comments if they are useful. In Technological precognition, does this cover time travel in both directions? That is, looking into the future and taking actions to change it, and also sending messages into the past. Also, what about making people more compliant and less aggressive, either by dulling or eliminating emotions in humans or by making people more like a hive mind?

Comment author: turchin 13 June 2015 05:59:31PM *  0 points [-]

A question: is it possible to create a risk control system which is not based on centralized power, in the same way that bitcoin is not based on central banking?

For example: local police could handle local crime and terrorists; local health authorities could find and prevent the spread of disease. If we have many x-risk peers, they could control their neighborhood in their professional space.

Counterexample: how could it help in situations like ISIS or another rogue state, which is (maybe) going to create a doomsday machine or a virus which will be used to blackmail or exterminate other countries?

Comment author: Satoshi_Nakamoto 14 June 2015 07:34:40AM 0 points [-]

Bitcoin is an electronic payment system based on cryptographic proof instead of trust. I think the big difference between it and the risk control system is the need for enforcement, i.e. changing what other people can and can’t do. There seem to be two components to the risk control system: prediction of what should be researched, and enforcement of this. The prediction component doesn’t need to come from a centralised power. It could just come from the scientific community. I would think that the enforcement would need to come from a centralised power. I guess that there does need to be a way to stop the centralized power from causing X-risks. Perhaps this could come from a localised and distributed effort. Maybe something like a better version of Anonymous.
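(To unpack "cryptographic proof instead of trust": below is a minimal proof-of-work sketch in Python. It is a toy illustration of the idea, not Bitcoin's actual protocol; the function names, the "proposed record" string and the difficulty value are made up for the example. The point is the asymmetry: producing the proof is costly, but anyone can verify it cheaply without trusting the prover.)

    import hashlib

    def proof_of_work(data: str, difficulty: int = 4) -> int:
        # Search for a nonce such that sha256(data + nonce) starts with
        # `difficulty` zero hex digits. Finding it takes many hash attempts.
        target = "0" * difficulty
        nonce = 0
        while True:
            digest = hashlib.sha256(f"{data}{nonce}".encode()).hexdigest()
            if digest.startswith(target):
                return nonce
            nonce += 1

    def verify(data: str, nonce: int, difficulty: int = 4) -> bool:
        # Checking a claimed proof costs a single hash -- no trust required.
        digest = hashlib.sha256(f"{data}{nonce}".encode()).hexdigest()
        return digest.startswith("0" * difficulty)

    nonce = proof_of_work("proposed record")
    print(nonce, verify("proposed record", nonce))  # costly to find, cheap to check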

Comment author: turchin 12 June 2015 07:44:19PM *  1 point [-]

I accepted your idea about replacing the word “robust” and will award the prize for it.

The main idea of this roadmap is to escape availability bias by listing all known ideas for x-risk prevention. This map will be accompanied by a map of all known x-risks, which is ready and will be published soon. More than 100 x-risks have been identified and evaluated.

The idea that some of the plans create their own risks is represented in this map with red boxes below plan A1.

But it may be possible to create a completely different map of future risks and prevention using a systems approach, or something like a scenario tree.

Yes, each plan is better at containing specific risks: A1 is better at containing biotech and nanotech risks, A2 is better for UFAI, A3 for nuclear war and biotech, and so on. So another map may be useful to match risks with prevention methods.

The timeline was already partly replaced with "steps", as suggested by "elo", who was awarded for it.

Phil Torres shows that Bostrom's classification of x-risks is not as good as it seems to be, in: http://ieet.org/index.php/IEET/more/torres20150121 So I prefer the notion of "human extinction risks" as clearer.

I still don't know how we could fix all the world system problems which are listed in your link without having control of most of the world, which returns us to plan A1.

In plans: 1. Isn't "voluntary or forced devolution" the same as "ludism" and "relinquishment of dangerous science", which are already in the plan?

  2. The idea of uploading was already suggested here in the form of "migrating into simulation" and was awarded.

  3. I think that "some movement or event that will cause a paradigmatic change so that humanity becomes more existentially-risk aware" is basically the same idea as "smaller catastrophe could help unite humanity (pandemic, small asteroid, local nuclear war)", but your wording is excellent.

  4. I think I should accept "dramatic social changes", as it could include many interesting but different topics: the demise of capitalism, hipster revolution, internet connectivity, the global village, the dissolution of nation states. I got many suggestions along this line and I could unite them under this topic.

Do you mean METI - messaging to the stars? Yes, it is dangerous, and we should do it only if everything else fails. That is why I put it into plan C. But, by the way, SETI is even more dangerous, as we could download an alien AI. I have an article about it here: http://lesswrong.com/lw/gzv/risks_of_downloading_alien_ai_via_seti_search/

Thanks for your suggestions, which were the first ones to imply building the map on completely different principles.

So in total for now I suggest 2 awards for you, plus one from Romashka: 150 USD in total. Your username suggests to me that you would prefer to keep your anonymity, so I could send money to a charity of your choice.

Comment author: Satoshi_Nakamoto 13 June 2015 05:18:02AM *  1 point [-]

In plans: 1. Isn't "voluntary or forced devolution" the same as "ludism" and "relinquishment of dangerous science", which are already in the plan?

I was thinking more along the lines of restricting the chance for divergence in the human species. I guess I am not really sure what it is that you are trying to preserve. What do you take to be humanness? Technological advances may allow us to alter ourselves so substantially that we become post-human or no longer human, for example through cybernetics or genetic engineering. "Ludism" and "relinquishment of dangerous science" are ways to restrict which technologies we use, but note that we would still be capable of using and creating these technologies. Devolution (perhaps there is a better word for it) would be something like the dumbing down of all or most humans so that they are no longer capable of using or creating the technologies that could make them less purely human.

I think that "some movement or event that will cause a paradigmatic change so that humanity becomes more existentially-risk aware" is basically the same idea as "smaller catastrophe could help unite humanity (pandemic, small asteroid, local nuclear war)", but your wording is excellent.

Yes, you are right. I guess I was implying man-made catastrophes which are created in order to cause a paradigmatic change, rather than natural ones.

I still don't know how we could fix all the world system problems which are listed in your link without having control of most of the world, which returns us to plan A1.

I'm not sure either. I would think you could do it by changing the way that politics works so that the policies implemented actually have empirical backing, based on what we know about systems. Perhaps this just means AI and improved computational modelling. This idea of needing control of the world seems extremely dangerous to me, although I suppose a top-down approach could solve the problems. I think that you should also think about what a good bottom-up approach would be. How do we make local communities and societies more resilient, economical and capable of facing potential X-risks?

In "survive the catastrophe" I would add two extra boxes:

  • Limit the impact of a catastrophe by implementing measures to slow its growth and reduce the areas it affects. For example, with pandemics you could improve the capacity for rapid production of vaccines in response to emerging threats, or create or grow stockpiles of important medical countermeasures.

  • Increase the time available for preparation by improving monitoring and early detection technologies. For example, with pandemics you could support general research on the magnitude of biosecurity risks and opportunities to reduce them, and improve and connect disease surveillance systems so that novel threats can be detected and responded to more quickly.

I could send money to a charity of your choice.

Send it to one of the charities here.

Comment author: Satoshi_Nakamoto 12 June 2015 02:39:09PM *  2 points [-]

I would use the word resilient rather than robust.

  • Robust: A system is robust when it can continue functioning in the presence of internal and external challenges without fundamental changes to the original system.

  • Resilient: A system is resilient when it can adapt to internal and external challenges by changing its method of operations while continuing to function. While elements of the original system are present, there is a fundamental shift in core activities that reflects adaptation to the new environment.

I think that it is a better idea to think about this from a systems perspective rather than in terms of the specific X-risks or plans that we know about or think are cool. We want to avoid the availability bias. I would assume that there are more X-risks and plans that we are unaware of than ones we are aware of.

I recommend adding in the risks and relating them to the plans, as most of your plans, if they fail, would lead to other risks. I would do this in a generic way. An example to demonstrate what I am talking about: take the risk of a tragedy of the commons and a plan to create a more capable type of intelligent life form that will uphold, improve and maintain the interests of humanity. This could be done using genetic engineering and AI to create new life forms, while nanotechnology and biotechnology could be used to change existing humans. The potential risk of this plan is that it leads to the creation of other intelligent species that will inevitably compete with humans.

One more recommendation is to remove the timeline from the roadmap and just have the risks and plans. The timeline would be useful in the explanatory text you are creating. I like this categorisation of X-risks:

  • Bangs (extinction) – Earth-originating intelligent life goes extinct in relatively sudden disaster resulting from either an accident or a deliberate act of destruction.

  • Crunches (permanent stagnation) – The potential of humankind to develop into posthumanity is permanently thwarted although human life continues in some form.

  • Shrieks (flawed realization) – Some form of posthumanity is attained but it is an extremely narrow band of what is possible and desirable.

  • Whimpers (subsequent ruination) – A posthuman civilization arises but evolves in a direction that leads gradually but irrevocably to either the complete disappearance of the things we value or to a state where those things are realized to only a minuscule degree of what could have been achieved.

I don’t want this post to be too long, so I have just listed the common systems problems below:

  • Policy Resistance – Fixes that Fail

  • Tragedy of the Commons

  • Drift to Low Performance

  • Escalation

  • Success to the Successful

  • Shifting the Burden to the Intervenor—Addiction

  • Rule Beating

  • Seeking the Wrong Goal

  • Limits to Growth

Four additional plans are:

  1. (in Controlled regression) voluntary or forced devolution

  2. uploading human consciousness into a supercomputer

  3. some movement or event that will cause a paradigmatic change so that humanity becomes more existentially-risk aware

  4. dramatic societal changes to avoid some existential risks, like the overuse of resources. An example of this is in the book The World Inside.

You talk about being saved by non-human intelligence, but it is also possible that SETI could actually cause hostile aliens to find us. A potential plan might be to stop SETI and try to hide. The opposite plan (seeking out aliens) seems as plausible though.
