Comment author: Glen 08 February 2016 08:31:38PM 5 points [-]

That's not even true, though. If you are kidnapped and then tortured, you are not remotely in control of your own happiness, to take the most obvious extreme example. Even in more mundane situations, people can be trapped in terrible circumstances where cruel people have power over them. If you are working a minimum wage job, with bills arriving seemingly every day that you can only cover by working 18-hour days before collapsing exhausted and doing it all again in the morning, there is very little you can do about it. Now, if a supervisor at one of your jobs is a petty tyrant who makes you miserable, what choice do you have that would increase your happiness?

I see what the quote is trying to say, as a call to action to change your own life, but it simply isn't true. It also fails the false wisdom reversal test, in that a quote saying "You have no true control over your own happiness; therefore, you must accept your lot in life with all the grace you can muster" sounds just as deep and helpful.

Comment author: 27chaos 09 February 2016 12:10:22AM 3 points [-]

I am somewhat uncertain about whether people who are kidnapped and tortured are in control of their happiness. I know there are at least a few people who've been in those situations or similar ones, like the Holocaust, who report that they retained some control over their own thoughts and perspective and this was a source of comfort and strength to them. I think it is possible that people who are tortured are in control of their own happiness, but they generally tend to make the choice to break.

One example that often comes up in these discussions is clinical depression, which I have. From introspection, it feels both true and untrue that I have control over my happiness. I can recall occasions on which I have consciously chosen to lie in bed and be unhappy, and I can also recall occasions on which I have consciously chosen to uproot myself from misery. However, there are also occasions where I've attempted to do this but failed. I think the answer to our dilemma lies in compatibilism: we are in control in the sense that what happens inside our heads matters, but not in the sense that we can transcend our physical limitations and become omnipotent.

Also, it was listed as an instrumental rationality quote.

All of that said, I downvoted the original comment. While I think it is a defensible point of view, I want rationality quotes that are insightful and compelling, not ones that regurgitate conventional wisdom which some people will automatically believe while others will not.

Comment author: 27chaos 07 February 2016 10:38:29PM 0 points [-]

The actual developments of society during this period were determined, not by a battle of conflicting ideals, but by the contrast between an existing state of affairs and that one ideal of a possible future society which the socialists alone held up before the public. Very few of the other programs which offered themselves provided genuine alternatives. Most of them were mere compromises or half-way houses between the more extreme types of socialism and the existing order. All that was needed to make almost any socialist proposal appear reasonable to these "judicious" minds who were constitutionally convinced that the truth must always lie in the middle between the extremes, was for someone to advocate a sufficiently more extreme proposal. There seemed to exist only one direction in which we could move, and the only question seemed to be how fast and how far the movement should proceed.

FA Hayek, Intellectuals and Socialism.

The warning against the golden mean fallacy is useful but standard. What I like best about this quote is that it brought to my attention the importance of constructive imagination in political reform. I think this implies we'll get more and better thinking at the margins of policy if there are many different views about what policy's grand goals ought to be.

Comment author: AnnaSalamon 16 January 2016 02:19:39AM 3 points [-]

It's true there are situations in which this isn't the case, but I think they're rare enough that it's worth acknowledging the value of hesitation in many cases and trying to be clear about distinguishing valid from invalid hesitation.

It seems to me that thinking through uncertainties and scenarios is often really really important, as is making specific safeguards that will help you if your model turns out to be wrong; but I claim that there is a different meaning of "hesitation" that is like "keeping most of my psyche in a state of roadblock while I kind-of hang out with my friend while also feeling anxious about my paper", or something, that is very different from actually concretely picturing the two scenarios, and figuring out how to create an outcome I'd like given both possibilities. I'm not expressing it well, but does the distinction I am trying to gesture at make sense?

Comment author: 27chaos 16 January 2016 11:28:30AM 0 points [-]

Yup.

Comment author: 27chaos 16 January 2016 12:45:44AM 1 point [-]

Either way, full speed was best. My mind had been naively averaging two courses of action -- the thought was something like: "maybe I should go forward, and maybe I should go backward. So, since I'm uncertain, I should go forward at half-speed!" But averages don't actually work that way.

Averages don't work that way because you did the math wrong: you should have stopped! I understand the point that you're trying to make with this post, but there are many cases in which uncertainty really does mean you should stop and think, or hedge your bets, rather than go full speed ahead. It's true there are situations in which this isn't the case, but I think they're rare enough that it's worth acknowledging the value of hesitation in many cases and trying to be clear about distinguishing valid from invalid hesitation.

Comment author: 27chaos 15 January 2016 05:32:26AM *  0 points [-]

It seems to me that we should be very liberal in this regard: biases which remain in the AI's model of SO+UO are likely to be minor biases (as major biases will have been stated by humans as things to avoid). These are biases so small that we're probably not aware of them. Compared with the possibility of losing something human-crucial we didn't think of explicitly stating, I'd say the case is strong to err on the side of increased complexity/more biases and preferences allowed. Essentially, we're unlikely to have missed some biases we'd really care about eliminating, but very likely to have missed some preference we'd really miss if it were gone.

You frame the issue as though the cost of being liberal is that we'll have more biases preventing us from achieving our preferences, but I think this understates the difficulty. Precisely because it's difficult to distinguish biases from preferences, accidentally preserving unnecessary biases is equivalent to being liberal and unnecessarily adding entirely new values to human beings. We're not merely faced with biases that would function as instrumental difficulties to achieving our goals, but with direct end-point changes to those goals.

In response to LessWrong 2.0
Comment author: 27chaos 06 December 2015 01:00:55AM 3 points [-]

I like rationality quotes, so whatever happens I hope that stays alive in some form. Maybe it could move to /r/slatestarcodex.

Comment author: Vaniver 04 December 2015 12:58:17AM *  16 points [-]

I have had on the back burner for... probably six months now a post on why I am turned off by / leery about EA, despite donating 10% of my income to charity, caring about x-risk, and so on. One of the reasons that post has stayed on the back burner is "Why Our Kind Can't Cooperate" plus "The Virtue of Silence"--given how few of the issues are methodological, better to just silently let EA be, or swallow my disagreements and endorse it, than spell out my disagreements and expect them to be taken seriously.

But this is suggesting to me that I probably should put them forward, in order to make this conversation easier if nothing else.

In response to comment by Vaniver on LessWrong 2.0
Comment author: 27chaos 06 December 2015 12:56:33AM 2 points [-]

Please do.

Comment author: ingres 03 December 2015 07:24:50AM 13 points [-]

Stretch goal: bake EA principles in from the start.

This would be a huge turnoff for many people, including myself.

In response to comment by ingres on LessWrong 2.0
Comment author: 27chaos 06 December 2015 12:55:48AM *  1 point [-]

Same. I like my arguments modular. I say this despite liking EA a lot.

Comment author: 27chaos 02 December 2015 06:50:59PM 11 points [-]

The key to avoiding rivalries is to introduce a new pole, which mediates your relationship to the antagonist. For me this pole is often Scripture. I renounce my claim to be thoroughly aligned with the pole of Scripture and refocus my attention on it, using it to mediate my relationship with the antagonistic party. Alternatively, I focus on a non-aggressive third party. You may notice that this same pattern is observed in the UK parliamentary system of the House of Commons, for instance. MPs don’t directly address each other: all of their interactions are mediated by and addressed to a non-aggressive, non-partisan third party – the Speaker. This serves to dampen antagonisms and decrease the tendency to fall into rivalry. In a conversation where such a ‘Speaker’ figure is lacking, you need mentally to establish and situate yourself relative to one. For me, the peaceful lurker or eavesdropper, Christ, or the Scripture can all serve in such a role. As I engage directly with this peaceful party and my relationship with the aggressive party becomes mediated by this party, I find it so much easier to retain my calm.

Alastair Roberts

Comment author: Jurily 17 November 2015 01:54:54PM -1 points [-]

The claim is not observable in any way and offers no testable predictions or anything that even remotely sounds like advice. It's unprovable because it doesn't talk about objective reality.

Comment author: 27chaos 18 November 2015 06:06:10AM 0 points [-]

There's a sequence about how the scientific method is less powerful than Bayesian reasoning that you should probably read.
