Comment author: AnnaSalamon 16 January 2016 02:19:39AM 3 points

It's true there are situations in which this isn't the case, but I think they're rare enough that it's worth acknowledging the value of hesitation in many cases and trying to be clear about distinguishing valid from invalid hesitation.

It seems to me that thinking through uncertainties and scenarios is often really, really important, as is making specific safeguards that will help you if your model turns out to be wrong. But I claim that there is a different meaning of "hesitation" that is more like "keeping most of my psyche in a state of roadblock while I kind-of hang out with my friend while also feeling anxious about my paper", which is very different from actually concretely picturing the two scenarios and figuring out how to create an outcome I'd like given both possibilities. I'm not expressing it well, but does the distinction I'm trying to gesture at make sense?

Comment author: 27chaos 16 January 2016 11:28:30AM 0 points

Yup.

Comment author: 27chaos 16 January 2016 12:45:44AM 1 point

Either way, full speed was best. My mind had been naively averaging two courses of action -- the thought was something like: "maybe I should go forward, and maybe I should go backward. So, since I'm uncertain, I should go forward at half-speed!" But averages don't actually work that way.

Averages don't work that way because you did the math wrong: you should have stopped! I understand the point that you're trying to make with this post, but there are many cases in which uncertainty really does mean you should stop and think, or hedge your bets, rather than go full speed ahead. It's true there are situations in which this isn't the case, but I think they're rare enough that it's worth acknowledging the value of hesitation in many cases and trying to be clear about distinguishing valid from invalid hesitation.
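To make the disagreement concrete, here is a toy decision model with invented payoffs (the numbers, the "half-speed = half of forward's payoff" assumption, and the delay cost are all made up for illustration). It shows why averaging actions is never the right compromise: the expected value of the half-speed action always lies between the extremes, so it is weakly dominated, while at maximal uncertainty the option of stopping to gather information wins outright.

```python
# Toy model: two possible worlds ("forward is right" with probability p,
# "backward is right" with probability 1 - p) and four candidate actions.
# All payoffs are invented for illustration.

def ev(payoff_if_forward_right, payoff_if_backward_right, p):
    """Expected value of an action given P(forward is right) = p."""
    return p * payoff_if_forward_right + (1 - p) * payoff_if_backward_right

def best_action(p, delay_cost=2.0):
    actions = {
        "forward":    ev(+10, -10, p),
        "backward":   ev(-10, +10, p),
        # The naive "average" of the two plans: half of forward's payoff.
        "half-speed": ev(+5, -5, p),
        # Stop, learn which direction is right, then act -- pay a delay cost.
        "stop&look":  10 - delay_cost,
    }
    return max(actions, key=actions.get), actions

# At p = 0.5 the right move in this model is to stop and look, not to
# split the difference; with near-certainty (p = 0.99), full speed wins.
choice_uncertain, evs = best_action(p=0.5)
choice_confident, _ = best_action(p=0.99)
```

Half-speed's expected value is exactly half of forward's, so it can never strictly beat both committed actions; "hedging" by blending plans is different from hedging by buying information.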

Comment author: 27chaos 15 January 2016 05:32:26AM *  0 points

It seems to me that we should be very liberal in this regard: biases which remain in the AI's model of SO+UO are likely to be minor ones (as major biases will have been stated by humans as things to avoid). These are biases so small that we're probably not aware of them. Compared with the possibility of losing something human-crucial we didn't think to state explicitly, I'd say the case is strong to err on the side of increased complexity/more biases and preferences allowed. Essentially, we're unlikely to have missed some bias we'd really care about eliminating, but very likely to have missed some preference we'd really miss if it were gone.

You frame the issue as though the cost of being liberal is that we'll have more biases preventing us from achieving our preferences, but I think this understates the difficulty. Precisely because it's difficult to distinguish biases from preferences, being liberal and accidentally preserving unnecessary biases amounts to adding entirely new values to human beings. We're not merely faced with biases that would function as instrumental obstacles to achieving our goals, but with direct changes to the endpoints of those goals.

In response to LessWrong 2.0
Comment author: 27chaos 06 December 2015 01:00:55AM 3 points

I like rationality quotes, so whatever happens I hope that stays alive in some form. Maybe it could move to /r/slatestarcodex.

Comment author: Vaniver 04 December 2015 12:58:17AM *  16 points

I have had on the back burner for... probably six months now a post on why I am turned off by / leery about EA, despite donating 10% of my income to charity, caring about x-risk, and so on. One of the reasons that post has stayed on the back burner is "Why Our Kind Can't Cooperate" plus "The Virtue of Silence": given how few of the issues are methodological, it seemed better to just silently let EA be, or swallow my disagreements and endorse it, than to spell out my disagreements and expect them to be taken seriously.

But this is suggesting to me that I probably should put them forward, in order to make this conversation easier if nothing else.

In response to comment by Vaniver on LessWrong 2.0
Comment author: 27chaos 06 December 2015 12:56:33AM 2 points

Please do.

Comment author: ingres 03 December 2015 07:24:50AM 13 points

Stretch goal: bake EA principles in from the start.

This would be a huge turnoff for many people, including myself.

In response to comment by ingres on LessWrong 2.0
Comment author: 27chaos 06 December 2015 12:55:48AM *  1 point

Same. I like my arguments modular. I say this despite liking EA a lot.

Comment author: 27chaos 02 December 2015 06:50:59PM 11 points

The key to avoiding rivalries is to introduce a new pole, which mediates your relationship to the antagonist. For me this pole is often Scripture. I renounce my claim to be thoroughly aligned with the pole of Scripture and refocus my attention on it, using it to mediate my relationship with the antagonistic party. Alternatively, I focus on a non-aggressive third party. You may notice that this same pattern is observed in the UK parliamentary system of the House of Commons, for instance. MPs don’t directly address each other: all of their interactions are mediated by and addressed to a non-aggressive, non-partisan third party – the Speaker. This serves to dampen antagonisms and decrease the tendency to fall into rivalry. In a conversation where such a ‘Speaker’ figure is lacking, you need mentally to establish and situate yourself relative to one. For me, the peaceful lurker or eavesdropper, Christ, or the Scripture can all serve in such a role. As I engage directly with this peaceful party and my relationship with the aggressive party becomes mediated by this party, I find it so much easier to retain my calm.

Alastair Roberts

Comment author: Jurily 17 November 2015 01:54:54PM -1 points

The claim is not observable in any way and offers no testable predictions or anything that even remotely sounds like advice. It's unprovable because it doesn't talk about objective reality.

Comment author: 27chaos 18 November 2015 06:06:10AM 0 points

There's a sequence about how the scientific method is less powerful than Bayesian reasoning that you should probably read.

Comment author: Silver_Swift 05 November 2015 12:49:41PM 14 points

Similarly:

I've never seen the Icarus story as a lesson about the limitations of humans. I see it as a lesson about the limitations of wax as an adhesive.

Randall Munroe

Comment author: 27chaos 16 November 2015 11:17:39PM *  5 points

Maybe hubris means not knowing the capabilities of one's tools.

Edit: I've just realized that in that sense, underestimating the capabilities of one's tools and refusing to try would also be a sin. If you believe that Fate itself is opposed to any attempt by men to fly, that's more arrogant a belief than thinking Fate is indifferent. I like this implication.

Comment author: Lumifer 24 September 2015 02:39:55PM 1 point

I would probably argue that the complexity of explanations should match the complexity of the phenomenon you're trying to describe.

Comment author: 27chaos 05 November 2015 09:58:43PM *  0 points

After a couple months more thought, I still feel as though there should be some more general sense in which simplicity is better. Maybe because it's easier to find simple explanations that approximately match complex truths than to find complex explanations that approximately match simple truths, so even when you're dealing with a domain filled with complex phenomena it's better to use simplicity. On the other hand, perhaps the notion that approximations matter or can be meaningfully compared across domains of different complexity is begging the question somehow.
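One way to make the first half of that intuition concrete (the "complex truth" function and all numbers here are invented for illustration): a two-parameter straight line can be fit in closed form to samples of an arbitrarily wiggly function, and its approximation error is easy to measure. The search over simple explanations is tiny and tractable, even though the truth being approximated is complicated.

```python
import math

# "Complex truth": a wiggly function we pretend not to know the formula for.
def truth(x):
    return math.sin(3 * x) + 0.3 * math.sin(17 * x)

xs = [i / 50 for i in range(101)]  # sample points on [0, 2]
ys = [truth(x) for x in xs]

# Simple explanation: the best straight line, found by closed-form least squares.
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
intercept = my - slope * mx

# Mean absolute error of the simple model against the complex truth.
line_err = sum(abs(y - (slope * x + intercept)) for x, y in zip(xs, ys)) / n
```

The line is only an approximation, but its error is modest and bounded, and finding it required searching a two-parameter space; nothing analogous lets you cheaply recover a complex explanation from a simple truth.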
