
Comment author: gjm 13 February 2017 03:17:34AM 1 point

It's not clear to me that a lack of fixed rules has that consequence. Why do you think that?

Comment author: peter_hurford 14 February 2017 04:37:45AM 0 points

It seems to have had consequences for at least one poster (namely, the OP).

Comment author: ChristianKl 02 February 2017 07:59:37PM 1 point

There are no fixed rules. There are values and value judgments. Don't optimize for rules; optimize for what moves LW forward.

Comment author: peter_hurford 11 February 2017 06:01:42PM 0 points

I think we should change this, because a lack of fixed rules makes LW pretty hard to use and helps keep it dead.

Comment author: peter_hurford 29 December 2016 06:48:32PM 0 points

This is pretty cool -- I like the write-up. I don't mean to pry into your life, but I would find it interesting to see an example of how you answer these questions. It would help me internalize the process more.

Comment author: peter_hurford 29 December 2016 06:47:15PM 0 points

What category does writing posts go under? I'm impressed you can do a day job, write posts, and still have a lot of messing around time! :)

Comment author: Elo 29 December 2016 12:21:06AM 0 points

fixed

Comment author: peter_hurford 29 December 2016 06:46:04PM 0 points

10:20-1 work meeting (1hr40mins)

Still nitpicking: 10:20-1 is 2hr 40min.
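
(A minimal sketch of that duration arithmetic in Python, using the times from the quoted entry and assuming "1" means 1pm:)

    from datetime import datetime

    # Times from the quoted schedule entry "10:20-1 work meeting"
    start = datetime.strptime("10:20", "%H:%M")
    end = datetime.strptime("13:00", "%H:%M")  # 1pm written in 24-hour form

    print(end - start)  # 2:40:00, i.e. 2hr 40min rather than the listed 1hr 40min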

Comment author: Elo 29 December 2016 12:32:17AM 1 point

The trouble with this information (and exercises of this type) is that you always had it available to you, but never laid out plainly on one page. There is an insight to be gained just from being able to do that.

That doesn't answer the question fully. This information also helps to inform other tasks and processes, for example in the "Try this" section of this post: http://bearlamp.com.au/exploration-exploitation-problems/

The third thing it does is help defeat a System 1/System 2 incongruity. Your System 2 knows that these are all the tasks you spend your time on, so in order to change your System 1's mind about what you want to do with your time, you inform your System 1 that there is no time that has sneakily "escaped" your view, fallen down the back of the couch, or is somehow "extra" beyond what you already have. This is what I consider the most powerful insight of this process.

This is hopefully also explained in the next post in the series - http://bearlamp.com.au/bargaining-trade-offs-in-your-brain/ (this paragraph was added to that post because I really liked the way I described it to you :) Thanks! )

Comment author: peter_hurford 29 December 2016 06:34:25PM 0 points

Ok, that's pretty cool. Thanks!

Comment author: peter_hurford 28 December 2016 04:12:09AM 0 points

I'd be curious to hear more about what you did with this information once you had it.

Comment author: peter_hurford 28 December 2016 04:10:51AM * 0 points

Nitpick:

10:20-1 work meeting (40mins)

1-1:30 lunch (30mins)

You have a hole in your schedule with 2hrs unaccounted for.

Comment author: Benquo 29 November 2016 08:42:14PM * 3 points

Can you say more about the perceived smugness? It seems to me like a straightforward account of the obvious limitation to GiveWell's scope. I only didn't upvote because it seemed too obvious.

Comment author: peter_hurford 30 November 2016 06:55:10PM 6 points

To me, the tone came across as "Ho ho ho, look at those stupid GiveWell people who have never heard of the streetlight effect! They're blinded by their own metrics and can't even see how awesome MIRI is!" when there's no engagement with or acknowledgement of (a) the materials from GiveWell that address the streetlight effect argument, (b) OpenPhil, or (c) how to actually start resolving the problem (or even that the problem is particularly hard).

I don't want to have a high demand for rigor, especially for Discussion-type posts -- for me, it's more about the lack of humility.

Comment author: peter_hurford 29 November 2016 08:02:42PM 6 points

I downvoted because this feels overly smug to me. I think it's a legitimate issue, but GiveWell has made many arguments for why they do what they do, and OpenPhil has made some progress on figuring out how to evaluate AI organizations. Sure, many fields might very well be vastly more fruitful, but they also might not. How do we know which ones?
