
Is there any particular aspect of this that is most interesting/relevant to LW?

Seconded. I was familiar with that website already, and for anyone interested in interaction design it's absolutely worth reading, but I don't see any specific rationality relevance.

Check on whether the usual way things are done might be leaving something important out?

More than most of us would like to admit... This rant explains one of the things we must pay attention to if we want to do effective intelligence amplification. And intelligence amplification moves you one step ahead - either toward building AGI or toward coping with our technical/scientific/conceptual level being insufficient.

I think it was approximately as relevant as AI should be (which is not very, technically, but it's inspiring, and there are ample opportunities to tie in rationality lessons - in this case, how to figure out what your terminal goal actually should be and make long-term plans around it).

Relevant or not, I was glad it was linked.

It's a TED talk about brains having evolved to control movement, and I was planning to post it to this thread even before Bayes got mentioned.