Will_Newsome comments on Hard problem? Hack away at the edges. - Less Wrong

45 Post author: lukeprog 26 September 2011 10:03AM


Comment author: Vladimir_Nesov 26 September 2011 06:26:23PM *  3 points

Machine learning (in particular, graphical models), more general AI, philosophy, game theory, algorithmic complexity, cognitive science, and neuroscience seem to be mostly useless (beyond the basics) for attacking the friendliness content problem. Pure mathematics seems potentially useful.

Comment author: Will_Newsome 26 September 2011 09:58:24PM *  0 points

Agreed, but I would add algorithmic information theory, deep theoretical computer science, and maybe quantum information theory. There are some interesting questions about hypercomputation, getting information from context, and concrete semi-"physical" AI coordination problems. (Also, reversible computing is just trippy as hell. Intuitions, especially "moral" intuitions, gawk at it.) These are of course secondary to the study of updateless-like decision theories.

Comment author: wedrifid 27 September 2011 12:52:48AM 1 point

> Also reversible computing is just trippy as hell. Intuitions, especially "moral" intuitions, gawk at it.

They do? Why? I haven't experienced moral trippiness myself. This may be because I haven't considered the same things you have, or because my intuitions are eccentric. (Assume I mean 'eccentric in a different way from how your moral intuitions are eccentric' or not, depending on whether you prefer to be seen as having typical moral intuitions or atypical ones.)