DSimon comments on Philosophy: A Diseased Discipline - Less Wrong

88 Post author: lukeprog 28 March 2011 07:31PM


Comment author: DSimon 29 November 2011 11:39:08PM

Firstly, this is an argument for studying "human techniques", and devising algorithmic implementations, and not an argument for abandoning these techniques.

Indeed, I should have been more specific; not all processes used in AI need to be analogous to human ones, of course. All I meant was that when trying to provide a complete spec of a human process, it is very easy to accidentally lean on other human mental processes that seem at zeroth glance to be "obvious". It's hard to spot those mistakes without an outside view.

Secondly, if we assume that uploading is possible, this problem can be hacked around by incorporating an uploaded human into the solution.

To a degree, though I suspect that even in an uploaded mind it would be tricky to isolate and copy out individual techniques, since they're all likely to be non-locally-cohesive and heavily interdependent.

Comment author: Bugmaster 30 November 2011 12:37:07AM

It's hard to spot those mistakes without an outside view.

Right, that makes sense.

To a degree, though I suspect that even in an uploaded mind it would be tricky to isolate and copy-out individual techniques...

True, but I wasn't thinking of using an uploaded mind to extract and study those techniques; I was thinking of simply plugging the mind into your overall architecture and treating it as a black box that gives you the right answers, somehow. It's a poor solution, but it's better than nothing -- assuming that the Singularity is imminent and we're all about to be nano-recycled into quantum computronium unless we manage to turn the AI into an FAI in the next 72 hours.