TimS comments on Philosophy: A Diseased Discipline - Less Wrong

88 Post author: lukeprog 28 March 2011 07:31PM




Comment author: Bugmaster 28 November 2011 10:30:08PM 2 points

Think like a cognitive scientist and AI programmer.

Is it possible to think "like an AI programmer" without being an AI programmer ? If the answer is "no", as I suspect it is, then doesn't this piece of advice basically say, "don't be a philosopher, be an AI programmer instead" ? If so, then it directly contradicts your point that "philosophy is not useless".

To put it in a slightly different way, is creating FAI primarily a philosophical challenge, or an engineering challenge ?

Comment author: TimS 29 November 2011 03:30:03AM 2 points

Creating AI is an engineering challenge. Making FAI requires an understanding of what we mean by Friendly. If you don't think that is a philosophy question, I would point to the multiplicity of inconsistent moral theories throughout history to try to convince you otherwise.

Comment author: Bugmaster 29 November 2011 03:50:24AM 0 points

Thanks, that does make sense. But, in this case, would "thinking like an AI programmer" really help you answer the question of "what we mean by Friendly" ? Of course, once we do get an answer, we'd need to implement it, which is where thinking like an AI programmer (or actually being one) would come in handy. But I think that's also an engineering challenge at that point.

FWIW, I know there are people out there who would claim that friendliness/morality is a scientific question, not a philosophical one, but I myself am undecided on the issue.

Comment author: Vaniver 29 November 2011 04:02:37AM 2 points

But, in this case, would "thinking like an AI programmer" really help you answer the question of "what we mean by Friendly" ? Of course, once we do get an answer, we'd need to implement it, which is where thinking like an AI programmer (or actually being one) would come in handy. But I think that's also an engineering challenge at that point.

If you don't think like an AI programmer, you will be tempted to use concepts without understanding them well enough to program them. I don't think that's reduced to the level of 'engineering challenge.'

Comment author: Bugmaster 29 November 2011 04:12:55AM *  0 points

Are you saying that it's impossible to correctly answer the question "what does 'friendly' mean ?" without understanding how to implement the answer by writing a computer program ? If so, why do you think that ?

Edit: added "correctly" in the sentence above, because it's trivially possible to just answer "bananas !" or something :-)

Comment author: DSimon 29 November 2011 04:27:02AM 5 points

I don't think the division is so sharp as all that. Rather, what Vaniver is getting at, I think, is that one is capable of correctly and usefully answering the question "What does 'Friendly' mean?" in proportion to one's ability to reason algorithmically about subproblems of Friendliness.

Comment author: Bugmaster 29 November 2011 09:35:02PM 1 point

I see, so you're saying that a philosopher who is not familiar with AI might come up with all kinds of philosophically valid definitions of friendliness, which would still be impossible to implement (using a reasonable amount of space and time) and thus completely useless in practice. That makes sense. And (presumably) if we assume that humans are kind of similar to AIs, then the AI-savvy philosopher's ideas would have immediate applications, as well.

So, that makes sense, but I'm not aware of any philosophers who have actually followed this recipe. It seems like at least a few such philosophers should exist, though... do they ?

Comment author: DSimon 29 November 2011 11:19:43PM *  0 points

[P]hilosophically valid definitions of friendliness, which would still be impossible to implement (using a reasonable amount of space and time) and thus completely useless in practice.

Yes, or more sneakily, impossible to implement due to a hidden reliance on human techniques for which there is as-yet no known algorithmic implementation.

Programmers like to say "You don't truly understand how to perform a task until you can teach a computer to do it for you". A computer, or any other sort of rigid mathematical mechanism, is unable to make the 'common sense' connections that a human mind can make. We humans are so good at that sort of thing that we often make many such leaps in quick succession without even noticing!

Implementing an idea on a computer forces us to slow down and understand every step, even the ones we make subconsciously. Otherwise the implementation simply won't work. One doesn't get as thorough a check when explaining things to another human.
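A toy illustration of that point (not from the thread itself): even an everyday instruction like "split this pot of money fairly" hides decisions a human makes without noticing. Trying to write it as code forces those decisions into the open. The function name and the equal-shares reading of "fair" below are illustrative assumptions, not anything proposed by the commenters.

```python
def fair_split(amounts, n_people):
    """Split a pot 'fairly' among n_people.

    Writing this down forces a choice that everyday talk glosses over:
    'fair' is pinned here as 'equal integer shares', which is only one
    of many defensible readings (equal need, equal contribution, ...).
    """
    total = sum(amounts)
    share, remainder = divmod(total, n_people)
    # A question invisible in conversation but unavoidable in code:
    # who gets the leftover units? Here we simply report them.
    return [share] * n_people, remainder

shares, leftover = fair_split([10, 5, 7], 4)
# shares == [5, 5, 5, 5], leftover == 2
```

The interesting part is not the arithmetic but the comments: each marks a "common sense" leap that had to be made explicit before the machine could do anything at all.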

Philosophy in general is enriched by an understanding of math and computation, because it provides a good external view of the situation. This effect is of course only magnified when the philosopher is specifically thinking about how to represent human mental processes (such as volition) in a computational way.

Comment author: Bugmaster 29 November 2011 11:26:12PM 1 point

I agree with most of what you said, except for this:

Yes, or more sneakily, impossible to implement due to a hidden reliance on human techniques for which there is as-yet no known algorithmic implementation.

Firstly, this is an argument for studying "human techniques", and devising algorithmic implementations, and not an argument for abandoning these techniques. Assuming the techniques are demonstrated to work reliably, of course.

Secondly, if we assume that uploading is possible, this problem can be hacked around by incorporating an uploaded human into the solution.

Comment author: DSimon 29 November 2011 11:39:08PM *  1 point

Firstly, this is an argument for studying "human techniques", and devising algorithmic implementations, and not an argument for abandoning these techniques.

Indeed, I should have been more specific; not all processes used in AI need to be analogous to humans, of course. All I meant was that it is very easy, when trying to provide a complete spec of a human process, to accidentally lean on other human mental processes that seem on zeroth-glance to be "obvious". It's hard to spot those mistakes without an outside view.

Secondly, if we assume that uploading is possible, this problem can be hacked around by incorporating an uploaded human into the solution.

To a degree, though I suspect that even in an uploaded mind it would be tricky to isolate and copy-out individual techniques, since they're all likely to be non-locally-cohesive and heavily interdependent.

Comment author: Vaniver 29 November 2011 02:57:20PM 0 points

Endorsed.