lukeprog comments on Less Wrong Rationality and Mainstream Philosophy - Less Wrong

106 Post author: lukeprog 20 March 2011 08:28PM


Comment author: lukeprog 22 May 2011 03:57:01AM 10 points

Philosophy quote of the day:

I am prepared to go so far as to say that within a few years, if there remain any philosophers who are not familiar with some of the main developments in artificial intelligence, it will be fair to accuse them of professional incompetence, and that to teach courses in philosophy of mind, epistemology, aesthetics, philosophy of science, philosophy of language, ethics, metaphysics, and other main areas of philosophy, without discussing the relevant aspects of artificial intelligence will be as irresponsible as giving a degree course in physics which includes no quantum theory.

Aaron Sloman (1978)

Comment author: Perplexed 22 May 2011 04:30:34AM *  9 points

According to the link:

Aaron Sloman is a philosopher and researcher on artificial intelligence and cognitive science.

So, we have a spectacular misestimation of the time frame - a claim, made 33 years ago, that AI would be seen as important "within a few years". That is off by an order of magnitude (and still counting!). Do we blame his confusion on the fact that he is a philosopher, or was the over-optimism a symptom of his activity as an AI researcher? :)

ETA:

as irresponsible as giving a degree course in physics which includes no quantum theory.

I'm not sure I like the analogy. QM is foundational for physics, while AI merely shares some (as yet unknown) foundation with all those mind-oriented branches of philosophy. A better analogy might be "giving a degree course in biology which includes no exobiology".

Hmmm. I'm reasonably confident that biology degree programs will not include more than a paragraph on exobiology until we have an actual example of exobiology to talk about. So what is the argument for doing otherwise with regard to AI in philosophy?

Oh, yeah. I remember. Philosophers, unlike biologists, have never shied away from investigating things that are not known to exist.

Comment author: ata 22 May 2011 06:32:22AM 4 points

So, we have a spectacular misestimation of the time frame - claiming 33 years ago that AI would be seen as important "within a few years".

He didn't necessarily predict that AI would be seen as important in that timeframe; what he said was that if it wasn't, philosophers would have to be incompetent and their teaching irresponsible.

Comment author: wedrifid 22 May 2011 06:46:19AM *  5 points

what he said was that if it wasn't, philosophers would have to be incompetent and their teaching irresponsible.

Full marks... but let's be honest, he doesn't earn many difficulty points for making that prediction...

Comment author: lukeprog 24 May 2011 05:04:08AM 0 points

I didn't read the whole article. Where did Sloman claim that AI would be seen as important within a few years?

Comment author: Perplexed 24 May 2011 03:26:02PM 0 points

Where did Sloman claim that AI would be seen as important within a few years?

I inferred that he would characterize it as important within that time frame from:

... within a few years, if there remain any philosophers who are not familiar with some of the main developments in artificial intelligence, it will be fair to accuse them of professional incompetence ...

together with a (perhaps unjustified) assumption that philosophers refrain from calling their colleagues "professionally incompetent" unless the stakes are important. And that they generally do what is fair.