Earlier, I provided an overview of formal epistemology, a field of philosophy highly relevant to the discussions on Less Wrong. Today I do the same for another branch of philosophy: the philosophy of artificial intelligence (here's another overview).

Some debate whether machines can have minds at all. The most famous argument against machines achieving general intelligence comes from Hubert Dreyfus. The most famous argument against the claim that an AI can have mental states is John Searle's Chinese Room argument, which comes in several variations and to which there are many replies. Most Less Wrongers have already concluded that yes, machines can have minds. Others debate whether machines can be conscious.

There is much debate on the significance of variations on the Turing Test. There is also a great deal of interplay between artificial intelligence work and philosophical logic. There is some debate over whether minds are multiply realizable, though most accept that they are. There is some literature on the problem of embodied cognition: human minds can do certain things only because of their long developmental history; can those achievements be replicated in a machine built "from scratch"?

Of greater interest to me, and perhaps to most Less Wrongers, is the ethics of artificial intelligence. Most of the work here so far concerns the rights of robots. For Less Wrongers, the more pressing concern is that of creating AIs that behave ethically. (In 2009, robots programmed to cooperate evolved to lie to each other.) Perhaps most pressing of all is the need to develop Friendly AI, but as far as I can find, no work on Good's intelligence explosion singularity idea has been published in a major peer-reviewed journal except for David Chalmers' "The Singularity: A Philosophical Analysis" (Journal of Consciousness Studies 17: 7-65). The next closest thing may be something like "On the Morality of Artificial Agents" by Floridi & Sanders.
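
To make the deception result more concrete, here is a minimal, hypothetical sketch of the kind of evolutionary dynamic involved; it is not the actual 2009 robot experiment. An "honesty" gene controls how reliably an agent signals the location of food, and because honest signals attract competitors, selection can drive honesty down. The fitness function, parameters, and names below are all invented for illustration.

```python
# Toy illustration (NOT the 2009 experiment): deception can evolve when honest
# signaling about food attracts competitors and reduces the signaler's payoff.
import random

POP_SIZE = 100        # number of agents
GENERATIONS = 200     # evolutionary time
FOOD_REWARD = 10.0    # payoff for being at the food source
CROWDING_COST = 0.5   # payoff lost per competitor drawn to the same food source
MUTATION_STD = 0.05   # mutation size for the "honesty" gene

def fitness(honesty, population):
    """Expected payoff for an agent whose gene is `honesty` (probability of
    signaling the true food location). Honest signals attract other agents to
    the signaler's patch, which crowds it and lowers the payoff."""
    expected_competitors = honesty * (len(population) - 1) * 0.1
    return FOOD_REWARD - CROWDING_COST * expected_competitors

def evolve():
    # Start with a fully honest population.
    population = [1.0] * POP_SIZE
    for gen in range(GENERATIONS):
        scored = sorted(((fitness(h, population), h) for h in population), reverse=True)
        # Truncation selection: the top half reproduces, with mutation.
        parents = [h for _, h in scored[: POP_SIZE // 2]]
        population = [
            min(1.0, max(0.0, random.choice(parents) + random.gauss(0, MUTATION_STD)))
            for _ in range(POP_SIZE)
        ]
        if gen % 50 == 0:
            print(f"generation {gen:3d}: mean honesty = {sum(population) / POP_SIZE:.2f}")
    return population

if __name__ == "__main__":
    random.seed(0)
    final = evolve()
    print(f"final mean honesty: {sum(final) / POP_SIZE:.2f}")
```

Run as written, mean honesty collapses toward zero within a few dozen generations, which is a highly simplified analogue of cooperative robots evolving to suppress or falsify their signals.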

Perhaps the best overview of the philosophy of artificial intelligence is chapter 26 of Russell & Norvig's Artificial Intelligence: A Modern Approach.

4 comments:

[anonymous]:

as far as I can find, no work on Good's intelligence explosion singularity idea has been published in a major peer-reviewed journal

This is something that's bothered me for a while now--does anyone have any ideas as to why this is the case? And what would it take to change this?

In any case, this post is an excellent compilation of links.

As Luke said, replies are on the way. Among others, Dennett, Churchland, Dreyfus, (Paul) Churchland, Jesse Prinz, and Kevin Kelly (of Ockham efficient convergence fame) agreed to reply in an upcoming issue of the Journal of Consciousness Studies, according to Chalmers' blog.

I will be having someone rid the papers of author information and self-referential remarks before reading them.

What would it take to change this? Have a leading philosopher of mind publish a large paper on it in a major journal.

Chalmers published his paper a few months ago. Just wait. The papers are coming now. Philosophy moves slowly.

I read the Hubert Dreyfus book you linked to (had to buy it used on Amazon). I don't feel it gave me a deeper understanding of the problem or convinced me of any specific insurmountable difficulty in achieving AGI. (And that was quite a disappointment, given all the attention and recommendations I saw for it.) All I found memorable about it was a long string of "They tried this ... but the problem is really hard," over and over.

Do you (or anyone else here) have a more positive appraisal of what can be learned from it?