Comment author: spuckblase 15 December 2011 03:47:12PM 1 point [-]

Ok, so who's the other one living in Berlin?

Comment author: cousin_it 07 December 2011 09:38:08PM *  9 points [-]

You have to be at least as smart as EY or Justin Corwin to describe the arguments that convince the human guardian. I wonder if the film's authors did some AI-box experiments of their own. I'm kinda sad that I never played the game myself (as AI, of course) for the shameful reason that people seem to think highly of me and losing would be a big reputation hit. If there are others who feel the same way, maybe we could set up some experiments where AI players are anonymous.

Comment author: spuckblase 08 December 2011 02:39:06PM 2 points [-]

If there are others who feel the same way, maybe we could set up some experiments where AI players are anonymous.

In that case, I'd like to participate as gatekeeper. I'm ready to put some money on the line.

BTW, I wonder if Clippy would want to play a human, too.

Comment author: spuckblase 24 November 2011 03:03:25PM *  1 point [-]

Some have argued that a machine cannot reach human-level general intelligence; see, for example, Lucas (1961); Dreyfus (1972); Searle (1980); Block (1981); Penrose (1994). But Chalmers (2010) points out that their arguments are irrelevant:

To reply to the Lucas, Penrose, and Dreyfus objections, we can note that nothing in the singularity idea requires that an AI be a classical computational system or even that it be a computational system at all. For example, Penrose (like Lucas) holds that the brain is not an algorithmic system in the ordinary sense, but he allows that it is a mechanical system that relies on certain nonalgorithmic quantum processes. Dreyfus holds that the brain is not a rule-following symbolic system, but he allows that it may nevertheless be a mechanical system that relies on subsymbolic processes (for example, connectionist processes). If so, then these arguments give us no reason to deny that we can build artificial systems that exploit the relevant nonalgorithmic quantum processes, or the relevant subsymbolic processes, and that thereby allow us to simulate the human brain. As for the Searle and Block objections, these rely on the thesis that even if a system duplicates our behaviour, it might be missing important ‘internal’ aspects of mentality: consciousness, understanding, intentionality, and so on... [But if] there are systems that produce apparently superintelligent outputs, then whether or not these systems are truly conscious or intelligent, they will have a transformative impact on the rest of the world.

Chalmers (2010) summarizes two arguments suggesting that machines can reach human-level general intelligence:

* The emulation argument (see section 7.3)
* The evolutionary argument (see section 7.4)

This whole paragraph doesn't seem to belong to section 1.11.

Comment author: shminux 16 November 2011 08:41:49PM *  6 points [-]

it is standard in a rational discourse to include and address opposing arguments, provided your audience includes anyone other than existing supporters. At a minimum, one should state an objection and cite a discussion of it. Here are a number of points that might be worth mentioning:

We may one day design a machine that surpasses human skill at designing artificial intelligences.

Are there any alternatives?

superintelligence represents an 'event horizon' beyond which humans cannot model the future

We have trouble modeling the future already (our world is probably rather unlike what experts had predicted 25 years ago). If the horizon is the limit of a shrinking predictability timescale, what are the arguments for and against this scale being a monotonically decreasing function?

Technological progress enables even faster technological progress.

Similar to the one above. Sometimes it slows down, halts, or reverses for decades or centuries.

I assume that your citations address these questions, but it is useful to state the obvious objections, so the reader is not left hanging.

A technical point:

He made an analogy to the event horizon of a black hole, beyond which the predictive power of physics at the gravitational singularity breaks down.

Physics works mighty fine at the event horizon, predicting what happens to something crossing it with any desired accuracy. It only breaks down at or near the singularity, whether or not it is shrouded by a horizon (not all singularities have to be). While the event horizon is a cute popsci analogy, it should be treated as such, without making false physical statements.

Comment author: spuckblase 17 November 2011 09:06:22AM 0 points [-]

it is standard in a rational discourse to include and address opposing arguments, provided your audience includes anyone other than supporters already. At a minimum, one should state an objection and cite a discussion of it.

This is not a rational discourse but part of an FAQ, providing explanations/definitions. Counterarguments would be misplaced.

Comment author: spuckblase 15 November 2011 06:46:15PM 0 points [-]

For those who read German or can infer the meaning: philosopher Christoph Fehige shows a way to embrace utilitarianism and dust specks.

Comment author: spuckblase 15 November 2011 10:56:18AM 0 points [-]

"Literalness" is explained in sufficient detail to get a first idea of the connection to FAI, but "Superpower" is not.

Comment author: spuckblase 15 November 2011 09:59:10AM 0 points [-]

going back to the 1956 Dartmouth conference on AI

maybe better (if this is good English): going back to the seminal 1956 Dartmouth conference on AI

Comment author: spuckblase 15 November 2011 09:03:26AM 2 points [-]

There are many types of digital intelligence. To name just four:

Readers might like to know what the others are and why you chose those four.

Comment author: spuckblase 11 November 2011 05:04:25PM 0 points [-]

Relevant? (A fake ad by renowned artist Katerina Jebb)

Comment author: spuckblase 07 November 2011 02:46:38PM *  4 points [-]

Die Forscher kombinieren Daten aus Informatik und psychologischen Studien. Ihr Ziel: Eine Not-to-do-Liste, die jedes Unternehmen bekommt, das an künstlicher Intelligenz arbeitet.

Rough translation:

The researchers combine data from computer science and psychological studies. Their goal: a not-to-do list, given to every organization working on artificial intelligence.
