Perplexed comments on Convergence Theories of Meta-Ethics - Less Wrong

7 Post author: Perplexed 07 February 2011 09:53PM


Comments (87)


Comment author: Perplexed 08 February 2011 09:44:47PM 1 point

So to summarize, your conclusion seems to be that we should build an arbitrary-goals AI as soon as possible.

Huh? What exactly do you think you are summarizing? If you want to produce a cartoon version of my opinions on this thread, try "We should do all we can to avoid the FOOMing singleton scenario, instead trying to create a society of reproducing AIs, interlocked with each other and with humanity by a network of dependencies. If we do, the details of the initial goal systems may matter less than they would with a singleton."

Comment author: Vladimir_Nesov 08 February 2011 10:04:00PM 1 point

I see, so "if there is convergence" is not a point of theoretical uncertainty, but something that depends on the way the AIs are built. Makes sense (as a position, not something I agree with).

But the whole point of my posting was that, if there is convergence (in the second sense), then those initial values may make very little difference in the outcome of the universe.

Comment author: Perplexed 08 February 2011 10:36:16PM 0 points

I see, so "if there is convergence" is not a point of theoretical uncertainty, but something that depends on the way the AIs are built.

Well, it is both. Convergence in the sense of "outcome is independent of the starting point" has not been proved for any AI/updating architecture. Also, I strongly suspect that the detailed outcome will depend quite a bit on the way AIs interact and produce successors/self-updates, even if the fact of convergence does not.

Comment author: timtyler 09 February 2011 01:40:24AM 0 points

<cartoon>We should do all we can to avoid the FOOMing singleton scenario, instead trying to create a society of reproducing AIs, interlocked with each other and with humanity by a network of dependencies.</cartoon>

That reminds me of:

"An AGI raised in a box could become dangerously solipsistic, probably better to raise AGIs embedded in the social network..."

Comment author: Perplexed 09 February 2011 05:15:36AM 0 points

Goertzel's comment doesn't even make sense to me. Why is he placing 'in a box' in opposition to 'embedded in the social network'? The two issues are orthogonal: AIs can be social or singleton, either in a box or in the real world. ETA: Well, if you mean the human social network, then I suppose a boxed AI cannot participate. Though I suppose we could let some simulated humans into the box to keep the AI company.

Besides, I've never really considered solipsists to be any more dangerous than anyone else.

Comment author: timtyler 09 February 2011 01:55:26PM 0 points

Besides, I've never really considered solipsists to be any more dangerous than anyone else.

"Now I will destroy the whole world - What a Bokononist says before committing suicide."

Comment author: timtyler 09 February 2011 08:40:40AM 0 points

Though I suppose we could let some simulated humans into the box [...]

We don't have any half-decent simulated humans, though.