timtyler comments on Ben Goertzel: The Singularity Institute's Scary Idea (and Why I Don't Buy It) - Less Wrong

32 Post author: ciphergoth 30 October 2010 09:31AM




Comment author: timtyler 30 October 2010 07:23:13PM *  7 points

One of my fundamental contentions is that empathy is a requirement for intelligence beyond a certain point because the consequences of lacking it are too severe to overcome.

That is probably because you don't share a definition of intelligence with most of those here.

Perhaps look through http://www.vetta.org/definitions-of-intelligence/ - and see if you can find your position.

Comment author: mwaser 01 November 2010 01:43:47PM -2 points

Nope. I agree with the vast majority of the vetta definitions.

But let's go with Marcus Hutter - "There are strong arguments that AIXI is the most intelligent unbiased agent possible in the sense that AIXI behaves optimally in any computable environment."

Now, which is optimal -- opting to play a positive-sum game of potentially unbounded length and utility with cooperating humans, or passing up that game forever for a modest short-term gain?

Assume, for the purposes of argument, that the AGI has no immediate pressing need for the short-term gain. (Otherwise we recurse on just how pressing the need is -- and yes, if the need is pressing enough, then unless the agent's goal is to preserve humanity, the intelligent thing to do is to take the short-term gain and wipe out humanity. But how would a super-intelligent AGI have gotten itself into that situation in the first place?) This assumption should answer all of the objections of the form "Well, what if the AGI had a short-term preference and humans weren't it?"
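The trade-off above can be sketched as a standard discounted-payoff comparison from repeated-game theory. All of the numbers below (per-round gain, discount factor, one-time gain) are illustrative assumptions, not figures from the thread; the point is only that a patient agent values a small recurring gain more than a modest one-shot gain:

```python
def cooperation_value(per_round_gain: float, discount: float) -> float:
    """Present value of an indefinitely repeated positive-sum game,
    discounted geometrically: sum of g * d^t over t = 0, 1, 2, ...,
    which equals g / (1 - d) for 0 <= d < 1."""
    assert 0 <= discount < 1, "a patient but finite discount factor"
    return per_round_gain / (1 - discount)

def defection_value(one_time_gain: float) -> float:
    """A single short-term gain, after which the game ends forever."""
    return one_time_gain

if __name__ == "__main__":
    g, d = 1.0, 0.99       # modest gain per round, patient agent (assumed)
    one_shot = 20.0        # the "modest short-term gain" (assumed)
    print(cooperation_value(g, d))   # ~100, dwarfs the one-shot payoff
    print(defection_value(one_shot))
```

Note that the conclusion flips as the discount factor falls (or the one-shot gain grows), which is exactly the "pressing need" caveat in the paragraph above.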

Comment author: gwern 01 November 2010 05:13:54PM 3 points

I am jumping in here from Recent Comments, so perhaps I am missing context - but how is AIXI interacting with humanity an infinite positive-sum gain for it?

It doesn't seem like AIXI could expect even a zero-sum outcome from humanity: we are using up a lot of what could be computronium.

Comment author: timtyler 01 November 2010 09:08:15PM *  2 points

That definition doesn't explicitly mention goals. Many of the definitions do explicitly mention goals. What the definitions usually don't mention is what those goals are - and that permits super-villains, along the lines of General Zod.

If (as it appears) you want to argue that evolution is likely to produce super-saints rather than super-villains, then that's a bit of a different topic. If that is what you wanted to argue, "requirement" was probably the wrong way of putting it.