jacob_cannell comments on David Chalmers' "The Singularity: A Philosophical Analysis" - Less Wrong

33 Post author: lukeprog 29 January 2011 02:52AM




Comment author: jacob_cannell 30 January 2011 02:18:50AM * 0 points

Irrelevant. Assume you magically have a perfect working simulation of yourself.

Relevant: Can we just assume you magically have a Friendly AI, then?

If the plan for creating a Friendly AI depends on a non-destructive full-brain scan already being available, then the odds of achieving Friendly AI before other forms of AI are achieved vanish to near zero.

Comment author: Vladimir_Nesov 30 January 2011 02:23:02AM 0 points

One step at a time, my good sir! Reducing the philosophical and mathematical problem of Friendly AI to the technological problem of uploading would be an astonishing breakthrough quite by itself.

Comment author: jacob_cannell 30 January 2011 02:39:21AM -1 points

I think this reflects the practical problem with Friendly AI: it is an ideal of perfection taken to an extreme, which expands the problem scope far beyond what is likely to be realizable in the near term.

I expect that most of the world (research teams, companies, the VC community, and so on) will be largely happy with an AGI that just implements an improved version of the human mind.

For example, humans have the ability to model other agents and their goals, and, through love/empathy, to value the well-being of others as part of our own internal goal systems.

I don't yet see why that particular system is more difficult or complex than the rest of AGI.

It seems likely that once we can build an AGI as good as the brain, we can build one that is human-like but has only the love/empathy circuitry in its goal system, with the rest of the crud stripped out.

In other words, if we can build AGIs modeled after the best components of the best examples of altruistic humans, this should be quite sufficient.