jacob_cannell comments on David Chalmers' "The Singularity: A Philosophical Analysis" - Less Wrong

Post author: lukeprog 29 January 2011 02:52AM




Comment author: jacob_cannell 30 January 2011 02:39:21AM -1 points

I think this reflects the practical problem with Friendly AI: it is an ideal of perfection, taken to an extreme that expands the problem scope far beyond what is likely to be realizable in the near term.

I expect that most of the world (research teams, companies, the VC community, and so on) will be largely happy with an AGI that simply implements an improved version of the human mind.

For example, humans have the ability to model other agents and their goals, and through love/empathy we value the well-being of others as part of our own internal goal systems.

I don't yet see why that particular system is more difficult or complex than the rest of AGI.

It seems likely that once we can build an AGI as good as the brain, we can build one that is human-like but has only the love/empathy circuitry in its goal system, with the rest of the crud stripped out.

In other words, if we can build AGIs modeled after the best components of the best examples of altruistic humans, this should be quite sufficient.