dspeyer comments on Thoughts on the Singularity Institute (SI) - Less Wrong

256 Post author: HoldenKarnofsky 11 May 2012 04:31AM




Comment author: dspeyer 11 May 2012 02:08:12AM *  2 points

A tool+human differs from a pure AI agent in two important ways:

  • The human (probably) already has naturally-evolved morality, sparing us the very hard problem of formalizing that.

  • We can arrange for (almost) everyone to have access to the tool, allowing tooled humans to counterbalance each other.

Comment author: TheOtherDave 11 May 2012 03:13:38AM 0 points

Well, I certainly agree that both of those things are true.

And it might be that human-level evolved moral behavior is the best we can do... I don't know. It would surprise me, but it might be true.

That said... given how unreliable such behavior is, if human-level evolved moral behavior even approximates the best we can do, it seems likely that I would do best to work towards neither T nor A ever achieving the level of optimizing power we're talking about here.

Comment author: dspeyer 11 May 2012 03:23:45AM 4 points

Humanity isn't that bad. Remember that the world we live in is pretty much the way humans made it, mostly deliberately.

But my main point was that existing humanity bypasses the very hard did-you-code-what-you-meant-to problem.

Comment author: TheOtherDave 11 May 2012 03:33:30AM 0 points

I agree with that point.