XiXiDu comments on AI Risk and Opportunity: A Strategic Analysis - Less Wrong

Post author: lukeprog 04 March 2012 06:06AM




Comment author: XiXiDu 10 March 2012 08:50:21PM 5 points

I, for one, am ultimately concerned with doing whatever's best. I'm not wedded to doing FAI, and am certainly not wedded to doing 9-researchers-in-a-basement FAI.

Well, that's great. Still, there are quite a few problems.

How do I know

  • ... that SI does not increase existential risk by solving problems that can be used to build AGI earlier?
  • ... that you won't launch a half-baked friendly AI that will turn the world into a hell?
  • ... that you don't implement some strategies that will do really bad things to some people, e.g. myself?

Every time I see a video of one of you people I think, "Wow, those seem like really nice people. I am probably wrong. They are going to do the right thing."

But seriously, is that enough? Can I trust a few people with the power to shape the whole universe? Can I trust them enough to actually give them money? Can I trust them enough with my life until the end of the universe?

You can't even tell me what "best" or "right" or "winning" stands for. How do I know that it can be or will be defined in a way that those labels will apply to me as well?

I have no idea what your plans are for the day when time runs out. I just hope that you are not going to hope for the best and run some not-quite-friendly AI that does really crappy things. I hope you consider the possibility of blowing everything up rather than risking even worse outcomes.

Comment author: lukeprog 11 March 2012 08:17:45AM 3 points

Can I trust a few people with the power to shape the whole universe?

Hell no.

This is an open problem. See "How can we be sure a Friendly AI development team will be altruistic?" on my list of open problems.

Comment author: timtyler 11 March 2012 02:00:30PM 1 point

I hope you consider the possibility of blowing everything up rather than risking even worse outcomes.

Blowing everything up would be pretty bad. Bad enough not to encourage the possibility.