XiXiDu comments on Risks from AI and Charitable Giving - Less Wrong

Post author: XiXiDu 13 March 2012 01:54PM




Comment author: XiXiDu 14 March 2012 08:24:36PM 2 points

> People have suggested dozens of scenarios, from taking over the Internet to hacking militaries to producing nanoassemblers & eating everything.

But none of them makes any sense to me; see below.

That is not what you said. I'll requote it:

> Wait, your quote said what I said I said you said I didn't say.

> Because that's the best way to take over...

I have no idea. You don't have any idea either or you'd have told me by now. You are just saying that magic will happen and the world will be ours. That's the problem with risks from AI.

> Let's see, I'm too dumb to see that I am systematically underinvesting in research despite the high returns when I do investigate something other than X, and apparently I'm also too dumb to notice that I am underperforming compared to those oh-so-diverse humans' research programs.

See, that's the problem. The AI can't acquire the resources necessary to acquire resources in the first place. It might figure out that it will need to pursue various strategies or build nanoassemblers, but how does it do that?

Taking over the Internet is no answer, because the question is how. Building nanoassemblers is no answer, because the question is how.

Comment author: gwern 14 March 2012 09:09:39PM 3 points

> I have no idea. You don't have any idea either or you'd have told me by now. You are just saying that magic will happen and the world will be ours. That's the problem with risks from AI.

We have plenty of ideas. Yvain posted a Discussion thread filled with ideas for how. "Alternate history" is an old sub-genre dating back at least to Mark Twain (who makes many concrete suggestions about how his Connecticut Yankee would do something similar).

But what's the point? See my reply to Bugmaster: it's impossible, or would defeat the point of the discussion, to actually execute the strategies, and anything short of execution is vulnerable to 'that's magic!11!!1'.

> The AI can't acquire the resources that are necessary to acquire resources in the first place. It might figure out that it will need to pursue various strategies or build nanoassemblers, but how does it do that?

By reading the many discussions of what could go wrong and implementing whatever is easiest, like hacking computers. Oh the irony!