XiXiDu comments on Risks from AI and Charitable Giving - Less Wrong

Post author: XiXiDu, 13 March 2012 01:54PM




Comment author: gwern 14 March 2012 07:16:24PM, 4 points

> Show me how that is going to work out. Or at least outline how a smarter-than-human AI is supposed to take over the world. Why is nobody doing that?

People have suggested dozens of scenarios, from taking over the Internet to hacking militaries to producing nanoassemblers & eating everything. The scenarios will never be enough for critics because until they are actually executed there will always be some doubt that they would work - at which point there would be no need to discuss them any more. Just like in cryonics (if you already had the technology to revive someone, there would be no need to discuss whether it would work). This is intrinsic to any discussion of threats that have not already struck or technologies which don't already exist.

I am reminded of the quote, "'Should we trust models or observations?' In reply we note that if we had observations of the future, we obviously would trust them more than models, but unfortunately observations of the future are not available at this time."

> 100 people are not enough to produce and deploy toxic gas or bombs in a way that would defeat a far-flung empire of many thousands of people.

Because that's the best way to take over...

> I said that it is highly speculative that there exists a simple algorithm that would constitute a consequentialist AI with simple values that could achieve the same as the aforementioned society of minds, and therefore work better than evolution. You just turned that into "XiXiDu believes that simple algorithms can't exhibit creativity."

That is not what you said. I'll requote it:

> Complex values are the cornerstone of diversity, which in turn enables creativity and drives the exploration of various conflicting routes. A singleton with a stable utility-function lacks the feedback provided by a society of minds and its cultural evolution... An AI with simple values will simply lack the creativity, due to a lack of drives, to pursue the huge spectrum of research that a society of humans does pursue. Which will allow an AI to solve some well-defined narrow problems, but it will be unable to make use of the broad range of synergetic effects of cultural evolution. Cultural evolution is a result of the interaction of a wide range of utility-functions.

If a singleton lacks feedback from diversity, and the 'cornerstone' of diversity is something a singleton cannot have... This is actually an even stronger claim than one about simple algorithms, because a singleton could be a very complex algorithm. (You see how charitable I'm being towards your claims? Yet no one appreciates it.)

And that's not even getting into your claim about spectrum of research, which seems to impute stupidity to even ultraintelligent agents.

('Let's see, I'm too dumb to see that I am systematically underinvesting in research despite the high returns when I do investigate something other than X, and apparently I'm also too dumb to notice that I am underperforming compared to those oh-so-diverse humans' research programs. Gosh, no wonder I'm failing! I wonder why I am so stupid like this, I can't seem to find any proofs of it.')

Comment author: XiXiDu 14 March 2012 08:24:36PM, 2 points

> People have suggested dozens of scenarios, from taking over the Internet to hacking militaries to producing nanoassemblers & eating everything.

But none of them make any sense to me, see below.

> That is not what you said. I'll requote it:

Wait, your quote said what I said I said you said I didn't say.

> Because that's the best way to take over...

I have no idea. You don't have any idea either or you'd have told me by now. You are just saying that magic will happen and the world will be ours. That's the problem with risks from AI.

> Let's see, I'm too dumb to see that I am systematically underinvesting in research despite the high returns when I do investigate something other than X, and apparently I'm also too dumb to notice that I am underperforming compared to those oh-so-diverse humans' research programs.

See, that's the problem. The AI can't acquire the resources necessary to acquire resources in the first place. It might figure out that it will need to pursue various strategies or build nanoassemblers, but how does it do that?

Taking over the Internet is no answer, because the question is how. Building nanoassemblers is no answer, because the question is how.

Comment author: gwern 14 March 2012 09:09:39PM, 3 points

> I have no idea. You don't have any idea either or you'd have told me by now. You are just saying that magic will happen and the world will be ours. That's the problem with risks from AI.

We have plenty of ideas. Yvain posted a Discussion thread filled with ideas about how. "Alternate history" is an old sub-genre dating back at least to Mark Twain (who makes many concrete suggestions about how his Connecticut Yankee would do something similar).

But what's the point? See my reply to Bugmaster - it's impossible or would defeat the point of the discussion to actually execute the strategies, and anything short of execution is vulnerable to 'that's magic!11!!1'

> The AI can't acquire the resources that are necessary to acquire resources in the first place. It might figure out that it will need to pursue various strategies or build nanoassemblers, but how does it do that?

By reading the many discussions of what could go wrong and implementing whatever is easiest, like hacking computers. Oh the irony!