
nyan_sandwich comments on "Stupid" questions thread - Less Wrong Discussion

Post author: gothgirl420666 13 July 2013 02:42AM 40 points




Comment author: Alejandro1 13 July 2013 04:35:28PM 7 points

It seems to me that there are basically two approaches to preventing a UFAI intelligence explosion: a) making sure that the first intelligence explosion is a FAI instead; b) making sure that an intelligence explosion never occurs. The first one involves solving (with no margin for error) the philosophical/ethical/logical/mathematical problem of defining FAI, and in addition the sociological/political problem of doing it "in time", convincing everyone else, and ensuring that the first intelligence explosion occurs according to this resolution. The second one involves just the sociological/political problem of convincing everyone of the risks and banning/discouraging AI research "in time" to avoid an intelligence explosion.

Naively, it seems to me that the second approach is more viable--it seems comparable in scale to something between stopping the use of CFCs (fairly easy) and stopping global warming (very difficult, but it is premature to say impossible). At any rate, it sounds easier than solving (over a few years or decades) so many hard philosophical and mathematical problems, with no margin for error and under time pressure to finish ahead of a UFAI being developed.

However, it seems (from what I read on LW and found quickly browsing the MIRI website; I am not particularly well informed, hence writing this in the Stupid Questions thread) that most of MIRI's efforts go toward the first approach. Has there been a formal argument for why it is preferable, or are there efforts on the second approach I am unaware of? The only discussion I found was Carl Shulman's "Arms Control and Intelligence Explosions" paper, but it is brief and nothing like a formal analysis comparing the benefits of each strategy. I am worried the situation might be biased by LW/MIRI people being more interested in (and seeing as more fun) the progress on the timeless philosophical problems necessary for (a) than the political coalition-building and propaganda campaigns necessary for (b).

Comment author: [deleted] 13 July 2013 05:56:23PM 11 points

Given enough time for ideas to develop, any smart kid in a basement could build an AI, and every organization in the world has a massive incentive to do so. Only omnipresent surveillance could prevent everyone from writing a particular computer program.

Once you have enough power flying around to actually prevent AI, you are dealing with AI-level threats already (a not-necessarily-friendly singleton).

So FAI is actually the easiest way to prevent UFAI.

The other reason is that a Friendly Singleton would be totally awesome. Like so totally awesome that it would be worth it to try for the awesomeness alone.

Comment author: shminux 13 July 2013 06:03:59PM 0 points

The other reason is that a Friendly Singleton would be totally awesome.

Uh, apparently my awesome is very different from your awesome. What scares me is this "Singleton" thing, not the friendly part.

Comment author: [deleted] 13 July 2013 06:15:23PM 11 points

Hmmm. What is it going to do that is bad, given that it has the power to do the right thing, and is Friendly?

We have inherited some anti-authoritarian propaganda memes from a culture war that is no longer relevant, and those taint the evaluation of a Singleton even though they really don't apply. At least that's how it felt to me when I thought it through.

Comment author: pop 17 July 2013 03:29:24AM 1 point

Upvoted.

I'm not sure why more people around here are not concerned about the singleton thing. It almost feels like yearning for a god on some people's part.

Comment author: pop 17 July 2013 03:24:46AM 1 point

The other reason is that a Friendly Singleton would be totally awesome. Like so totally awesome that it would be worth it to try for the awesomeness alone.

Your tone reminded me of super-religious folk who are convinced that, say, "Jesus is coming back soon!" and that it'll be "totally awesome".

Comment author: [deleted] 17 July 2013 04:41:33AM 1 point

That's nice.

Your comment reminds me of those internet atheists who are so afraid of being religious that they refuse to imagine how much better the world could be.

Comment author: pop 17 July 2013 05:03:14AM 0 points

I do imagine how much better the world could be. I actually do want MIRI to succeed. But currently I have low confidence in their future success, so I don't feel "bliss" (if that's the right word).

BTW I'm actually slightly agnostic because of the simulation argument.

Comment author: [deleted] 17 July 2013 05:15:19AM 0 points

I don't feel "bliss" (if that's the right word).

Enthusiasm? Excitement? Hope?

I'm actually slightly agnostic because of the simulation argument.

Yep. I don't take it too seriously, but it's at least coherent to imagine beings outside the universe who could reach in and poke at us.

Comment author: Alejandro1 13 July 2013 06:19:29PM 0 points

But in the current situation (or even a few years from now), would it be possible for a smart kid in a basement to build an AI from scratch? Isn't it something that still requires a lot of progress to build on? See my reply to Qiaochu.

Comment author: Manfred 13 July 2013 07:35:32PM 1 point

So will progress just stop for as long as we want it to?

Comment author: Alejandro1 13 July 2013 07:50:18PM 1 point

The question is whether it would be possible to ban further research and stop progress (open, universally accessible, buildable-upon progress) in time, while AGI is still far enough away that an isolated group in a basement would have no chance of achieving it on its own.

Comment author: Manfred 13 July 2013 08:30:53PM 2 points

If by "basement" you mean "anywhere, working in the interests of any organization that wants to gain a technology advantage over the rest of the world," then sure, I agree that this is a good question. So what do you think the answer is?

Comment author: Alejandro1 14 July 2013 02:18:52AM 3 points

I have no idea! I am not a specialist of any kind in AI development. That is why I posted in the Stupid Questions thread asking "has MIRI considered this and made a careful analysis?" instead of making a top-level post saying "MIRI should be doing this". It may seem that in this subthread I am actively arguing for strategy (b), but what I am really doing is pushing back against what I see as insufficient answers to such an important question.

So... what do you think the answer is?

Comment author: Manfred 14 July 2013 02:40:52AM 0 points

If you want my answers, you'll need to humor me.