nyan_sandwich comments on "Stupid" questions thread - Less Wrong

40 Post author: gothgirl420666 13 July 2013 02:42AM

Comment author: [deleted] 13 July 2013 05:56:23PM 11 points

Given enough time for ideas to develop, any smart kid in a basement could build an AI, and every organization in the world has a massive incentive to do so. Only omnipresent surveillance could prevent everyone from writing a particular computer program.

Once you have enough power flying around to actually prevent AI, you are dealing with AI-level threats already (a not-necessarily friendly singleton).

So FAI is actually the easiest way to prevent UFAI.

The other reason is that a Friendly Singleton would be totally awesome. Like so totally awesome that it would be worth it to try for the awesomeness alone.

Comment author: shminux 13 July 2013 06:03:59PM 0 points

The other reason is that a Friendly Singleton would be totally awesome.

Uh, apparently my awesome is very different from your awesome. What scares me is this "Singleton" thing, not the friendly part.

Comment author: [deleted] 13 July 2013 06:15:23PM 11 points

Hmmm. What is it going to do that is bad, given that it has the power to do the right thing, and is Friendly?

We have inherited some anti-authoritarian propaganda memes from a culture war that is no longer relevant, and those taint the evaluation of a Singleton even though they really don't apply. At least, that's how it felt to me when I thought it through.

Comment author: pop 17 July 2013 03:29:24AM 1 point

Upvoted.

I'm not sure why more people around here aren't concerned about the singleton thing. On some people's part it almost feels like yearning for a god.

Comment author: pop 17 July 2013 03:24:46AM 1 point

The other reason is that a Friendly Singleton would be totally awesome. Like so totally awesome that it would be worth it to try for the awesomeness alone.

Your tone reminded me of super religious folk who are convinced that, say, "Jesus is coming back soon!" and that it'll be "totally awesome".

Comment author: [deleted] 17 July 2013 04:41:33AM 1 point

That's nice.

Your comment reminds me of those internet atheists who are so afraid of being religious that they refuse to imagine how much better the world could be.

Comment author: pop 17 July 2013 05:03:14AM 0 points

I do imagine how much better the world could be, and I actually do want MIRI to succeed. But currently I have low confidence in their future success, so I don't feel "bliss" (if that's the right word).

BTW I'm actually slightly agnostic because of the simulation argument.

Comment author: [deleted] 17 July 2013 05:15:19AM 0 points

I don't feel "bliss" (if that's the right word).

Enthusiasm? Excitement? Hope?

I'm actually slightly agnostic because of the simulation argument.

Yep. I don't take it too seriously, but it's at least coherent to imagine beings outside the universe who could reach in and poke at us.

Comment author: Alejandro1 13 July 2013 06:19:29PM 0 points

But in the current situation (or even a few years from now), would it be possible for a smart kid in a basement to build an AI from scratch? Isn't it something that still requires lots of progress to build on? See my reply to Qiaochu.

Comment author: Manfred 13 July 2013 07:35:32PM 1 point

So will progress just stop for as long as we want it to?

Comment author: Alejandro1 13 July 2013 07:50:18PM 1 point

The question is whether it would be possible to ban further research and stop progress (open, universally accessible, buildable-upon progress) while AGI is still far enough away that an isolated group in a basement would have no chance of achieving it on its own.

Comment author: Manfred 13 July 2013 08:30:53PM 2 points

If by "basement" you mean "anywhere, working in the interests of any organization that wants to gain a technology advantage over the rest of the world," then sure, I agree that this is a good question. So what do you think the answer is?

Comment author: Alejandro1 14 July 2013 02:18:52AM 3 points

I have no idea! I am not a specialist of any kind in AI development. That is why I posted in the Stupid Questions thread asking "has MIRI considered this and made a careful analysis?" instead of making a top-level post saying "MIRI should be doing this". It may seem that in this subthread I am actively arguing for strategy (b), but what I am actually doing is pushing back against what I see as insufficient answers to such an important question.

So... what do you think the answer is?

Comment author: Manfred 14 July 2013 02:40:52AM 0 points

If you want my answers, you'll need to humor me.