
Manfred comments on "Stupid" questions thread - Less Wrong Discussion

40 Post author: gothgirl420666 13 July 2013 02:42AM




Comment author: Manfred 13 July 2013 07:35:32PM 1 point

So will progress just stop for as long as we want it to?

Comment author: Alejandro1 13 July 2013 07:50:18PM 1 point

The question is whether it would be possible to ban further research and stop progress (open, universally accessible, buildable-upon progress) while AGI is still far enough away that an isolated group in a basement would have no chance of achieving it on its own.

Comment author: Manfred 13 July 2013 08:30:53PM 2 points

If by "basement" you mean "anywhere, working in the interests of any organization that wants to gain a technology advantage over the rest of the world," then sure, I agree that this is a good question. So what do you think the answer is?

Comment author: Alejandro1 14 July 2013 02:18:52AM 3 points

I have no idea! I am not a specialist of any kind in AI development. That is why I posted in the Stupid Questions thread asking "has MIRI considered this and made a careful analysis?" instead of making a top-level post saying "MIRI should be doing this". It may seem that in this subthread I am actively arguing for strategy (b), but what I am really doing is pushing back against what I see as insufficient answers to such an important question.

So... what do you think the answer is?

Comment author: Manfred 14 July 2013 02:40:52AM 0 points

If you want my answers, you'll need to humor me.