
NancyLebovitz comments on "Stupid" questions thread - Less Wrong Discussion

Post author: gothgirl420666 13 July 2013 02:42AM 40 points


Comment author: Alejandro1 13 July 2013 04:35:28PM 7 points

It seems to me that there are basically two approaches to preventing a UFAI intelligence explosion: (a) making sure that the first intelligence explosion is an FAI one instead; (b) making sure that an intelligence explosion never occurs. The first one involves solving (with no margin for error) the philosophical/ethical/logical/mathematical problem of defining FAI, and in addition the sociological/political problem of doing it "in time", convincing everyone else, and ensuring that the first intelligence explosion occurs according to this resolution. The second one involves just the sociological/political problem of convincing everyone of the risks and banning/discouraging AI research "in time" to avoid an intelligence explosion.

Naively, it seems to me that the second approach is more viable--it seems comparable in scale to something between stopping the use of CFCs (fairly easy) and stopping global warming (very difficult, but it is premature to say impossible). At any rate, it sounds easier than solving (over a few years/decades) so many hard philosophical and mathematical problems, with no margin for error and under time pressure to do it before a UFAI is developed.

However, it seems (from what I read on LW and found while quickly browsing the MIRI website; I am not particularly well informed, hence writing this in the Stupid Questions thread) that most of MIRI's efforts are on the first approach. Has there been a formal argument for why it is preferable, or are there efforts on the second approach that I am unaware of? The only discussion I found was Carl Shulman's "Arms Control and Intelligence Explosions" paper, but it is brief and nothing like a formal analysis comparing the benefits of each strategy. I am worried the situation might be biased by the LW/MIRI kind of people being more interested in (and seeing as more fun) the progress on the timeless philosophical problems necessary for (a) than the political coalition-building and propaganda campaigns necessary for (b).

Comment author: NancyLebovitz 13 July 2013 05:38:35PM 6 points

There's a third alternative, though it's quite unattractive: damaging civilization to the point that AI is impossible.

Comment author: Jonathan_Graehl 16 July 2013 06:37:25AM 0 points

Given that, out of billions of people, a few with extremely weird brains are likely to see evil-AI risk as nigh, one of them is bound to push the red button way before I or anyone else would reach for it.

So I hope red technology-resetting buttons don't become widely available.

This suggests a principle: I have a duty to be conservative in my own destroy-the-world-to-save-it projects :)

Comment author: timtyler 14 July 2013 11:36:29AM -1 points

Isn't that already covered by option (b)?

Comment author: arborealhominid 14 July 2013 12:09:07AM -1 points

And there are, in fact, several people proposing this as a solution to other anthropogenic existential risks. Here's one example.

Comment author: NancyLebovitz 14 July 2013 02:07:42AM 3 points

I would like to think that people whose FAQ opens each answer in a new tab aren't competent enough to do much of anything. This is probably wishful thinking.

Just to mention a science fiction treatment: John Barnes' Daybreak books. I think the books are uneven, but the presentation of civilization-wrecking as an internet artifact is rather plausible.

Comment author: hairyfigment 14 July 2013 06:06:46AM 0 points

Not only that, they're talking about damaging civilization, and they have an online FAQ in the first place. They fail Slytherin forever.

Comment author: Viliam_Bur 14 July 2013 11:49:34AM 2 points

My model of humans says that some people will read their page, become impressed, and join them. I don't know how many, but I think the only thing that stops millions of people from joining them is that there are already thousands of crazy ideas out there competing with each other, so the crazy people remain divided.

Also, the website connects destroying civilization with many successful applause lights. (Actually, it seems to me like a coherent extrapolation of them, although that could be just my mindkilling speaking.) That should make it easier to get dedicated followers.

Destroying civilization is too big a goal for them, but they could do some serious local damage.

Comment author: hairyfigment 14 July 2013 08:29:34PM -1 points

"My model of humans says that some people will read their page, become impressed and join them"

And my model of the government says this has negative expected value overall.

Comment author: D_Malik 15 July 2013 10:53:16PM 1 point

An example of this: the Earth Liberation Front and the Animal Liberation Front were mostly dismantled by FBI infiltrators as soon as they started getting media attention.

Comment author: NancyLebovitz 14 July 2013 01:35:18PM 1 point

Our standards for Slytherin may be too high.

Comment author: hairyfigment 14 July 2013 08:28:58PM -2 points

I don't get it. None of us here set the standards, unless certain donors have way more connections than I think they do.

Comment author: NancyLebovitz 15 July 2013 04:48:26AM 2 points

You're the one who mentioned failing Slytherin forever.

My actual belief is that people don't have to be ideally meticulous to do a lot of damage.