XiXiDu comments on Open Thread: July 2010, Part 2 - Less Wrong

6 Post author: Alicorn 09 July 2010 06:54AM


Comment author: XiXiDu 05 August 2010 09:56:27AM *  25 points [-]

Something really crazy is going on here.

You people have fabricated a fantastic argument for all kinds of wrongdoing and idiotic decisions: "it could increase the chance of an AI going wrong..."

"I deleted my comment because it was maybe going to increase the chance of an AI going wrong..."

"Hey, I had to punch that guy in the face, he was going to increase the chance of an AI going wrong by uttering something stupid..."

"Sorry, I had to exterminate those people because they were going to increase the chance of an AI going wrong."

I'm beginning to wonder whether the bigger risk might be not unfriendly AI but rather EY and this movement.

Why would I care about some feverish dream of a galactic civilization if we have to turn into our own oppressors, and those of others? Screw you. That's not what I want. Either I win like this, or I don't care to win at all. What is winning worth, what is left of a victory, if you have to relinquish everything you value? That's not winning; it's worse than losing. It means surrendering to mere possibilities, preemptively.

Comment author: Blueberry 05 August 2010 10:46:58AM 15 points [-]

This is why deleting the comments was the bigger risk: doing so makes people think (incorrectly) that EY and this movement are the bigger risk, instead of unfriendly AI.

Comment author: XiXiDu 05 August 2010 11:09:20AM *  11 points [-]

The problem is: are you people sure you want to take this route? If you are serious about all this, what would stop you from killing a million people if your probability estimates showed that those people posed a serious risk?

If you read this comment thread you'll see what I mean and what danger this movement might pose: 'follow Eliezer', 'donating as much as possible to SIAI', 'kill a whole planet', 'afford to leave one planet's worth', 'maybe we could even afford to leave their brains unmodified'... lesswrong.com sometimes makes me feel more than a bit uncomfortable, especially if you read between the lines.

Yes, you might be right about all the risks in question. But you might be wrong about the means of stopping them.

Comment author: Blueberry 06 August 2010 09:59:19AM *  6 points [-]

I'm not sure if this was meant for me; I agree with you about free speech and not deleting the posts. I don't think it means EY and this movement are a great danger, though. Deleting the posts was the wrong decision, and hopefully it will be reversed soon, but I don't see that as indicating that anyone would go out and kill people to help the Singularity occur. If there really were a Langford Basilisk, say, a joke that made you die laughing, I would want it removed.

As to that comment thread: Peer is a very cool person and a good friend, but he is a little crazy and his beliefs and statements shouldn't be taken to reflect anything about anyone else.

Comment author: XiXiDu 06 August 2010 10:40:08AM *  13 points [-]

I know; it wasn't my intention to discredit Peer. I quite like his ideas. I'm probably more crazy than he is anyway.

But if I can come up with such conclusions, who else will? And why isn't anyone out to kill people, or going to be? I'm serious: why not? Just imagine EY found out that we could be reasonably sure that, for example, Google would soon let loose a rogue AI. Given how the LW audience is inclined to act on 'mere' probability estimates, why wouldn't it be appropriate to bomb Google, if that were the only way to stop them in time from turning the world into a living hell? And how is this meme, given the right people and circumstances, not a great danger? Sure, my saying that EY might be a greater danger was nonsense, said just to provoke a response. By definition, not much could be worse than uFAI.

This incident is simply a good situation to extrapolate from. If a thought experiment can be deemed dangerous enough not just to be censored and deleted, but for people to be told not even to seek knowledge of it, much less discuss it, I wonder about the possible reaction to an imminent and tangible danger.

Comment deleted 23 September 2010 05:21:40AM [-]
Comment deleted 24 September 2010 06:42:21AM [-]
Comment deleted 24 September 2010 06:59:47AM [-]
Comment deleted 06 August 2010 11:40:19AM *  [-]
Comment deleted 06 August 2010 01:49:02PM [-]
Comment deleted 06 August 2010 01:56:18PM [-]
Comment deleted 06 August 2010 10:02:39PM [-]
Comment deleted 06 August 2010 11:58:26AM *  [-]
Comment author: Blueberry 06 August 2010 12:32:53PM *  7 points [-]

Heh, that makes Roko's scenario similar to the Missionary Paradox: if only those who know about God but don't believe go to hell, it's harmful to spread the idea of God. (As I understand it, this doesn't come up because most missionaries think you'll go to hell even if you don't know about the idea of God.)

But I don't think any God is supposed to follow a human CEV; most religions seem to think it's the other way around.

Comment author: NancyLebovitz 06 August 2010 02:00:27PM 4 points [-]

The more recent analysis I've read says that people pretty much become suicide bombers for nationalist reasons, not religious reasons.

I suppose that "There should not be American bases on the sacred soil of Saudi Arabia" is a hybrid of the two, and so might be "I wanted to kill because Muslims were being hurt"-- it's a matter of group identity more than "Allah wants it".

I don't have specifics for the 9/11 bombers.

Comment author: katydee 06 August 2010 12:10:55PM 0 points [-]

Either I misunderstand CEV, or the above statement re: the Abrahamic god following CEV is false.

Comment author: XiXiDu 06 August 2010 12:19:37PM 2 points [-]

Coherent Extrapolated Volition

Long distance: An extrapolated volition your present-day self finds incomprehensible; not outrageous or annoying, but blankly incomprehensible.

This is exactly the argument religious people use to excuse any shortcomings of their personal FAI: namely, that their personal FAI knows better than you what's best for you AND everyone else.

Average people do just what is taught here on LW: they decide based on their priors. Their probability estimates tell them that their FAI is likely to exist, and they make up excuses for extraordinary decisions based on its possible existence. That is, they support their FAI while trying to inhibit other uFAIs, all in the best interest of the world at large.

Comment author: orthonormal 06 August 2010 09:57:49PM 1 point [-]

Yahweh and the associated moral system are far from incomprehensible if you know the cultural context of the Israelites. It's a recognizably human morality, just a brutal one obsessed with purity of various sorts.

Comment author: katydee 06 August 2010 09:50:37PM 1 point [-]

The link and quotation you posted do not seem to back up your argument that the Abrahamic god follows CEV. Could you clarify?

Comment author: Blueberry 06 August 2010 12:30:14PM 0 points [-]

If an AI does what Roko suggested, it's not friendly. We don't know what, if anything, CEV will output, but I don't see any reason to think CEV would enact Roko's scenario.

Comment author: cousin_it 06 August 2010 12:33:45PM *  0 points [-]

Roko thinks (or thought) it would. I do too. Can't argue it in detail here, sorry.

Comment author: [deleted] 03 March 2012 10:26:01PM 0 points [-]

Until about a month ago, I would have agreed, but some posts I have since read on LW made me update the probability of CEV wanting that upwards.

Comment author: Blueberry 25 March 2012 01:20:26AM 0 points [-]

Really, please explain (or PM me if it would require breaking the gag rule on Roko's scenario). Why would CEV want that?

Comment author: wedrifid 25 March 2012 02:40:58AM *  1 point [-]

Why would CEV want that?

Because 'CEV' must be instantiated on a group of agents (usually humans). Some humans are assholes. So for some value of aGroup, CEV<aGroup> does assholish things. Hopefully the group of all humans doesn't create a CEV that makes FAI<CEV<all humans>> an outright uFAI from our perspective, but we certainly shouldn't count on it.
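
(Editorial aside: wedrifid's type-parameter notation can be sketched as a toy program. This is purely illustrative, under the crude assumption that a group's "volition" is a single number aggregated by averaging; it is in no way a real CEV construction, only a picture of how the output depends entirely on which group the process is instantiated over.)

```python
# Toy sketch: an "extrapolated volition" parameterized by a group of agents.
# The aggregation rule (a simple average) and the numeric "value" field are
# stand-ins, not anything from the actual CEV proposal.
def cev(group):
    """Return the group's aggregate preference as the mean of member values."""
    return sum(agent["value"] for agent in group) / len(group)

altruists = [{"value": 1.0}, {"value": 0.9}]
assholes = [{"value": -1.0}, {"value": -0.8}]

print(cev(altruists))  # positive: CEV<altruists> looks benign from our view
print(cev(assholes))   # negative: CEV<assholes> does assholish things
```

The point of the sketch is just that `cev` is not a constant: change `aGroup` and you change the result, which is why CEV<all humans> carries no guarantee of being friendly from any particular subgroup's perspective.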

Comment deleted 25 March 2012 01:25:59AM *  [-]
Comment author: TimS 25 March 2012 02:58:17AM 0 points [-]

If you believe in moral progress (and CEV seems to rely on that position), then there's every reason to think that future-society would want to make changes to how we live, if future-society had the capacity to make that type of intervention.

In short, wouldn't you change the past to prevent the occurrence of chattel slavery if you could? (If you don't like that example, substitute preventing the October revolution or whatever example fits your preferences).

Comment deleted 05 August 2010 11:22:40AM [-]
Comment deleted 05 August 2010 11:41:58AM [-]
Comment author: jimrandomh 05 August 2010 02:11:01PM 0 points [-]

If you take this incident to its extreme, the important question is: what will people be willing to do in the future based on the argument "it could increase the chance of an AI going wrong..."?

That is not the argument that caused stuff to be deleted from Less Wrong! Nor is it true that leaving it visible would increase the chance of an AI going wrong. The only plausible scenario where information might be deleted on that basis is if someone posted designs or source code for an actual working AI, and in that case much more drastic action would be required.

Comment deleted 05 August 2010 02:52:18PM [-]
Comment deleted 05 August 2010 03:09:32PM [-]
Comment author: wedrifid 05 August 2010 03:13:03PM 2 points [-]

I stand corrected.

Comment deleted 05 August 2010 11:01:28AM [-]
Comment author: katydee 06 August 2010 02:08:17AM -2 points [-]

Maybe you should read the comments in question before you make this sort of post?

Comment deleted 05 August 2010 07:50:27PM [-]
Comment deleted 05 August 2010 02:03:25PM [-]
Comment deleted 05 August 2010 02:41:29PM [-]
Comment deleted 05 August 2010 03:25:45PM [-]