XiXiDu comments on Open Thread: July 2010, Part 2 - Less Wrong
Something really crazy is going on here.
You people have fabricated a fantastic argument for all kinds of wrongdoing and idiotic decisions: "it could increase the chance of an AI going wrong...".
"I deleted my comment because it was maybe going to increase the chance of an AI going wrong..."
"Hey, I had to punch that guy in the face, he was going to increase the chance of an AI going wrong by uttering something stupid..."
"Sorry, I had to exterminate those people because they were going to increase the chance of an AI going wrong."
I'm beginning to wonder whether the bigger risk is not unfriendly AI but rather EY and this movement.
Why would I care about some feverish dream of a galactic civilization if we have to become our own oppressors, and the oppressors of others? Screw you. That's not what I want. Either I win like this, or I don't care to win at all. What is winning worth, what is left of a victory, if you have to relinquish everything you value? That's not winning; it's worse than losing. It means surrendering to mere possibilities, preemptively.
This is why deleting the comments was the bigger risk: doing so makes people think (incorrectly) that EY and this movement are the bigger risk, instead of unfriendly AI.
The problem is, are you people sure you want to take this route? If you are serious about all this, what would stop you from killing a million people if your probability estimates showed that there was a serious risk posed by those people?
If you read this comment thread you'll see what I mean and what danger this movement might pose: 'follow Eliezer', 'donating as much as possible to SIAI', 'kill a whole planet', 'afford to leave one planet's worth', 'maybe we could even afford to leave their brains unmodified'... lesswrong.com sometimes makes me feel more than a bit uncomfortable, especially if you read between the lines.
Yes, you might be right about all the risks in question. But you might be wrong about the means of stopping them.
I'm not sure if this was meant for me; I agree with you about free speech and not deleting the posts. I don't think it means EY and this movement are a great danger, though. Deleting the posts was the wrong decision, and hopefully it will be reversed soon, but I don't see that as indicating that anyone would go out and kill people to help the Singularity occur. If there really were a Langford Basilisk, say, a joke that made you die laughing, I would want it removed.
As to that comment thread: Peer is a very cool person and a good friend, but he is a little crazy and his beliefs and statements shouldn't be taken to reflect anything about anyone else.
I know; it wasn't my intention to discredit Peer, and I quite like his ideas. I'm probably crazier than he is anyway.
But if I can come up with such conclusions, who else will? Also, why isn't anyone out to kill people, and why won't anyone be? I'm serious: why not? Just imagine EY found out that we could be reasonably sure that, for example, Google was about to let loose a rogue AI. Given how the LW audience is inclined to act upon 'mere' probability estimates, wouldn't it be appropriate to bomb Google, if that were the only way to stop them in time from turning the world into a living hell? And isn't this meme, given the right people and circumstances, a great danger? Sure, my saying that EY might be a greater danger was nonsense, said just to provoke a response. By definition, not much could be worse than uFAI.
This incident is simply a good situation to extrapolate from. If a thought experiment can be deemed dangerous enough not merely to be censored and deleted, but for people to be told not even to seek knowledge of it, much less discuss it, I have to wonder about the possible reaction to an imminent and tangible danger.
Heh, that makes Roko's scenario similar to the Missionary Paradox: if only those who know about God but don't believe go to hell, it's harmful to spread the idea of God. (As I understand it, this doesn't come up because most missionaries think you'll go to hell even if you don't know about the idea of God.)
But I don't think any God is supposed to follow a human CEV; most religions seem to think it's the other way around.
The more recent analysis I've read says that people pretty much become suicide bombers for nationalist reasons, not religious reasons.
I suppose that "There should not be American bases on the sacred soil of Saudi Arabia" is a hybrid of the two, and so might be "I wanted to kill because Muslims were being hurt"; it's a matter of group identity more than "Allah wants it".
I don't have specifics for the 9/11 hijackers.
Either I misunderstand CEV, or the above statement re: the Abrahamic god following CEV is false.
Coherent Extrapolated Volition
This is exactly the argument religious people use to excuse any shortcomings of their personal FAI. Namely: their personal FAI knows better than you what's best for you AND everyone else.
What average people do is follow what is being taught here on LW: they decide based on their priors. Their probability estimates tell them that their FAI is likely to exist, and they make up excuses for extraordinary decisions based on its possible existence. That is, they support their FAI while trying to inhibit other uFAIs, all in the best interest of the world at large.
Yahweh and the associated moral system are far from incomprehensible if you know the cultural context of the Israelites. It's a recognizably human morality, just a brutal one obsessed with purity of various sorts.
The link and quotation you posted do not seem to back up your argument that the Abrahamic god follows CEV. Could you clarify?
If an AI does what Roko suggested, it's not friendly. We don't know what, if anything, CEV will output, but I don't see any reason to think CEV would enact Roko's scenario.
Roko thinks (or thought) it would. I do too. Can't argue it in detail here, sorry.
Until about a month ago, I would have agreed, but some posts I have since read on LW made me update the probability of CEV wanting that upwards.
Really, please explain (or PM me if it would require breaking the gag rule on Roko's scenario). Why would CEV want that?
Because 'CEV' must be instantiated on a group of agents (usually humans). Some humans are assholes. So for some value of aGroup, CEV<aGroup> does assholish things. Hopefully the group of all humans doesn't create a CEV that makes FAI<CEV<all humans>> an outright uFAI from our perspective, but we certainly shouldn't count on it.
If you believe in moral progress (and CEV seems to rely on that position), then there's every reason to think that future-society would want to make changes to how we live, if future-society had the capacity to make that type of intervention.
In short, wouldn't you change the past to prevent the occurrence of chattel slavery if you could? (If you don't like that example, substitute preventing the October revolution or whatever example fits your preferences).
That is not the argument that caused stuff to be deleted from Less Wrong! Nor is it true that leaving it visible would increase the chance of an AI going wrong. The only plausible scenario where information might be deleted on that basis is if someone posted designs or source code for an actual working AI, and in that case much more drastic action would be required.
I stand corrected.
Maybe you should read the comments in question before you make this sort of post?