JGWeissman comments on Best career models for doing research? - Less Wrong

Post author: Kaj_Sotala 07 December 2010 04:25PM


Comment deleted 09 December 2010 06:09:42PM *  [-]
Comment author: [deleted] 09 December 2010 06:14:28PM 6 points [-]

Most people wouldn't dispute the first half of your comment. What they might take issue with is this:

Yes, that means we have to trust Eliezer.

The problem is that we have to defer to Eliezer's (and, by extension, SIAI's) judgment on such issues. Many of the commenters here think that this is not only bad PR for them, but also a questionable policy for a "community blog devoted to refining the art of human rationality."

Comment author: JGWeissman 09 December 2010 06:25:32PM *  6 points [-]

Most people wouldn't dispute the first half of your comment. What they might take issue with is this:

Yes, that means we have to trust Eliezer.

If you are going to quote and respond to that sentence, which anticipates people objecting to trusting Eliezer to make those judgments, you should also quote and respond to my response to that anticipation (i.e., the next sentence):

But I have no reason to doubt Eliezer's honesty or intelligence in forming those expectations.

Also, I am getting tired of objections framed as predictions that others would make the objections. It is possible to have a reasonable discussion with people who put forth their own objections, explain their own true rejections, and update their own beliefs. But when you are presenting the objections you predict others will make, it is much harder, even if you are personally convinced, to predict that these nebulous others will also be persuaded by my response. So please, stick your own neck out if you want to complain about this.

Comment author: [deleted] 09 December 2010 06:33:31PM 2 points [-]

If you are going to quote and respond to that sentence, which anticipates people objecting to trusting Eliezer to make those judgments, you should also quote and respond to my response to that anticipation (i.e., the next sentence)

That's definitely a fair objection, and I'll answer: I personally trust Eliezer's honesty, and he is obviously much smarter than I am. However, that doesn't mean that he's always right, and it doesn't mean that we should trust his judgment on an issue until it has been discussed thoroughly.

Also, I am getting tired of objections framed as predictions that others would make the objections.

I agree. The above paragraph is my objection.

Comment author: JGWeissman 09 December 2010 07:01:56PM 1 point [-]

However, that doesn't mean that he's always right, and it doesn't mean that we should trust his judgment on an issue until it has been discussed thoroughly.

The problem with a public thorough discussion in these cases is that once you understand the reasons why the idea is dangerous, you already know it, and don't have the opportunity to choose whether to learn about it.

If you trust Eliezer's honesty, then though he may make mistakes, you should not expect him to use this policy as a cover for banning posts as part of some hidden agenda.

Comment author: [deleted] 09 December 2010 07:05:35PM *  3 points [-]

The problem with a public thorough discussion in these cases is that once you understand the reasons why the idea is dangerous, you already know it, and don't have the opportunity to choose whether to learn about it.

That's definitely the root of the problem. In general, though, if we are talking about FAI, then there shouldn't be a dangerous idea. If there is, then it means we are doing something wrong.

If you trust Eliezer's honesty, then though he may make mistakes, you should not expect him to use this policy as a cover for banning posts as part of some hidden agenda.

I don't think he's got a hidden agenda; I'm concerned about his mistakes. Though I'm not astute enough to point them out, I think the LW community as a whole is.

Comment author: JGWeissman 09 December 2010 07:17:09PM 3 points [-]

In general, though, if we are talking about FAI, then there shouldn't be a dangerous idea.

I have a response to this that I don't actually want to say, because it could make the idea more dangerous to those who have heard about it but are currently safe due to not fully understanding it. I find that predicting that this sort of thing will happen makes me reluctant to discuss this issue, which may explain why of those who are talking about it, most seem to think the banning was wrong.

I don't think he's got a hidden agenda; I'm concerned about his mistakes.

Given that there has been one banned post, I think that his mistakes are much less of a problem than overwrought concern about his mistakes.

Comment author: [deleted] 09 December 2010 07:19:59PM 1 point [-]

If you have a reply, please PM me. I'm interested in hearing it.

Comment author: JGWeissman 09 December 2010 07:24:04PM 1 point [-]

Are you interested in hearing it if it does give you a better understanding of the dangerous idea that you then realize is in fact dangerous?

Comment author: [deleted] 09 December 2010 08:41:06PM 0 points [-]

It may not matter anymore, but yes, I would still like to hear it.

Comment author: Vladimir_Nesov 09 December 2010 07:09:57PM *  2 points [-]

In general, though, if we are talking about FAI, then there shouldn't be a dangerous idea. If there is, then it means we are doing something wrong.

Why do you believe that? FAI is full of potential for dangerous ideas. In its full development, it's an idea with the power to rewrite 100 billion galaxies. That's gotta be dangerous.

Comment author: [deleted] 09 December 2010 07:15:14PM 8 points [-]

Let me try to rephrase: correct FAI theory shouldn't have dangerous ideas. If we find that the current version does have dangerous ideas, then this suggests that we are on the wrong track. The "Friendly" in "Friendly AI" should mean friendly.

Comment author: Eliezer_Yudkowsky 09 December 2010 07:20:10PM 8 points [-]

Pretty much correct in this case. Roko's original post was, in fact, wrong; correctly programmed FAIs should not be a threat.

Comment author: Vladimir_Nesov 09 December 2010 07:25:12PM 10 points [-]

(FAIs shouldn't be a threat, but a theory to create a FAI will obviously have at least potential to be used to create uFAIs. FAI theory will have plenty of dangerous ideas.)

Comment author: XiXiDu 09 December 2010 07:40:41PM 5 points [-]

I want to highlight at this point how you think about similar scenarios:

I do think that TORTURE is the obvious option, and I think the main instinct behind SPECKS is scope insensitivity.

That isn't very reassuring. I believe that if you had the choice of either letting a Paperclip maximizer burn the cosmic commons or torture 100 people, you'd choose to torture 100 people. Wouldn't you?

...correctly programmed FAIs should not be a threat.

They are always a threat to some beings, for example beings who oppose CEV or other AIs. Any FAI that would run a human version of CEV would be a potential existential risk to any alien civilisation. If you accept all this possible oppression in the name of what is subjectively friendly, how can I be sure that you don't favor torture for some humans that support CEV, in order to ensure it? After all, you already allow for the possibility that many beings are being oppressed or possibly killed.

Comment deleted 09 December 2010 11:23:52PM *  [-]
Comment deleted 09 December 2010 08:08:04PM *  [-]
Comment author: Jack 09 December 2010 07:21:18PM 1 point [-]

This is certainly the case with regard to the kind of decision theoretic thing in Roko's deleted post. I'm not sure if it is the case with all ideas that might come up while discussing FAI.

Comment author: JGWeissman 09 December 2010 11:08:53PM 2 points [-]

The above deleted comment referenced some details of the banned post. With those details removed, it said:

(Note, this comment reacts to this thread generally, and other discussion of the banning)

The essential problem is that with the (spectacular) deletion of the Forbidden Post, LessWrong turned into the sort of place where posts get disappeared.

I realize that you are describing how people generally react to this sort of thing, but this knee-jerk, stupid reaction is one of the misapplied heuristics we ought to be able to notice and overcome.

So far, one post has been forbidden (not counting spam).

It was not forbidden because it criticized SIAI, other posts have criticized SIAI and were not banned.

It was not forbidden because it discussed torture, other posts have discussed torture and were not banned.

It was not forbidden for being inflammatory, other posts have been inflammatory and were not banned.

It was forbidden for being a Langford Basilisk.

Comment author: David_Gerard 09 December 2010 11:21:17PM 2 points [-]

Strange LessWrong software fact: this showed up in my reply stream as a comment consisting only of a dot ("."), though it appears to be a reply to a reply to me.

Comment author: JGWeissman 09 December 2010 11:31:16PM *  0 points [-]

It also shows up on my user page as a dot. Before I edited it to be just a dot, it showed up in your comment stream and my user page with the original complete content.