Will_Newsome comments on Why We Can't Take Expected Value Estimates Literally (Even When They're Unbiased) - Less Wrong

75 Post author: HoldenKarnofsky 18 August 2011 11:34PM


Comment author: Will_Newsome 21 August 2011 11:46:14AM *  0 points [-]

I think the problem here is that your posting style, to be frank, often obscures your point.

I acknowledge this. But it seems to me that the larger problem is that Eliezer simply doesn't know how to read what people actually say. Less Wrong mostly doesn't either, and humans in general certainly don't. This is a very serious problem with LW-style rationality (and with humanity). There are extremely talented rationalists who do not have this problem; it is an artifact of Eliezer's psychology and not of the art of rationality.

Comment author: Rain 21 August 2011 02:08:28PM 10 points [-]

It's hardly fair to blame the reader when you've got "sentences" like this:

I think that my original comment was at roughly the most accurate and honest level of vagueness (i.e. "aimed largely [i.e. primarily] at doing the technical analysis necessary to determine as well as possible the feasibility and difficulty [e.g. how many Von Neumanns, Turings, and/or Aristotles would it take?] of Friendly AI for various (logical) probabilities of Friendliness [e.g. is the algorithm meta-reflective enough to fall into (one of) some imagined Friendliness attractor basin(s)?]").

Comment author: Will_Newsome 21 August 2011 02:19:24PM 0 points [-]

That was the second version of the sentence; the first one had much clearer syntax and even italicized the answer to Eliezer's subsequent question. It looks the way it does because Eliezer apparently couldn't extract meaning from my original sentence despite its clearly answering his question, so I tried to expand on the relevant points with bracketed concrete examples. Here's the original:

If I had a viable preliminary Friendly AI research program, aimed largely at doing the technical analysis necessary to determine as well as possible the feasibility and difficulty of Friendly AI for various values of "Friendly" [...]

(emphasis in original)

Comment author: Rain 21 August 2011 03:58:56PM *  2 points [-]

Which starts with the word 'if' and fails to have a 'then'.

If you took out 'If I had' and replaced it with 'I would create', then maybe it would be more in line with what you're trying to say?

Comment author: Kaj_Sotala 21 August 2011 08:14:20PM 5 points [-]

Certainly true, but that only means that we need to spend more effort on being as clear as possible.

Comment author: katydee 21 August 2011 12:14:46PM *  4 points [-]

If that's indeed the case (I haven't noticed this flaw myself), I suggest that you write articles (or perhaps commission/petition others to have them written) describing this flaw and how to correct it. Eliminating such a flaw or providing means of averting it would greatly aid LW and the community in general.

Comment author: Will_Newsome 21 August 2011 12:30:43PM *  -2 points [-]

Unfortunately that is not currently possible for many reasons, including some large ones I can't talk about and that I can't talk about why I can't talk about. I can't see any way that it would become possible in the next few years either. I find this stressful; it's why I make token attempts to communicate in extremely abstract or indirect ways with Less Wrong, despite the apparent fruitlessness. But there's really nothing for it.

Unrelated public announcement: People who go back and downvote every comment someone's made, please, stop doing that. It's a clever way to pull information cascades in your direction but it is clearly an abuse of the content filtering system and highly dishonorable. If you truly must use such tactics, downvoting a few of your enemy's top level posts is much less evil; your enemy loses the karma and takes the hint without your severely biasing the public perception of your enemy's standard discourse. Please.

(I just lost 150 karma points in a few minutes and that'll probably continue for a while. This happens a lot.)

Comment author: NancyLebovitz 21 August 2011 01:55:12PM 5 points [-]

I'm sorry to hear that you're up against something so difficult, and I hope you find a way out.

Comment author: Will_Newsome 21 August 2011 02:25:48PM 1 point [-]

Thank you... I think I just need to be more meta. Meta never fails.

Comment author: katydee 21 August 2011 09:45:23PM 7 points [-]

I'm not a big fan of the appeal to secret reasons, so I think I'm going to have pull out of this discussion. I will note, however, that you personally seem to be involved in more misunderstandings than the average LW poster, so while it's certainly possible that your secret reasons are true and valid and Eliezer just sucks at reading or whatever, you may want to clarify certain elements of your own communication as well.

I unfortunately predict that "going more meta" will not be strongly received here.

Comment author: Nisan 21 August 2011 10:31:33PM 7 points [-]

Unfortunately that is not currently possible for many reasons, including some large ones I can't talk about and that I can't talk about why I can't talk about.

Why can't you talk about why you can't talk about them?

Comment author: [deleted] 21 August 2011 01:51:46PM 5 points [-]

Unfortunately that is not currently possible for many reasons, including some large ones I can't talk about and that I can't talk about why I can't talk about.

Are we still talking about improving general reading comprehension? What could possibly be dangerous about that?