Will_Newsome comments on How to avoid dying in a car crash - Less Wrong

75 Post author: michaelcurzi 17 March 2012 07:44PM




Comment author: Will_Newsome 18 March 2012 06:27:07AM 1 point [-]

I am also confused. He was downvoted pretty quickly, perhaps because he was encouraging my skeptic-slandering or encouraging off-topic discussion?

Comment author: [deleted] 18 March 2012 06:28:28AM 3 points [-]

Yeah, but your response was (forgive me for saying uncharacteristically) coherent and reasonable.

Comment author: Will_Newsome 18 March 2012 06:42:47AM 4 points [-]

(Needless self-disclosing side comment:) I'm probably more coherent & reasonable because I'm buzzed & caffeinated and am commenting with the aim of conversing with people rather than the aim of avoiding culpability for not even having tried to warn people that they are making predictably-retrospectively-stupid mistakes. I don't normally have enough motivational resources to actually try to talk to people, alas.

Comment author: NancyLebovitz 19 March 2012 01:35:03PM 1 point [-]

When you were a kid, what sort of communication environment were you living in?

Comment author: Will_Newsome 19 March 2012 02:08:25PM *  2 points [-]

communication environment

Is this a technical term? Google isn't helping.

(I suspect that whatever motivational quirks I have are largely the result of genetic/neurological predisposition, at least up until around the age of 16 when I got a girlfriend, which is when my motivational system started to get really messed up. (Depressed girlfriend who was convinced I didn't love her; I had to meet impossible standards, had to reliably guess in advance which impossible standards I would be expected to have already met, that kinda thing. Resulted in (double-negative?-)obsession with things like this. But I'm not entirely sure what sort of data you're looking to get with your question.))

Comment author: NancyLebovitz 19 March 2012 03:20:00PM 2 points [-]

"Communication environment" is something I came up with on the fly, it's not a technical term, though maybe it should be.

I was thinking about how accuracy, trust, and kindness were handled around and with you when you were a kid.

The thing is, being straightforward with people is apparently very hard work for you, and I'm wondering whether straightforwardness was ignored and/or punished when you were forming basic emotional reactions.

Not the same problem you've got: transcript of Ira Glass talking with Mike Daisey. Daisey had done a substantial amount of lying about conditions in an Apple factory in China, and Ira Glass' radio show didn't do quite enough checking to catch it, so a bunch of falsehoods went out nationally.

What caught my eye was how impossible it was for Daisy to prioritize facts over emotional effects, and I wonder if an emotional pattern like that happens by accident.

It's possible that I'm underestimating neurological predisposition, though.

Comment author: Will_Newsome 19 March 2012 04:07:52PM 2 points [-]

transcript of Ira Glass talking with Mike Daisey.

That is a very fascinating case study in how people try to get out of double-bind moral obligations where one has to choose between an explicit or an implicit lie, especially when negative-sum signalling games have resulted in an equilibrium where explicitly telling the truth would subjectively seem as if it was almost guaranteed to convey zero or even negative information (and thus result in the explicit-truth-teller's moral blameworthiness). I'm disappointed that Act Three wasn't explicitly about that, and that in Act Two Ira doesn't help Daisey explain the nature of the conflicting moral obligations... but I suppose that that angle would have gone over the heads of the listeners, and to most listeners it would have seemed as if Ira/NPR was trying to save face with philosophical mumbo-jumbo, so Ira/NPR was forced to go the guilt-tripping route. It was really interesting anyway I guess.

I was thinking about how accuracy, trust, and kindness were handled around and with you when you were a kid.

I don't have any reason to suspect anything abnormal, but I have very little random access to my memories.

Comment author: Will_Newsome 19 March 2012 05:18:20PM 0 points [-]

In aqua veritas, in vino sanitas. ("In water, truth; in wine, health.")

Comment author: wedrifid 18 March 2012 06:59:04AM *  0 points [-]

He was downvoted pretty quickly, perhaps because he was encouraging my skeptic-slandering or encouraging off-topic discussion?

Downvoted to -1 then back up at 0 when I edited/deleted/recommented. I attributed the early vote to just something personal against either one of us. We both get those from time to time but they tend to be averaged out given time - at least my ones do. Yours have a bit more weight behind them.

Comment author: Will_Newsome 18 March 2012 08:28:17AM 4 points [-]

I've noticed such user-specific downvotes tend to be a lot more common lately, not just for old folk like us but new folk too. E.g. User:ABrooks made a post about FAI that didn't fit in with local ideas, and consequently almost all of his comments were immediately downvoted. Only -1, but that's enough to significantly bias folks' intuitions about how charitable they should be when reading a comment. Various people have noticed weird voting patterns recently, normally in the form of heavy downvoting of seemingly relatively innocuous comments. I've also noticed that "yay our side, boo their side" comments tend to be very highly upvoted, more so than a year or two ago. Nothing to do about it, but it might be worth a discussion post along the lines of "LessWrong has become somewhat more stupid lately, don't take the downvotes too personally". But probably not. (It's not like LessWrong was ever that elite anyway; too much evaporative cooling which resulted in a lot of people who strongly agree with Eliezer even when he's wrong and even when they don't know why he's right. (I used to lean in that direction.) But it's still kinda sad; there aren't any publicly open alternatives.)

Comment author: wedrifid 18 March 2012 08:55:46AM *  2 points [-]

User:ABrooks made a post about FAI that didn't fit in with local ideas, and consequently almost all of his comments were immediately downvoted.

Link? I don't see a post by him.

Edit: Found it. It's one I downvoted, but without it having enough impact on me to even remember that ABrooks is a user. I believe I stopped reading after the first couple of paragraphs, after it introduced a premise that seemed fundamentally absurd. Something to do with it not being theoretically possible to create an AI without teaching it to think through interaction. (I mean... what? Identify the thing that is an AI after it has been taught to think, then combine bits of matter in such a way that you have that AI. Basic physical reductionism!)

I'm a little surprised that he got mass-downvoted (i.e. on other comments, not that particular post). For that matter I'm a little surprised that the specific post got significantly downvoted. Usually things far more stupid than that stay positive*. Did he get into personal bickering with a specific individual at all? That's what I usually associate with mass downvotes.

* "Usually things far more stupid than that stay positive" of course really means "of posts that are far more stupid than that immediately spring to my mind most are those that are not downvoted."

Comment author: Eugine_Nier 18 March 2012 05:54:05PM 1 point [-]

Something to do with it not being theoretically possible to create an AI without teaching it to think through interaction. (I mean... what? Identify the thing that is an AI after it has been taught to think, then combine bits of matter in such a way that you have that AI. Basic physical reductionism!)

Well, one could make a computational complexity argument that there is no way to "identify the thing that is an AI after it has been taught to think" other than actually interacting with it.

Comment author: TobyBartels 31 March 2012 03:47:20AM 0 points [-]

Sure, but once you do so, then you can build another that you didn't interact with.

On the other hand, how much variation do you need to introduce before you can declare that the second copy is a different intelligence than the one that you copied it from? And how sure can you be that it's still an AI after this variation? So there's an argument to be made there, although I'm far from convinced for now.

Comment author: wedrifid 18 March 2012 08:49:33AM *  0 points [-]

Nothing to do about it, but it might be worth a discussion post along the lines of "LessWrong has become somewhat more stupid lately, don't take the downvotes too personally".

I have the reverse message. I say be willing to just take them personally when appropriate. I don't really mind people having a personal problem with me, but if people sincerely negatively evaluate comments that I consider high quality, then that distresses me. After all, if I hear "Fuck you! You're a dick." then the subject matter is subjective and they may have a point. If I hear "You're wrong!" then I may, after double checking, actually have to evaluate the accuser as being poor at thinking. Too much of that just leads to contempt and bitterness.

It's not like LessWrong was ever that elite anyway; too much evaporative cooling which resulted in a lot of people who strongly agree with Eliezer even when he's wrong and even when they don't know why he's right.

(Old-Timer Topper:) That's nutthin! Do you actually think kids these days have read enough rationality literature - from Eliezer or otherwise - for them to be able to even know which beliefs to take on faith without knowing why? I don't see much in the way of (correct) application of rationality principles, so I'm in no position to be declaring anything done without basis.