
Vaniver comments on Quixey Challenge - Fix a bug in 1 minute, win $100. Refer a winner, win $50. - Less Wrong Discussion

6 Post author: Liron 19 January 2012 07:39PM




Comment author: Vaniver 20 January 2012 12:50:42AM 7 points [-]

Hm. This may be fuzzy memory on my part, but I thought I remembered downvoting this post, seeing it at -5, and now it's at 0 and I haven't downvoted it. I really hope that's fuzzy memory on my part.

Comment author: satt 21 January 2012 06:37:41PM 4 points [-]

If I upvote/downvote comments on an LW page, then close the page a few moments afterwards, sometimes my votes don't register (they're not there if I visit the same page later). If something similar happened to you that might explain why your vote seemed to disappear.

Comment author: Nick_Tarleton 21 January 2012 07:35:50PM *  4 points [-]

I have also seen this bug many times.

This comment thread strikes me as a good example of an anti-pattern I've seen before, that I don't know a name for (close to, but not exactly, privileging the hypothesis), where a conversation slides without explicit comment from reasonably suggesting a bad-case possibility to taking it for granted for no apparent reason.

(disclaimer: I work for Quixey, conflict of interest and all that, but I'm pretty sure I'd be making this exact same comment if I didn't)

Comment author: Vaniver 21 January 2012 11:57:34PM 2 points [-]

That is good to know. I suspect the probability that I closed the page shortly thereafter is only about .2, but that's significantly higher than the prior I put on the LW editing staff removing downvotes, which has significantly decreased my worry.

Comment author: Anubhav 20 January 2012 08:25:13AM *  16 points [-]

That was an alternate universe. As this post was heavily downvoted, hardly any LWers took the challenge, depriving SIAI of the money they'd have gotten by referring successful challengers. Also, because of information cascades, the same thing happened to all future Quixey posts, leading Quixey to eventually stop posting here. Because of the negative word-of-mouth from such incidents, people stopped looking at the LW audience as a set of eyeballs to monetise.

Consequently, the SIAI was deprived of all potential advertising income, and lacked the budget to perfect FAI theory in time. Meanwhile, the Chinese government, after decades of effort, managed to develop a uFAI. Vishwabandhu Gupta of India managed to convince his countrymen that an AI is some sort of intelligence-enhancing ayurvedic wonder-drug that the Chinese had illegally patented. Consequently, the Indians eagerly invaded China, believing that increased intelligence would allow their kids to get into good colleges. This localised conflict blew up into the Artilect War, which killed everyone on the planet.

So please... don't do that again. Just don't. I'm tired of having to travel to an alternate universe every time that happens.

Comment author: Vaniver 20 January 2012 03:00:50PM 3 points [-]

So please... don't do that again. Just don't.

By not wanting advertising on LW, I have doomed humanity? Your sense of perspective is troubling. (You should also be ashamed of the narrative fallacy that follows.)

If the LW community's votes are being overridden somehow, I would at least like the LW editors to be honest about it.

Comment author: Anubhav 21 January 2012 07:51:57AM 0 points [-]

Your sense of perspective is troubling.

Because, clearly, it is impossible for something as huge as millions of lives to depend on an art academy's decision.

You should also be ashamed of the narrative fallacy that follows

O RLY?

Comment author: Vaniver 21 January 2012 11:47:13PM 2 points [-]

Because, clearly, it is impossible for something as huge as millions of lives to depend on an art academy's decision.

Imagine, with every rejection letter the dean of admissions sends out, he has a brief moment of worry: "is this letter going to put someone on the path to becoming a mass murderer?" His sense of perspective would also be troubling, as his ability to predict the difference acceptance will have on his students' lives is insufficient to fruitfully worry about those sorts of events. It's not a statement of impossibility, it's a statement of improbability. Giving undue weight to the example of Hitler is availability bias.

O RLY?

Yes, really. I presume you've read about fictional evidence and the conjunction fallacy? If you want to argue that LW's eyeballs should be monetized, argue that directly! We'll have an interesting discussion out in the open. But assuming that LW's eyeballs should be monetized because you can construct a story in which a few dollars makes the difference between the SIAI succeeding and failing is not rational discourse. Put probabilities on things, talk about values, and we'll do some calculations.

Comment author: Anubhav 22 January 2012 02:00:53AM 3 points [-]

But assuming that LW's eyeballs should be monetized because you can construct a story in which a few dollars makes the difference between the SIAI succeeding and failing is not rational discourse.

I'd have thought that the story being as far-fetched and ludicrous as it is would've made it obvious that I was just fooling around, not making an argument. Apparently that's not actually the case.

My apologies if I accidentally managed to convince someone of the necessity of monetizing LW's eyeballs.

Comment author: Vaniver 22 January 2012 07:53:02PM 2 points [-]

I completely misunderstood your post, then. My apologies as well.

Comment author: Clippy 20 January 2012 04:20:49PM -1 points [-]

As do I. While it is slightly distracting if LessWrong administrators are giving certain posts preferential treatment against community wishes, it is extremely worrisome if an attacker has convinced them to actually falsify voting records, and indicative of a particularly insidious social engineering attack.

Comment author: wedrifid 20 January 2012 04:56:44PM *  0 points [-]

Hm. This may be fuzzy memory on my part, but I thought I remembered downvoting this post, seeing it at -5, and now it's at 0 and I haven't downvoted it. I really hope that's fuzzy memory on my part.

Downvoted the post based on the intervention you described. Normally I'd have upvoted.

Comment author: Vaniver 20 January 2012 05:31:36PM 1 point [-]

I do want to stress that I'm not certain I downvoted the post before I wrote this comment. It's plausible that 5 people upvoted the post because they wanted it to be visible. That's still an intervention I'm uneasy about, but the unease is much lower.

Comment author: DanielVarga 20 January 2012 08:49:43PM *  6 points [-]

At least one of those five people does exist: I found the post at -5 and left it at -4.

Comment author: Anubhav 21 January 2012 07:34:05AM 3 points [-]

Seconded. Found it at -1, upvoted to 0. And it's at -3 now...

Comment author: wedrifid 20 January 2012 05:38:39PM 2 points [-]

That's still an intervention I'm uneasy about, but the unease is much lower.

What intervention remains if the votes were not distorted?

Comment author: Vaniver 20 January 2012 09:48:40PM -1 points [-]

Basically, the remaining intervention would be if anyone was asked to vote the post up, rather than upvoting because they saw the post and thought "I want more of this on LW." I apologize for not making that implication clearer. I've only seen this post at 0 or negative karma (but I'm not tracking it closely), which seems to me like people not wanting it to be negative rather than roughly equal groups liking and disliking it.

Comment author: jsteinhardt 22 January 2012 02:15:47AM 2 points [-]

I upvoted the post because it had negative karma, and was not a post that I thought should be at negative karma.

In general I vote posts/comments toward where I think their karma should be. Thus, for instance, I downvoted Clippy's comment above because I did not think it was so insightful that it merited 20+ karma. I would not have downvoted it if it were at 0 karma.

I assume many people take this approach (it fits in nicely with consequentialism) so this probably explains what you saw.
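The target-karma voting policy described above can be sketched as a simple decision rule. This is purely an illustration of the stated approach, not anything from LessWrong's actual codebase; the function name and signature are invented for the example:

```python
def vote(current_karma: int, target_karma: int) -> int:
    """Vote toward where the karma 'should' be, per the
    consequentialist policy described in the comment above.

    Returns +1 (upvote), -1 (downvote), or 0 (abstain).
    """
    if current_karma < target_karma:
        return 1   # post is below where it should be: upvote
    if current_karma > target_karma:
        return -1  # post is above where it should be: downvote
    return 0       # karma already matches the judgment: abstain

# A post at -5 judged to deserve ~0 gets an upvote:
print(vote(-5, 0))   # -> 1
# A comment at 20+ judged to deserve ~0 gets a downvote:
print(vote(20, 0))   # -> -1
```

One consequence of this rule, as noted in the thread, is that several independent voters all applying it can move a post from -5 back toward 0 with no coordination, which would explain the vote changes observed without any editorial intervention.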