Comment author: David_Gerard 28 October 2012 11:04:03AM *  1 point [-]

Well, for one thing, Jade appears to be a "she". But never mind, I'm sure it'll all work out fine.

Comment author: KPier 28 October 2012 08:34:45PM 4 points [-]

Fixed, sorry! (I'm female and that mistake doesn't bother me at all, but I know it really annoys some people. I'll be more careful in future.)

I completely agree that characterizing RW as contributing to existential risk is absurd.

Comment author: David_Gerard 28 October 2012 08:50:20AM *  4 points [-]

This article is a response to this comment, which was actually mostly about this comment. Posting an entire article in response to half of that comment does strike me as an overreaction. (I'd be interested in Konkvistador's similar-length response to Jade's comment, though; there's a body of work there raising quite apposite concerns about problems with LW as a social environment - specifically, the existing real world problem of creepers at LW meetups - that won't disappear by merely downvoting them.)

Comment author: KPier 28 October 2012 09:35:14AM *  1 point [-]

Thanks for linking to the context! In fairness, though, if people are citing RationalWiki as proof that LessWrong has a "reputation", then devoting a discussion-level post to it doesn't strike me as excessive.

(On a related note: I hadn't read Jade's comments, but I did after you flagged them as interesting; they struck me as totally devoid of value. Would you mind explaining what you think the valid concern he/she's expressing is?)

Comment author: David_Gerard 27 October 2012 11:39:53PM *  9 points [-]

Two guys, fwiw. LW burnouts have also been showing up. Many RW regulars quite like LW (and particularly Yvain), though the apparently-silly bits are in fact regarded as silly.

The only reason this article we're commenting on exists is because RW - which is piss-insignificant - is the only place on the Internet that pays LW even that much attention. Insofar as this is a problem, the problem is that no-one else pays LW even that much attention. (Of course, the question is then whether LW actually wants that attention, because press coverage in general is fundamentally shit and is really not worth touting for unless you have an actual thing you want it to publicise.)

LW paying RW this much attention while also claiming that the entire future of human value itself is at stake looks on the surface like a failure of apportionment of cognitive resources, but perhaps I've missed something.

Comment author: KPier 28 October 2012 06:38:32AM 5 points [-]

LW paying RW this much attention while also claiming that the entire future of human value itself is at stake looks on the surface like a failure of apportionment of cognitive resources, but perhaps I've missed something.

What do you mean by "this much attention"? If Konkvistador's links at the top are reasonably comprehensive (and a quick search doesn't turn up much more), there have been 2 barely-upvoted discussion posts about RW in four years, which hardly seems like much attention. For comparison, LW has devoted several times as much energy to dating advice.

Is there a lot of discussion of RW that I'm missing, or are you claiming that even two posts in Discussion is totally excessive?

Comment author: thomblake 24 October 2012 03:45:53PM 2 points [-]

For the curious, you should be indifferent to one- or two-boxing when Omega predicts your response 50.05% of the time. If Omega is just perceptibly better than chance, one-boxing is still the way to go.

Now I wonder how good humans are at playing Omega.

Comment author: KPier 24 October 2012 11:29:48PM 0 points [-]

... and if your utility scales linearly with money up to $1,001,000, right?
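The 50.05% indifference point thomblake cites can be verified with a quick expected-value calculation. This is a sketch assuming the standard Newcomb payoffs ($1,000 in the transparent box, $1,000,000 in the opaque box iff Omega predicted one-boxing) and, per KPier's caveat, utility linear in money:

```python
# Standard Newcomb payoffs (assumed): the transparent box always holds
# $1,000; the opaque box holds $1,000,000 iff Omega predicted one-boxing.
SMALL, BIG = 1_000, 1_000_000

def ev_one_box(p):
    # Omega predicts correctly with probability p, so if you one-box,
    # the opaque box is full with probability p.
    return p * BIG

def ev_two_box(p):
    # If you two-box, Omega predicted that with probability p, so the
    # opaque box is empty with probability p; you always get the $1,000.
    return SMALL + (1 - p) * BIG

# Indifference: p*BIG = SMALL + (1-p)*BIG  =>  p = (SMALL + BIG) / (2*BIG)
p_star = (SMALL + BIG) / (2 * BIG)
print(p_star)  # 0.5005, i.e. the 50.05% accuracy thomblake mentions
```

At any accuracy above `p_star`, `ev_one_box` exceeds `ev_two_box`, which is why one-boxing wins as soon as Omega is just perceptibly better than chance (against a 50.05% threshold, not 50%, because of the guaranteed $1,000).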

Comment author: Pablo_Stafforini 11 October 2012 12:05:11AM *  11 points [-]

Thanks for the feedback. I wondered whether I should post a link to the Pew study. I decided in favor of it because I assumed that news about changes in religious belief over time would be of interest to rationalists, but if others agree that articles of this sort do not belong here I'd be happy to remove it. Thanks again.

Comment author: KPier 11 October 2012 02:04:49AM 9 points [-]

I don't think there's anything wrong with the topic, if it comes with a little bit of discussion along the lines of palladius's comment below, or along the lines of "What evidence would convince us that the sanity waterline is actually rising, as opposed to just more people being raised non-religious?"

It would be very interesting to see this study in the context of trendlines for other popular sanity-correlated topics, such as belief in evolution, disbelief in ghosts, non-identification with a political party, knowledge about GMOs, etcetera, even though there are lots and lots of confounding variables.

A single study on its own, though, without any commentary about rationality, probably does not belong on LessWrong.

Comment author: [deleted] 06 September 2012 03:53:40PM *  0 points [-]

Eliezer2000 is starting to think inside the black box. His reasons for pursuing this course of action—those don't matter at all. link

When we last left Eliezer2000, he was just beginning to investigate the question of how to inscribe a morality into an AI. His reasons for doing this don't matter at all, except insofar as they happen to historically demonstrate the importance of perfectionism. link

That's two instances of Eliezer placing no moral value "at all" on his own motives in his pursuit of AI morality. Not necessarily a contradiction, but less elegant than it might be.

Comment author: KPier 07 September 2012 01:48:53AM 1 point [-]

I don't think he's saying that motives are morally irrelevant - I think he's saying that they are irrelevant to the point he is trying to make with that blog post.

Comment author: Epiphany 03 September 2012 08:18:00PM *  -1 points [-]

Imagine you have 100 instances where you do a bunch of research, with the intention of having an unbiased view of the situation. Then you tell somebody about the result and they don't agree. But they don't support their points well. So you share the information you found and point out that their points were unsupported. They fail to produce any new information or points that actually add to the conversation. You may not have been trying to win, but if they're unable to support their points or supply new information and yet believe themselves to be right, when you destroy that illusion, the feeling of "oh I guess I was right" is a natural result.

Imagine that during the same period of time, this happens to you zero times. Nobody finds a logical fallacy or poorly supported point. This is not because you are perfect - you aren't. It is probably due to hanging out with the wrong people - people who are not dedicated to reasoning well. Knowing I am not perfect is not reducing the cockiness that is starting to result from this, for me. It is making me nervous instead - this knowledge that I am not perfect has become a vague intellectual acknowledgement, not a genuine sense of awareness. The sense that I have flawed ideas and could be wrong at any time no longer feels real.

Now that I am in a much bigger pond, I am hoping to experience a really good ass kicking. I want to wake up from this dream of feeling like I'm right all the time.

The reason I want to lose is because I agree with you that I shouldn't see these debates as things for me to win. I am tired of the experience of being right. I am tired of the nervousness of knowing I am imperfect, that there are flaws I'm unaware of, but not having the sense that somebody will point them out.

I just want to experience being wrong sometimes.

Comment author: KPier 03 September 2012 10:35:00PM 3 points [-]

I just want to experience being wrong sometimes.

Your comments are consistent with wanting to be proved wrong. No one experiences "being wrong" - from the inside, it feels exactly like "being right". We do experience "realizing we were wrong", which is hopefully followed by updating so that we once again believe ourselves to be right. Have you never changed your mind about something? Realized on your own that you were mistaken? Because you don't need to "lose" or to have other people "beat you" to experience that.

And if you go around challenging other people about miscellaneous points in the hopes that they will prove you wrong, this will annoy the other people and is unlikely to give you the experience you hoped for.

I also think that your definition of "being wrong" might be skewed. If you try to make comments which you think will be well-received, then every comment that has been heavily downvoted is an instance in which you were wrong about the community reaction. You apparently thought most people were concerned about an Eternal September; you've already realized that this belief was wrong. I'm not sure why being wrong about these does not have the same impact on you as being wrong about the relative fighting skills of programmers and fruit-pickers, but it probably should have a bigger impact, since it's a more important question.

Comment author: Epiphany 03 September 2012 07:58:05PM *  0 points [-]

Aww. You didn't nail me.

I did some research to see whether this might be right, here it is:

"America has become a nation of spectators. The latest statistics from the National Centers for Disease Control and Prevention (CDC) tell the tale: 29% of adults are entirely sedentary and another 46% don’t get enough physical activity. That means only a quarter of all Americans get the exercise they need.

The real situation may be even worse. Most people who say they exercise report walking as their only regular physical activity, but when researchers from the CDC evaluated more than 1,500 people who said they were walkers, they found that only 6% walked often enough, far enough, or briskly enough to meet the current standards for health. Even people who report intense activity often overstate their efforts. Scientists from the University of Florida asked people to keep a log of their physical activities for a full week while they were hooked up to ambulatory heart monitors. Some 47% of the subjects reported that they had engaged in moderate activity, but only 15% actually boosted their heart rates enough to sustain moderate activity. The gap was just as great for more intense exercise: 11% reported hard activity, but only 1.5% boosted their heart rates to that level. Nobody achieved a heart rate consistent with very hard activity, though 1.5% made that claim.

“Spectator” is a kind word for it; in fact, we are a nation of couch potatoes."

Harvard Men's Health Watch, May 2004 issue

It looks like I won here, but I thought of some reasons why I may still have lost:

  1. Females can be as big as males, and I'm sure that some have muscle-building bonuses comparable to the average male, but from what I've read and observed, males are more likely to have these benefits than females. Females can have the aggressive tendencies associated with testosterone, but do not get them as frequently as males do. Females can be nerds, but most nerds are male. Fruit pickers may have a higher percentage of females than nerds do. Therefore the fruit pickers might be at a disadvantage in unarmed combat. (Though adding guns would change that completely.)

  2. Nerds may exercise more than the average person in order to compensate for the stereotype that nerds are weak. I didn't see any research specific to how much exercise nerds do or what type they use, but it is possible that this group is more fit than average.

  3. Having a nerdy personality may make them more likely to research the best way of exercising, and measure their progress, making exercise more effective for them.

Do you see more factors that we haven't taken into account?

Comment author: KPier 03 September 2012 08:04:28PM 4 points [-]

It looks like I won here, but I thought of some reasons why I may still have lost:

You should stop thinking about discussions in these terms.

Comment author: shminux 31 August 2012 05:55:40PM *  7 points [-]

Warning: a rant follows!

The general incompetence of the replies to the OP is appalling. Fantastically complicated solutions with many potential harmful side effects are offered and defended. My estimate of the general intelligence of the subset of LWers who replied to this post has gone way down. This reminds me of the many pieces of software I had a misfortune to browse the source code of: half-assed solutions patched over and over to provide some semblance of the desired functionality without breaking into pieces.

For comparison, I have noted a trivial low-risk one-line patch that would fix a potential exploit in the recent (and also easy to implement) anti-troll feature: paying with 5 karma to reply to comments downvoted to -3 or lower (patch: only if the author has negative 30-day karma). Can you do cheaper and better? If not, why bother suggesting something else?

After a long time in the software business, one of the lessons I have learned (thanks, Steve McConnell) is that every new feature can be implemented cheaply or expensively, with very little effect on its utility. Unfortunately, I have not heard of any university teaching design simplification beyond using some Boolean algebra (and even that trivial bit is largely ignored by the programmers, who tend to insert a bunch of ad hoc nested if statements rather than think through the possible outcomes and construct and minimize the relevant CNF or DNF). There is also no emphasis on complexity leading to fragility, and how spending 5 minutes thinking through solutions can save months and years of effort during the maintenance stage.

To sum up: every (software) solution can be simplified without perceptible loss of functionality. Simpler solutions are more reliable and easier to maintain. One ought to spend time upfront attempting such simplifications. Pretend that you are being charged per line of (pseudo)code, per use case to test (10x more) and per bug fixed (10x more still), then see if you can save some money before rushing to design and code (or, in this particular case, before posting it here for others to see).
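shminux's CNF/DNF point can be illustrated with his own anti-troll example. This is a hypothetical sketch (the function names and the rule itself are taken from his proposed patch above, not from any actual LW codebase): the same truth table written first as ad hoc nested ifs, then as one minimized boolean expression:

```python
# Hypothetical rule from shminux's proposed patch: replying costs 5 karma
# iff the parent comment is at -3 or lower AND the author's 30-day karma
# is negative.

# Ad hoc nested-if version, the style the comment criticizes:
def must_pay_karma_nested(parent_score, author_karma_30d):
    if parent_score <= -3:
        if author_karma_30d < 0:
            return True
        else:
            return False
    else:
        return False

# Same truth table after thinking through the outcomes: a single
# minimized expression, shorter and with nothing to get out of sync.
def must_pay_karma(parent_score, author_karma_30d):
    return parent_score <= -3 and author_karma_30d < 0
```

Both functions are equivalent, but the one-line version is the kind of "cheap" implementation the comment argues for: fewer branches to test, fewer places for a maintenance patch to introduce a bug.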

Comment author: KPier 02 September 2012 07:03:25AM 6 points [-]

My estimate of the general intelligence of the subset of LWers who replied to this post has gone way down.

It seems like it's your estimate of the programming knowledge of the commenters that should go down. Most of the proposed solutions have in common that they sound really simple to implement, but would in fact be complicated - which someone with high general intelligence and rationality, but limited domain-specific knowledge, might not know.

Should people who can't program refrain from suggesting programming fixes? Maybe. But maybe it's worth the time to reply to some of the highly-rated suggestions and explain why they're much harder than they look.

(I agree with your proposed solution to attempt simplifications.)

Comment author: gjm 01 September 2012 05:31:13PM 0 points [-]

Would any of the (at least four) people who have upvoted Eliezer's comment but not my response -- or Eliezer, if he happens still to be reading -- like to explain to me in what way Eliezer is right and I'm wrong here? Thanks!

Comment author: KPier 01 September 2012 05:43:04PM 2 points [-]

Generally speaking, there are fewer upvotes later in a thread, since fewer people read that far. If the children of your comment have more karma than your comment, it's reasonable to assume that people saw both comments and chose to upvote theirs, but if a parent of your comment has more karma, you can't really draw any inference from that at all.
