
Comment author: knb 22 July 2014 06:59:26PM 0 points [-]

Or maybe Europe finally learned the lessons of 1914 (i.e., not to start an apocalyptic war over relatively trivial matters).

Comment author: V_V 23 July 2014 09:35:13AM 0 points [-]

Europe could either side with Ukraine and boycott Russian natural gas, at a huge cost, or side with Russia and force Ukraine into submission through political and economic isolation, effectively rewarding Russia's expansionist attitudes.
Looks like a catch-22 scenario.

Or Europe could just do nothing, except maybe avoid flying its planes over the war zone, which is pretty much what is actually happening now.

It doesn't look like there is an easy solution to this problem.
After all, if politics were easy it wouldn't be politics.

In response to comment by V_V on Too good to be true
Comment author: Douglas_Knight 14 July 2014 02:32:00PM 2 points [-]

Quoting authorities without further commentary is a dick thing to do. I am going to spend more words speculating about the intention of the quote than are in the quote, let alone than you bothered to type.

I have no idea what you think is relevant about that passage. It says exactly what I said, except transformed from the effect size scale to the p-value scale. But somehow I doubt that's why you posted it. The most common problem in the comments on this thread is that people confuse the false positive rate with the false negative rate, so my best guess is that you are making that mistake and thinking the passage supports that error (though I have no idea why you're telling me). Another possibility, slightly more relevant to this subthread, is that you're pointing out that some people use other significance thresholds. But in medicine, they don't. They almost always use a 95% confidence level, though sometimes 90%.
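
To make the distinction concrete, here is a minimal simulation sketch (assuming Python with numpy and scipy; the effect size of 0.5, the sample size of 50, and the trial count are arbitrary illustrations, not anything from the thread). The false positive rate is how often the test rejects when the null is true; the false negative rate is how often it fails to reject when the effect is real:

    # Sketch: false positive rate vs. false negative rate of a two-sample t-test.
    # All parameters (effect size, sample size, trial count) are arbitrary choices.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    ALPHA = 0.05              # significance level, fixed before seeing any data
    N_TRIALS, N = 10_000, 50  # simulated experiments, samples per group

    def rejection_rate(true_effect):
        """Fraction of simulated experiments in which the test rejects at ALPHA."""
        rejections = 0
        for _ in range(N_TRIALS):
            a = rng.normal(0.0, 1.0, N)
            b = rng.normal(true_effect, 1.0, N)
            if stats.ttest_ind(a, b).pvalue <= ALPHA:
                rejections += 1
        return rejections / N_TRIALS

    # Null true: any rejection is a false positive (Type I error); rate ~ ALPHA.
    print("false positive rate:", rejection_rate(0.0))
    # Real effect: any non-rejection is a false negative (Type II error).
    print("false negative rate:", 1.0 - rejection_rate(0.5))

The two rates are controlled by different things: the first by the chosen significance level alone, the second by the effect size and the sample size (i.e., by the power of the study).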

Comment author: V_V 20 July 2014 03:37:02PM 0 points [-]

My confusion is about "at least" vs. "exactly". See my answer to Cyan.

In response to comment by V_V on Too good to be true
Comment author: Cyan 14 July 2014 02:23:29PM 1 point [-]

You want size, not p-value. The difference is that size is a "pre-data" (or "design") quantity, while the p-value is post-data, i.e., data-dependent.
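
Spelled out in symbols, these are just the textbook definitions (a sketch for a generic test statistic T whose observed value on data x is t_obs(x)):

    \begin{align*}
    \text{size (pre-data):}\quad & \alpha = P(\text{reject } H_0 \mid H_0 \text{ true}) \\
    \text{p-value (post-data):}\quad & p(x) = P\bigl(T \ge t_{\text{obs}}(x) \mid H_0\bigr) \\
    \text{decision rule:}\quad & \text{reject } H_0 \iff p(x) \le \alpha
    \end{align*}

The size is fixed when the test is designed, before any data exist; the p-value can only be computed after the data arrive.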

In response to comment by Cyan on Too good to be true
Comment author: V_V 20 July 2014 03:36:19PM 1 point [-]

Thanks.

So if I set the size at 5%, collect the data, run the test, and repeat the whole experiment with fresh data multiple times, should I expect that, if the null hypothesis is true, the test rejects exactly 5% of the time, or at most 5% of the time?
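
One way to answer this empirically is a quick simulation (a sketch assuming Python with numpy and a recent scipy; the particular tests chosen, a one-sample t-test and a 20-flip exact binomial test, are arbitrary illustrations). For a test statistic with a continuous null distribution the rejection rate under the null is exactly 5%; for a discrete statistic it is at most 5%, because the attainable p-values jump in steps:

    # Sketch: under a true null, a continuous-statistic test rejects exactly
    # ALPHA of the time; a discrete test (exact binomial) rejects at most ALPHA.
    import numpy as np
    from scipy import stats  # stats.binomtest needs scipy >= 1.7

    rng = np.random.default_rng(1)
    ALPHA, N_TRIALS = 0.05, 20_000

    # Continuous case: one-sample t-test on N(0, 1) data (the null is true).
    cont = sum(stats.ttest_1samp(rng.normal(0.0, 1.0, 30), 0.0).pvalue <= ALPHA
               for _ in range(N_TRIALS)) / N_TRIALS

    # Discrete case: exact two-sided binomial test on 20 fair coin flips.
    disc = sum(stats.binomtest(int(rng.binomial(20, 0.5)), 20, 0.5).pvalue <= ALPHA
               for _ in range(N_TRIALS)) / N_TRIALS

    print(f"continuous test rejection rate: {cont:.3f}")  # close to 0.050
    print(f"discrete test rejection rate:   {disc:.3f}")  # noticeably below 0.05

So "exactly 5%" is right for tests whose statistic has a continuous null distribution, and "at most 5%" is the safe general answer once discrete statistics or conservative corrections enter the picture.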

Comment author: David_Gerard 18 July 2014 10:46:08PM 4 points [-]

I got email from basilisk victims, as noted elsewhere in this thread (this is why I created the RW article, 'cos individual email doesn't scale).

Comment author: V_V 19 July 2014 10:43:35AM 2 points [-]

Point taken.

Comment author: Emile 18 July 2014 07:30:14AM 14 points [-]

Out of curiosity, what is the fraction of LW posters that believes this is a genuine risk?

Vanishingly small - the post was deleted by Eliezer (was that what, a year ago? two?) because it gave some people he knew nightmares, but I don't remember anybody actually complaining about it. Most of the ensuing drama was about whether Eliezer was right in deleting it. The whole thing has been a waste of everybody's time and attention (as community drama over moderation almost always is).

Comment author: V_V 18 July 2014 09:32:05PM *  -1 points [-]

Vanishingly small - the post was deleted by Eliezer (was that what, a year ago? two?) because it gave some people he knew nightmares

Given that nobody else ever complained, AFAIK, it seems that he was the only person troubled by that post.

EDIT: not.

Comment author: Kaj_Sotala 16 July 2014 08:32:17AM *  5 points [-]

In recent years, under the direction of Luke Muehlhauser, with researchers such as Paul Christiano and the other younger guns, they may have got better, but I'm still waiting to see any technical result of theirs being published in a peer-reviewed journal or conference.

http://intelligence.org/2014/05/17/new-paper-program-equilibrium-prisoners-dilemma-via-lobs-theorem/ :

We’ve released a new paper recently accepted to the MIPC workshop at AAAI-14: “Program Equilibrium in the Prisoner’s Dilemma via Löb’s Theorem” by LaVictoire et al.

http://intelligence.org/2014/05/06/new-paper-problems-of-self-reference-in-self-improving-space-time-embedded-intelligence/ :

We’ve released a new working paper by Benja Fallenstein and Nate Soares, “Problems of self-reference in self-improving space-time embedded intelligence.” [...]

Update 05/14/14: This paper has been accepted to AGI-14.

Comment author: V_V 16 July 2014 08:48:53AM 4 points [-]

Didn't know about that. Thanks for the update.

Comment author: paper-machine 14 July 2014 12:31:48PM 6 points [-]

We only really agree on the first point. I'm skeptical of CFAR and the ritual crew but don't find these supposed comparisons to be particularly apt.

I've watched MIRI improve their research program dramatically over the past four years, and expect it to keep improving. Yes, obviously they had some growing pains in learning how to publish, but everyone who tries to do publishable work goes through that phase (myself included).

I'm not on board with the fifth point:

cryonics (you are signed up, so you probably don't agree)

Well, 27.5% have a favorable opinion. The prior for it actually working seems optimistic but not overly so ("P(Cryonics): 22.8 + 28 (2, 10, 33) [n = 1500]"). At the least I'd say it's a controversial topic here, for all the usual reasons. (No, I'm not signed up for cryonics. No, I don't think it's very likely to work.)

paleo diets/ketogenic diets

Most of the comments on What is the evidence in favor of paleo? are skeptical. The comment with highest karma is very skeptical. Lukeprog said he's skeptical and EY said it didn't work for him.

armchair evopsych

Not really sure what you're referring to.

Surprised you didn't bring up MWI; that's the usual hobby horse for this kind of criticism.

Comment author: V_V 14 July 2014 01:58:04PM -1 points [-]

We only really agree on the first point. I'm skeptical of CFAR and the ritual crew but don't find these supposed comparisons to be particularly apt.

Ok.

I've watched MIRI improve their research program dramatically over the past four years, and expect it to keep improving.

I agree that it improved dramatically, but only because the starting point was so low.
In recent years they released some very technical results. I think that some are probably wrong or trivial while others are probably correct and interesting, but I don't have the expertise to properly evaluate them, and this probably applies to most other people as well, which is why I think MIRI should seek peer review by independent experts.

Well, 27.5% have a favorable opinion. The prior for it actually working seems optimistic but not overly so ("P(Cryonics): 22.8 + 28 (2, 10, 33) [n = 1500]"). At the least I'd say it's a controversial topic here, for all the usual reasons. (No, I'm not signed up for cryonics. No, I don't think it's very likely to work.)

As I said, these beliefs aren't necessarily held by a majority of lesswrongers, but are unusually common.

Surprised you didn't bring up MWI; that's the usual hobby horse for this kind of criticism.

MWI isn't pseudo-scientific per se. However, the claim that MWI is obviously true and whoever thinks otherwise must be ignorant or irrational is.

Comment author: ChristianKl 14 July 2014 01:09:27PM 1 point [-]

It seems to me that as long as something is dressed in sufficiently "sciency" language and endorsed by high-status members of the community, a sizable number (though not necessarily a majority) of lesswrongers will buy into it.

What exactly do you mean by buying into it? I think there are places on the internet with a lot more armchair evopsych than LW.

Rituals: Deliberately modelled after religious rituals, including "public confession" sessions

Could you provide a link? I'm not aware of that ritual in LW if you mean something more than encouraging people to admit when they are wrong.

Comment author: V_V 14 July 2014 01:41:51PM 1 point [-]

What exactly do you mean by buying into it? I think there are places on the internet with a lot more armchair evopsych than LW.

Sure, but I'd expect that a community devoted to "refining the art of human rationality" would be more skeptical of that type of claim.

Anyway, I'm not saying that LessWrong is a terribly diseased community. If I thought it was, I wouldn't be hanging around here. I was just expressing my concerns about some aspects of the local culture.

Could you provide a link? I'm not aware of that ritual in LW if you mean something more than encouraging people to admit when they are wrong.

https://www.google.com/search?q=less+wrong+ritual&ie=utf-8&oe=utf-8#channel=fs&q=ritual+report+site:lesswrong.com

http://lesswrong.com/lw/9aw/designing_ritual/

And in particular the "Schelling Day", which bothers me the most: http://lesswrong.com/lw/h2t/schelling_day_a_rationalist_holiday/

In response to comment by V_V on Too good to be true
Comment author: Douglas_Knight 12 July 2014 09:21:48PM 3 points [-]

No, we are choosing the effect size before we do the study. We choose it so that if the true effect is zero, we will have a false positive exactly 5% of the time.

Comment author: V_V 14 July 2014 10:46:51AM 1 point [-]

According to Wikipedia:

In statistical significance testing, the p-value is the probability of obtaining a test statistic result at least as extreme as the one that was actually observed, assuming that the null hypothesis is true.[1][2] A researcher will often "reject the null hypothesis" when the p-value turns out to be less than a predetermined significance level, often 0.05[3][4] or 0.01.
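
The "at least as extreme" in that definition is compatible with the "exactly 5%" claim above. For a statistic with a continuous null distribution, the p-value is itself uniformly distributed under the null; sketching the standard argument, with F the continuous CDF of the statistic T under H_0:

    \begin{align*}
    p(X) &= P\bigl(T \ge T(X) \mid H_0\bigr) = 1 - F\bigl(T(X)\bigr) \\
    H_0 \text{ true} &\implies p(X) \sim \mathrm{Uniform}(0, 1)
        \quad (\text{since } F \text{ is continuous}) \\
    &\implies P\bigl(p(X) \le \alpha \mid H_0\bigr) = \alpha \quad \text{exactly.}
    \end{align*}

For a discrete statistic the attainable p-values form a finite set, so P(p(X) <= alpha | H_0) <= alpha and the test is merely conservative, which is where "at most" rather than "exactly" comes in.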

Comment author: V_V 14 July 2014 10:11:13AM *  2 points [-]

I read LessWrong primarily for entertainment value, but I share your concerns about some aspects of the surrounding culture, although in fairness it seems to have got better in recent years (at least as far as is apparent from the online forum; I don't know about live events).
Specifically, my points of concern are:

  • The "rationalist" identity: It creates the illusion that by identifying as a "rationalist" and displaying the correct tribal insignia you are automatically more rational, or at least "less wrong" than the outsiders.

  • Rituals: Deliberately modelled after religious rituals, including "public confession" sessions, AFAIK similar to those performed by cults like the Church of Synanon.

  • MIRI: I agree with you that they probably exaggerate the AI risk, and I doubt they have the competence to do much about it anyway. For its first ten or so years, when manned primarily by Eliezer Yudkowsky, Anna Salamon, etc., the organization produced effectively zero valuable research output. In recent years, under the direction of Luke Muehlhauser, with researchers such as Paul Christiano and the other younger guns, they may have got better, but I'm still waiting to see any technical result of theirs being published in a peer-reviewed journal or conference.

  • CFAR: a self-help/personal-development program. Questionable, like all the self-help/personal-development programs in existence. If I understand correctly, CFAR is modelled after, or at least is similar to, Landmark, a controversial organization.

  • Pseudo-scientific beliefs and practices: cryonics (you are signed up, so you probably don't agree), paleo diets/ketogenic diets, armchair evopsych, and so on. It seems to me that as long as something is dressed in sufficiently "sciency" language and endorsed by high-status members of the community, a sizable number (though not necessarily a majority) of lesswrongers will buy into it. Yes, this kind of effect happens in all groups, but from a group of people with an average IQ of 140 who pride themselves on pursuing rationality I would have expected better.
