Less Wrong is a community blog devoted to refining the art of human rationality. Please visit our About page for more information.

Comment author: The_Jaded_One 16 January 2017 04:49:03PM *  4 points [-]

Maybe you're just not rational enough to be shown that content? I see like 10 posts there.

MIRI has invented a proprietary algorithm that uses the third derivative of your mouse cursor position and click speed to predict your calibration curve, IQ and whether you would one-box on Newcomb's problem with a correlation of 95%. LW mods have recently combined those into an overall rationality quotient which the site uses to decide what level of secret rationality knowledge you are permitted to see.

Maybe you should do some debiasing, practice being well-calibrated, read the sequences and try again later?

EDIT: Some people seem to be missing that this is intended as humor ............

Comment author: Manfred 16 January 2017 06:09:25PM 1 point [-]

It's a shame downvoting is temporarily disabled.

Comment author: Manfred 13 January 2017 10:02:29PM *  3 points [-]

Found some other interesting blog posts by him: 1 2.

Comment author: username2 13 January 2017 04:15:44PM 0 points [-]

This thread doesn't seem to fit that pattern. The only annoying content is related to moderation.

Comment author: Manfred 13 January 2017 06:41:47PM *  3 points [-]

This thread doesn't fit that pattern largely because LW users are aware of the problems with talking about politics, and respond by staying on the meta-level. There is, in fact, not a single argument for or against Brexit in this thread, which I think is a shining advertisement for LW comment culture. On the other hand, the article itself is also particularly well-suited to not immediately inspiring object-level argument, at least as long as it isn't posted on /r/news or similar.

Comment author: gjm 09 January 2017 02:30:29PM *  2 points [-]

Zvavzvmvat |fva(a)| vf rdhvinyrag gb zvavzvmvat |a-s(a)| jurer s(a) vf gur arnerfg zhygvcyr bs cv gb a; rdhvinyragyl, gb zvavzvmvat |a-z.cv| jurer a,z ner vagrtref naq 1<=a<=10^100; rdhvinyragyl, gb zvavzvmvat z|a/z-cv| jvgu gur fnzr pbafgenvag ba a. (Juvpu vf boivbhfyl zber be yrff rdhvinyrag gb fbzrguvat yvxr z<=10^100/cv+1.)

Gurer'f n fgnaqneq nytbevguz sbe guvf, juvpu lbh pna svaq qrfpevorq r.t. urer. V guvax gur erfhyg unf gur sbyybjvat qvtvgf:

bar fvk frira mreb svir gjb frira svir avar fvk guerr svir bar svir fvk svir bar svir fvk frira svir sbhe svir avar rvtug svir avar fvk bar bar mreb frira sbhe svir fvk fvk sbhe avar svir svir svir mreb frira guerr fvk gjb guerr bar guerr avar guerr rvtug bar rvtug rvtug rvtug svir avar mreb gjb frira bar mreb fvk gjb avar guerr frira rvtug sbhe svir gjb mreb avar svir gjb gjb avar svir mreb frira gjb sbhe mreb mreb rvtug gjb frira fvk bar frira svir fvk avar sbhe sbhe sbhe mreb fvk guerr
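(Mild spoiler for the puzzle above.) Assuming the ROT13'd comment describes the standard best-rational-approximation approach, here is a minimal Python sketch of it: the convergents p/q of the continued fraction of pi satisfy |sin(p)| ≈ |p − qπ|, and convergents minimize |qπ − p|, so the answer under a bound is the largest convergent numerator below it. Double-precision pi only supports small bounds, so this toy version uses 10^6 rather than 10^100 (the real problem needs pi to well over 100 digits, e.g. via mpmath); the bound and the function names are my own illustration.

```python
import math

def cf_terms(x, n):
    """First n continued-fraction terms of x (limited by float precision)."""
    terms = []
    for _ in range(n):
        a = int(x)
        terms.append(a)
        frac = x - a
        if frac < 1e-12:
            break
        x = 1.0 / frac
    return terms

def best_sin_minimizer(limit):
    """Largest convergent numerator p <= limit for p/q ~ pi.

    Since |sin(p)| ~ |p - q*pi| and convergents minimize |q*pi - p|,
    this p minimizes |sin(n)| over 1 <= n <= limit."""
    terms = cf_terms(math.pi, 12)          # [3, 7, 15, 1, 292, 1, 1, 1, 2, ...]
    p_prev, q_prev = 1, 0
    p, q = terms[0], 1
    best = p
    for a in terms[1:]:
        p, p_prev = a * p + p_prev, p
        q, q_prev = a * q + q_prev, q
        if p > limit:
            break
        best = p
    return best

print(best_sin_minimizer(10**2))   # 22  (22 ~ 7*pi, the classic approximation)
print(best_sin_minimizer(10**6))   # 833719
```

Checking the winner: abs(math.sin(833719)) comes out on the order of 1e-6, smaller than for any other positive integer below 10^6.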

Comment author: Manfred 11 January 2017 05:38:57AM *  0 points [-]

I wonder if there's a simple worst-case proof that shows how complicated you need to let the seeds get in order to find the actual optimum. For example, if we look for the best integer under 10^85 rather than under 10^100, the seed that leads to this algorithm outputting the optimum is different, or at least the overlap seems small. But I'm having a hard time proving anything about this algorithm, because although small seed numerators could add up to almost anything, in practice they won't.

Comment author: Thomas 10 January 2017 08:09:10AM *  0 points [-]

Say its (decimal) name. Say it!

Comment author: Manfred 10 January 2017 07:58:43PM *  0 points [-]

Comment author: Thomas 09 January 2017 10:39:17AM 2 points [-]

Comment author: Manfred 09 January 2017 11:34:09PM 0 points [-]

guerr fvkgl!

fbeel.

Comment author: morganism 09 January 2017 10:04:24PM 1 point [-]

I know we had some discussion of "real names" here a few weeks ago; here is an overview of the recent, relevant study on that by the Coral Project.

"People often say that online behavior would improve if every comment system forced people to use their real names. It sounds like it should be true – surely nobody would say mean things if they faced consequences for their actions?

Yet the balance of experimental evidence over the past thirty years suggests that this is not the case. Not only would removing anonymity fail to consistently improve online community behavior – forcing real names in online communities could also increase discrimination and worsen harassment."

"Conflict, harassment, and discrimination are social and cultural problems, not just online community problems. In societies including the US where violence and mistreatment of women, people of color, and marginalized people is common, we can expect similar problems in people’s digital interactions [1]. Lab and field experiments continue to show the role that social norms play in shaping individual behavior; if the norms favor harassment and conflict, people will be more likely to follow. While most research and design focuses on changing the behavior of individuals, we may achieve better results by focusing on changing climates of conflict and prejudice"

https://blog.coralproject.net/the-real-name-fallacy/

Comment author: Manfred 09 January 2017 11:21:24PM 0 points [-]

It feels like, by dragging in data from outside the scope of forums and comment sections ("half of people harassed via the internet knew their attacker," etc.), this article has made itself useless to precisely those forums and comment sections.

Comment author: bogus 09 January 2017 06:55:51PM 1 point [-]

"I have kept some chickens. They do not have a rich internal life."

Erm, some people would disagree with that.

Comment author: Manfred 09 January 2017 11:01:01PM *  0 points [-]

Nice review article!

I think one can accept all the direct factual claims of the paper while arriving at a fairly different interpretation. I certainly agree with the author that chickens can watch someone put food somewhere out of sight, and then act on a model of the world in which a reward awaits them at the place the food went. They can also be trained to do a few tasks via simple reinforcement learning, and they have pretty good sensory processing. This is cognitive behavior far more impressive than that of a bonsai tree, which has only rudimentary reinforcement learning, can be trained only on much longer timescales, and has no concept of object permanence at all. I just don't think it's very likely that chickens have introspection, deep understanding of their environment, the ability to make novel multi-step plans, or even particularly good episodic memory.

Comment author: Manfred 09 January 2017 06:23:05PM 2 points [-]

This is odd. I have kept some chickens. They do not have a rich internal life. Perhaps if you actually raised a chicken, you would not in fact personify it, but instead would see how simple its programming is, and not think it worthy of moral consideration.

After all, if stipulated willingness to get attached to a hypothetical pet is your guide to moral consideration, perhaps if I suggest getting you a bonsai tree you'll no longer use paper.

I mean, I agree that vegetarianism is, and should be, a largely aesthetic choice given our current options, if only at the meta-level of attempting to follow some kind of psychologically-unrealistic utilitarianism for aesthetic reasons. I just think your aesthetics are weird :P

Comment author: Manfred 05 January 2017 01:39:13AM 4 points [-]

"Proper scoring rule" just means that you attain the best score by giving the most accurate probabilities you can. In that sense, any concave proper scoring rule will give you a good feedback mechanism. The reason people like log scoring rule is because it corresponds to information (the kind you can measure in bits and bytes), and so a given amount of score increase has some meaning in terms of you using your information better.

The information measured by your log score is identical to Shannon's notion of the information carried by digital signals. When a binary event is completely unknown to you, you gain 1 bit of information by learning its outcome. For events that you can already predict to high accuracy, the entropy of the event (according to your distribution) is lower, and you gain less information from learning the result. In fact, the expected score goes to zero as the event becomes more and more predictable (though you're still incentivized to answer correctly).
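Both properties are easy to check numerically: honest reporting maximizes expected log score, and the honest forecaster's expected score is minus the entropy of the event, shrinking toward zero as the event gets predictable. A minimal sketch (Python; the function names are my own, scores are in bits, 0 is the best possible):

```python
import math

def log_score(p_assigned):
    """Log score in bits for the outcome that occurred: 0 at certainty, -1 at 50%."""
    return math.log2(p_assigned)

def expected_score(true_p, reported_p):
    """Expected log score on a binary event with true probability true_p
    when you report reported_p."""
    return (true_p * log_score(reported_p)
            + (1 - true_p) * log_score(1 - reported_p))

# Propriety: reporting the true probability maximizes expected score.
honest = expected_score(0.7, 0.7)
assert all(expected_score(0.7, r) < honest for r in (0.1, 0.3, 0.5, 0.9, 0.99))

# Honest expected score is minus the entropy of the event:
print(expected_score(0.5, 0.5))     # -1.0 (a coin flip costs one full bit)
print(expected_score(0.99, 0.99))   # roughly -0.08 (predictable events approach 0)
```

Swapping math.log2 for math.log just changes the unit from bits to nats; the propriety argument is unchanged.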

But I think this leaves out something interesting that I don't have a good answer for, which is that this straightforward interpretation only works when you, the human, don't screw up. When you do screw up, I'm not sure there's a clear interpretation of the score.
