Less Wrong is a community blog devoted to refining the art of human rationality. Please visit our About page for more information.

Comment author: Cyan 10 November 2014 06:55:34PM 0 points [-]

Embarrassingly, I didn't have the "who feeds Paris" realization until last year -- well after I thought I had achieved a correct understanding of and appreciation for basic microeconomic thought.

Comment author: FullMeta_Rationalist 02 November 2014 01:12:24AM *  26 points [-]

FINISHED. ALL OF IT. \m/ Literally superhuman.

TIL I'm undifferentiated according to the BSRI... huh.

Karma for all, per tradition. <3

- a long time lurker

P.S. You can trashcan the premature submission that answers Part 8's first question with 23200. While revising my predicted date of the singularity, I brushed my keypad's enter (next to the 3) by mistake. ಠ_ಠ

Comment author: Cyan 02 November 2014 02:36:43AM 8 points [-]

Nice choice of username. :-)

Comment author: XFrequentist 30 October 2014 04:15:54AM 1 point [-]

Ooh ooh, do mine!

Comment author: Cyan 30 October 2014 12:36:35PM *  1 point [-]

Same special-snowflake level credible limits, but for different reasons. Swimmer963 has an innate drive to seek out and destroy (whatever she judges to be) her personal inadequacies. She wasn't very strategic about it in teenager-hood, but now she has the tools to wield it like a scalpel in the hands of a skilled surgeon. Since she seems to have decided that a standard NPC job is not for her, I predict she'll become a PC shortly.

You're already a PC; your strengths are a refusal to tolerate mediocrity in the long-term (or let us say, in the "indefinite" term, in multiple senses) and your vision for controlling and eradicating disease.

Comment author: Cyan 30 October 2014 01:57:25AM 1 point [-]

FWIW, in my estimation your special-snowflake-nature is somewhere between "more than slightly, less than somewhat" and "potential world-beater". Those are wide limits, but they exclude zero.

Comment author: CronoDAS 28 October 2014 10:08:29AM 2 points [-]

I don't want to embarrass my girlfriend.

Comment author: Cyan 29 October 2014 06:25:12AM *  2 points [-]

Hikikomori no more? If so (as seems likely what with the girlfriend and all), it gladdens me to hear it.

Comment author: Cyan 22 October 2014 03:50:15AM 4 points [-]

In the biz we call this selection bias. The most fun example of it is the tale of Abraham Wald and the Surviving Bombers.
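Wald's insight lends itself to a toy simulation (all section names and probabilities below are made up purely for illustration): hits on critical sections rarely come home, so counting bullet holes only on returning planes makes the critical sections look safest.

```python
import random

random.seed(0)

SECTIONS = ["engine", "cockpit", "fuselage", "wings"]
# Hypothetical per-hit loss probabilities: engine hits are the most lethal.
LOSS_PROB = {"engine": 0.8, "cockpit": 0.6, "fuselage": 0.1, "wings": 0.1}

def fly_mission():
    """One plane takes one hit; return (survived, hit_section)."""
    section = random.choice(SECTIONS)
    survived = random.random() > LOSS_PROB[section]
    return survived, section

observed = {s: 0 for s in SECTIONS}  # holes counted on returning planes only
actual = {s: 0 for s in SECTIONS}    # holes across all planes, lost or not

for _ in range(100_000):
    survived, section = fly_mission()
    actual[section] += 1
    if survived:
        observed[section] += 1

# Hits are spread evenly in reality, but among survivors the engine holes
# are rare -- not because engines are seldom hit, but because engine hits
# seldom return. Wald's advice: armor where the surviving planes have
# the *fewest* holes.
print("actual hits:  ", actual)
print("observed hits:", observed)
```

Naively armoring where the surviving planes show the most holes gets it exactly backwards; the missing holes belong to the missing planes.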

Comment author: Cyan 19 October 2014 01:39:26AM 2 points [-]

I was working in protein structure prediction.

I confess to being a bit envious of this. My academic path after undergrad biochemistry took me elsewhere, alas.

In response to The Level Above Mine
Comment author: AshwinV 06 October 2014 03:41:44AM 0 points [-]

Can anyone tell me whether Jaynes' book can be read and understood without any particular formal training? I do know the basic concepts of probability, and I usually score around the 85th percentile on math tests... And how hard/time-consuming exactly will the book be? I am employed in a somewhat high pressure job on a full time basis...

Comment author: Cyan 06 October 2014 04:33:02AM *  6 points [-]

Try it -- the first three chapters are available online here. The first one is discursive and easy; the math of the second chapter is among the most difficult in the book and can be safely skimmed; if you can follow the third chapter (which is the first to present extensive probability calculations per se) and you understand probability densities for continuous random variables, then you'll be able to understand the rest of the book without formal training.

Comment author: EHeller 03 October 2014 05:11:55AM 3 points [-]

I don't know any LW-ers in person, but I'm sure that at least some people have benefited from reading the sequences.

You miss my meaning. The stated core goal of MIRI/the old SIAI is to develop friendly AI. With regards to that goal, the sequences are advertising.

With regards to their core goal, the sequences matter if (1) they lead to people donating to MIRI, or (2) they lead to people working on friendly AI.

I view point 1 as advertising, and I think research papers are obviously better than the sequences for point 2.

Comment author: Cyan 03 October 2014 05:53:31AM *  2 points [-]

The stated core goal of MIRI/the old SIAI is to develop friendly AI. With regards to that goal, the sequences are advertising.

Kinda... more specifically, a big part of what they are is an attempt at insurance against the possibility that there exists someone out there (probably young) with more innate potential for FAI research than EY himself possesses but who never finds out about FAI research at all.

Comment author: ChristianKl 02 October 2014 03:41:02PM 1 point [-]

I would say that I don't do that, but then I'd pretty obviously be allowing the way I desire the world to be to influence my assessment of the actual state of the world.

How do you know?

Especially since falsely holding that belief would be an example.

Comment author: Cyan 02 October 2014 03:52:48PM *  1 point [-]

Lumifer wrote, "Pretty much everyone does that almost all the time." I just figured that given what we know of heuristics and biases, there exists a charitable interpretation of the assertion that makes it true. Since the meat of the matter was about deliberate subversion of a clear-eyed assessment of the evidence, I didn't want to get into the weeds of exactly what Lumifer meant.
