
Comment author: XFrequentist 30 October 2014 04:15:54AM 1 point

Ooh ooh, do mine!

Comment author: Cyan 30 October 2014 12:36:35PM *  1 point

Same special-snowflake level credible limits, but for different reasons. Swimmer963 has an innate drive to seek out and destroy (whatever she judges to be) her personal inadequacies. She wasn't very strategic about it in teenager-hood, but now she has the tools to wield it like a scalpel in the hands of a skilled surgeon. Since she seems to have decided that a standard NPC job is not for her, I predict she'll become a PC shortly.

You're already a PC; your strengths are a refusal to tolerate mediocrity in the long-term (or let us say, in the "indefinite" term, in multiple senses) and your vision for controlling and eradicating disease.

Comment author: Cyan 30 October 2014 01:57:25AM 1 point

FWIW, in my estimation your special-snowflake-nature is somewhere between "more than slightly, less than somewhat" and "potential world-beater". Those are wide limits, but they exclude zero.

Comment author: CronoDAS 28 October 2014 10:08:29AM 2 points

I don't want to embarrass my girlfriend.

Comment author: Cyan 29 October 2014 06:25:12AM *  2 points

Hikikomori no more? If so (as seems likely what with the girlfriend and all), it gladdens me to hear it.

Comment author: Cyan 22 October 2014 03:50:15AM 4 points

In the biz we call this selection bias. The most fun example of this is the tale of Abraham Wald and the Surviving Bombers.
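The effect Wald spotted can be shown in a toy simulation (all section names and survival probabilities below are invented for illustration): hits land uniformly over the aircraft, but planes hit in critical sections rarely return, so the sample of returning planes under-represents exactly the places where hits are deadliest.

```python
import random

random.seed(0)

SECTIONS = ["engine", "cockpit", "fuselage", "wings"]
# Hypothetical survival probability given a hit in each section;
# planes hit in critical sections mostly don't come back.
SURVIVAL = {"engine": 0.3, "cockpit": 0.4, "fuselage": 0.95, "wings": 0.9}

def observed_hits(n_planes=100_000):
    """Count hits per section, but only on planes that return."""
    counts = {s: 0 for s in SECTIONS}
    for _ in range(n_planes):
        section = random.choice(SECTIONS)   # hits land uniformly
        if random.random() < SURVIVAL[section]:  # did the plane return?
            counts[section] += 1
    return counts

counts = observed_hits()
# Fuselage and wing hits dominate the returning-plane sample even
# though hits were uniform -- the engine hits are "missing".
print(counts)
```

Naively armoring where the returning planes show the most holes gets it exactly backwards; Wald's insight was to armor where the survivors *weren't* hit.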

Comment author: Cyan 19 October 2014 01:39:26AM 2 points

I was working in protein structure prediction.

I confess to being a bit envious of this. My academic path after undergrad biochemistry took me elsewhere, alas.

In response to The Level Above Mine
Comment author: AshwinV 06 October 2014 03:41:44AM 0 points

Can anyone tell me whether Jaynes' book can be read and understood without any particular formal training? I do know the basic concepts of probability, and I usually score around the 85th percentile on math tests... And how hard/time-consuming exactly will the book be? I am employed in a somewhat high pressure job on a full time basis...

Comment author: Cyan 06 October 2014 04:33:02AM *  6 points

Try it -- the first three chapters are available online here. The first one is discursive and easy; the math of the second chapter is among the most difficult in the book and can be safely skimmed; if you can follow the third chapter (the first to present extensive probability calculations per se) and you understand probability densities for continuous random variables, then you'll be able to understand the rest of the book without formal training.

Comment author: EHeller 03 October 2014 05:11:55AM 3 points

I don't know any LW-ers in person, but I'm sure that at least some people have benefited from reading the sequences.

You miss my meaning. The stated core goal of MIRI/the old SIAI is to develop friendly AI. With regards to that goal, the sequences are advertising.

With regards to their core goal, the sequences matter if 1. they lead to people donating to MIRI 2. they lead to people working on friendly AI.

I view point 1 as advertising, and I think research papers are obviously better than the sequences for point 2.

Comment author: Cyan 03 October 2014 05:53:31AM *  2 points

The stated core goal of MIRI/the old SIAI is to develop friendly AI. With regards to that goal, the sequences are advertising.

Kinda... more specifically, a big part of what they are is an attempt at insurance against the possibility that there exists someone out there (probably young) with more innate potential for FAI research than EY himself possesses but who never finds out about FAI research at all.

Comment author: ChristianKl 02 October 2014 03:41:02PM 1 point

I would say that I don't do that, but then I'd pretty obviously be allowing the way I desire the world to be to influence my assessment of the actual state of the world.

How do you know?

Especially since falsely holding that belief would be an example.

Comment author: Cyan 02 October 2014 03:52:48PM *  1 point

Lumifer wrote, "Pretty much everyone does that almost all the time." I just figured that given what we know of heuristics and biases, there exists a charitable interpretation of the assertion that makes it true. Since the meat of the matter was about deliberate subversion of a clear-eyed assessment of the evidence, I didn't want to get into the weeds of exactly what Lumifer meant.

Comment author: KatjaGrace 02 October 2014 12:37:43AM 2 points

Even if it is a huge leap to achieve that, until you run the computations, it is unclear to me how they could have contributed to that leap.

Comment author: Cyan 02 October 2014 01:51:36PM *  0 points

But we do run biological computations (assuming that the exercise of human intelligence reduces to computation) to make em technology possible.

Since we're just bouncing short comments off each other at this point, I'm going to wrap up now with a summary of my current position as clarified through this discussion. The original comment posed a puzzle:

Brain emulations seem to represent an unusual possibility for an abrupt jump in technological capability, because we would basically be ‘stealing’ the technology rather than designing it from scratch. ...If this is an unusual situation however, it seems strange that the other most salient route to superintelligence - artificial intelligence designed by humans - is also often expected to involve a discontinuous jump in capability, but for entirely different reasons.

The commonality is that both routes attack a critical aspect of the manifestation of intelligence. One goes straight for an understanding of the abstract computation that implements domain-general intelligence; the other goes at the "interpreter", physics, that realizes that abstract computation.

Comment author: KatjaGrace 01 October 2014 09:18:06PM 2 points

The impact on exercise of intelligence doesn't seem to come until the ems are already discontinuously better (if I understand), so can't seem to explain the discontinuous progress.

Comment author: Cyan 01 October 2014 11:18:27PM 1 point

Making intelligence-implementing computations substrate-independent in practice (rather than just in principle) already expands our capabilities -- being able to run those computations in places pink goo can't go and at speeds pink goo can't manage is already a huge leap.
