
Comment author: Cyan 22 October 2014 03:50:15AM 4 points [-]

In the biz we call this selection bias. The most fun example of this is the tale of Abraham Wald and the Surviving Bombers.
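For the curious, here's a toy simulation of the effect (Python, with invented hit and survival probabilities purely for illustration, not the historical data): if hits to the engine are the ones most likely to down a plane, the planes that return show few engine hits, and reading only the returning-plane data naively suggests armoring everything except the engine.

```python
import random

random.seed(0)

SECTIONS = ["engine", "fuselage", "wings", "tail"]
# Probability that a hit to this section downs the plane (invented numbers).
LOSS_PROB = {"engine": 0.8, "fuselage": 0.2, "wings": 0.1, "tail": 0.15}

all_hits = {s: 0 for s in SECTIONS}
returned_hits = {s: 0 for s in SECTIONS}

for _ in range(100_000):
    section = random.choice(SECTIONS)          # every section is hit equally often
    all_hits[section] += 1
    if random.random() > LOSS_PROB[section]:   # the plane survives and returns
        returned_hits[section] += 1

print("Hits on planes that returned:", returned_hits)
print("Hits on all planes that flew:", all_hits)
# Returning planes show few engine hits even though engines were hit as often
# as everything else -- conditioning on survival is what skews the sample.
```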

Comment author: Cyan 19 October 2014 01:39:26AM 2 points [-]

I was working in protein structure prediction.

I confess to being a bit envious of this. My academic path after undergrad biochemistry took me elsewhere, alas.

In response to The Level Above Mine
Comment author: AshwinV 06 October 2014 03:41:44AM 0 points [-]

Can anyone tell me whether Jaynes' book can be read and understood without any particular formal training? I do know the basic concepts of probability, and I usually score around the 85th percentile on math tests... And how hard/time-consuming exactly will the book be? I am employed in a somewhat high pressure job on a full time basis...

Comment author: Cyan 06 October 2014 04:33:02AM *  6 points [-]

Try it -- the first three chapters are available online here. The first one is discursive and easy; the math of the second chapter is among the most difficult in the book and can be safely skimmed; if you can follow the third chapter (which is the first one to present extensive probability calculations per se) and you understand probability densities for continuous random variables, then you'll be able to understand the rest of the book without formal training.

Comment author: EHeller 03 October 2014 05:11:55AM 3 points [-]

I don't know any LW-ers in person, but I'm sure that at least some people have benefited from reading the sequences.

You miss my meaning. The stated core goal of MIRI/the old SIAI is to develop friendly AI. With regards to that goal, the sequences are advertising.

With regards to their core goal, the sequences matter if (1) they lead to people donating to MIRI, or (2) they lead to people working on friendly AI.

I view point 1 as advertising, and I think research papers are obviously better than the sequences for point 2.

Comment author: Cyan 03 October 2014 05:53:31AM *  2 points [-]

The stated core goal of MIRI/the old SIAI is to develop friendly AI. With regards to that goal, the sequences are advertising.

Kinda... More specifically, a big part of what they are is an attempt at insurance against the possibility that there exists someone out there (probably young) with more innate potential for FAI research than EY himself possesses, but who never finds out about FAI research at all.

Comment author: ChristianKl 02 October 2014 03:41:02PM 1 point [-]

I would say that I don't do that, but then I'd pretty obviously be allowing the way I desire the world to be to influence my assessment of the actual state of the world.

How do you know?

Especially since falsely holding that belief would be an example.

Comment author: Cyan 02 October 2014 03:52:48PM *  1 point [-]

Lumifer wrote, "Pretty much everyone does that almost all the time." I just figured that given what we know of heuristics and biases, there exists a charitable interpretation of the assertion that makes it true. Since the meat of the matter was about deliberate subversion of a clear-eyed assessment of the evidence, I didn't want to get into the weeds of exactly what Lumifer meant.

Comment author: KatjaGrace 02 October 2014 12:37:43AM 2 points [-]

Even if it is a huge leap to achieve that, until you run the computations, it is unclear to me how they could have contributed to that leap.

Comment author: Cyan 02 October 2014 01:51:36PM *  0 points [-]

But we do run biological computations (assuming that the exercise of human intelligence reduces to computation) to make em technology possible.

Since we're just bouncing short comments off each other at this point, I'm going to wrap up now with a summary of my current position as clarified through this discussion. The original comment posed a puzzle:

Brain emulations seem to represent an unusual possibility for an abrupt jump in technological capability, because we would basically be ‘stealing’ the technology rather than designing it from scratch. ...If this is an unusual situation however, it seems strange that the other most salient route to superintelligence - artificial intelligence designed by humans - is also often expected to involve a discontinuous jump in capability, but for entirely different reasons.

The commonality is that both routes attack a critical aspect of the manifestation of intelligence. One goes straight for an understanding of the abstract computation that implements domain-general intelligence; the other goes at the "interpreter", physics, that realizes that abstract computation.

Comment author: KatjaGrace 01 October 2014 09:18:06PM 2 points [-]

The impact on exercise of intelligence doesn't seem to come until the ems are already discontinuously better (if I understand), so can't seem to explain the discontinuous progress.

Comment author: Cyan 01 October 2014 11:18:27PM 1 point [-]

Making intelligence-implementing computations substrate-independent in practice (rather than just in principle) already expands our capabilities -- being able to run those computations in places pink goo can't go and at speeds pink goo can't manage is already a huge leap.

Comment author: Lumifer 01 October 2014 04:56:47PM 0 points [-]

You seem to be trying to suggest, through implication and leading questions, that using that additional information in making a judgment in this case is dangerous

No, I'm just struck by how the issue of guilt here turns on mental processes inside someone's mind and not at all on what actually happened in physical reality.

given the data provided above I consider the shipowner negligent ... Do you disagree?

Keep in mind that this parable was written specifically to make you come to this conclusion :-)

But yes, I disagree. I consider the data above to be insufficient to come to any conclusions about negligence.

Comment author: Cyan 01 October 2014 08:00:14PM *  0 points [-]

I'm just struck by how the issue of guilt here turns on mental processes inside someone's mind and not at all on what actually happened in physical reality.

Mental processes inside someone's mind actually happen in physical reality.

Just kidding; I know that's not what you mean. My actual reply is that it seems manifestly obvious that a person in some set of circumstances that demand action can make decisions that careful and deliberate consideration would judge to be the best, or close to the best, possible in prior expectation under those circumstances, and yet the final outcome could be terrible. Conversely, that person might make decisions that the same careful and deliberate consideration would judge to be terrible and foolish in prior expectation, and yet through uncontrollable happenstance the final outcome could be tolerable.
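To put a toy sketch behind that (all payoffs and probabilities invented purely for illustration): a choice that is better in prior expectation can still, on any single trial, come out worse than an alternative that was worse in expectation, so judging the decision by one realized outcome can easily get the quality of the decision backwards.

```python
import random

random.seed(0)

def prudent_choice():
    # Better in prior expectation: 0.9 * 100 + 0.1 * (-500) = 40
    return 100 if random.random() < 0.9 else -500

def reckless_choice():
    # Worse in prior expectation: 0.3 * 100 + 0.7 * 0 = 30
    return 100 if random.random() < 0.3 else 0

trials = 100_000
prudent = [prudent_choice() for _ in range(trials)]
reckless = [reckless_choice() for _ in range(trials)]

print("Average payoff, better-in-expectation choice:", sum(prudent) / trials)
print("Average payoff, worse-in-expectation choice: ", sum(reckless) / trials)

# Roughly one trial in ten, the better decision ends in the worse outcome.
worse = sum(p < r for p, r in zip(prudent, reckless)) / trials
print("Fraction of trials where the better decision looked worse:", worse)
```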

Comment author: KatjaGrace 01 October 2014 07:37:39PM 2 points [-]

I see. In that case, why would you expect applying intelligence to that problem to bring about a predictable discontinuity, but applying intelligence to other problems not to?

Comment author: Cyan 01 October 2014 07:46:21PM 2 points [-]

Because the solution has an immediate impact on the exercise of intelligence, I guess? I'm a little unclear on what other problems you have in mind.

Comment author: simplicio 01 October 2014 06:55:55PM 4 points [-]

completely ignoring the actual outcome seems iffy to me

That's because we live in a world where people's inner states are not apparent, perhaps not even to themselves. So we revert to (a) what a reasonable person would believe, and (b) what actually happened. The latter is unfortunate in that it condemns many who are merely morally unlucky and acquits many who are merely morally lucky, but that's life. The actual bad outcomes serve as "blameable moments". What can I say - it's not great, but better than speculating on other people's psychological states.

In a world where mental states could be subpoenaed, Clifford would have both a correct and an actionable theory of the ethics of belief; as it is I think it correct but not entirely actionable.

I don't know what a "genuine extrapolated prior" is.

That which would be arrived at by a reasonable person (not necessarily a Bayesian calculator, but somebody not actually self-deceptive) updating on the same evidence.

A related issue is sincerity; Clifford says the shipowner is sincere in his beliefs, but I tend to think in such cases there is usually a belief/alief mismatch.

I love this passage from Clifford and I can't believe it wasn't posted here before. By the way, William James mounted a critique of Clifford's views in an address you can read here; I encourage you to do so as James presents some cases that are interesting to think about if you (like me) largely agree with Clifford.

Comment author: Cyan 01 October 2014 07:41:26PM 0 points [-]

That's because we live in a world where... it's not great, but better than speculating on other people's psychological states.

I wanted to put something like this idea into my own response to Lumifer, but I couldn't find the words. Thanks for expressing the idea so clearly and concisely.
