In the biz we call this selection bias. The most fun example of this is the tale of Abraham Wald and the Surviving Bombers.
I was working in protein structure prediction.
I confess to being a bit envious of this. My academic path after undergrad biochemistry took me elsewhere, alas.
Can anyone tell me whether Jaynes' book can be read and understood without any particular formal training? I do know the basic concepts of probability, and I usually score around the 85th percentile on math tests... And how hard/time-consuming exactly will the book be? I am employed in a somewhat high pressure job on a full time basis...
Try it -- the first three chapters are available online here. The first one is discursive and easy; the math of the second chapter is among the most difficult in the book and can be safely skimmed; if you can follow the third chapter (which is the first one to present extensive probability calculations per se) and you understand probability densities for continuous random variables, then you'll be able to understand the rest of the book without formal training.
I don't know any LW-ers in person, but I'm sure that at least some people have benefited from reading the sequences.
You miss my meaning. The stated core goal of MIRI/the old SIAI is to develop friendly AI. With regards to that goal, the sequences are advertising.
With regards to their core goal, the sequences matter if 1. they lead to people donating to MIRI 2. they lead to people working on friendly AI.
I view point 1 as advertising, and I think research papers are obviously better than the sequences for point 2.
The stated core goal of MIRI/the old SIAI is to develop friendly AI. With regards to that goal, the sequences are advertising.
Kinda... more specifically, a big part of what they are is an attempt at insurance against the possibility that there exists someone out there (probably young) with more innate potential for FAI research than EY himself possesses but who never finds out about FAI research at all.
I would say that I don't do that, but then I'd pretty obviously be allowing the way I desire the world to be to influence my assessment of that actual state of the world.
How do you know?
Especially since falsely holding that belief would be an example.
Lumifer wrote, "Pretty much everyone does that almost all the time." I just figured that given what we know of heuristics and biases, there exists a charitable interpretation of the assertion that makes it true. Since the meat of the matter was about deliberate subversion of a clear-eyed assessment of the evidence, I didn't want to get into the weeds of exactly what Lumifer meant.
Even if achieving that is a huge leap, until you actually run the computations, it is unclear to me how they could have contributed to that leap.
But we do run biological computations (assuming that the exercise of human intelligence reduces to computation) to make em technology possible.
Since we're just bouncing short comments off each other at this point, I'm going to wrap up now with a summary of my current position as clarified through this discussion. The original comment posed a puzzle:
Brain emulations seem to represent an unusual possibility for an abrupt jump in technological capability, because we would basically be ‘stealing’ the technology rather than designing it from scratch. ...If this is an unusual situation however, it seems strange that the other most salient route to superintelligence - artificial intelligence designed by humans - is also often expected to involve a discontinuous jump in capability, but for entirely different reasons.
The commonality is that both routes attack a critical aspect of the manifestation of intelligence. One goes straight for an understanding of the abstract computation that implements domain-general intelligence; the other goes at the "interpreter", physics, that realizes that abstract computation.
The impact on exercise of intelligence doesn't seem to come until the ems are already discontinuously better (if I understand), so can't seem to explain the discontinuous progress.
Making intelligence-implementing computations substrate-independent in practice (rather than just in principle) already expands our capabilities -- being able to run those computations in places pink goo can't go and at speeds pink goo can't manage is already a huge leap.
You seem to be trying to suggest, through implication and leading questions, that using that additional information in making a judgment in this case is dangerous.
No, I'm just struck by how the issue of guilt here turns on mental processes inside someone's mind and not at all on what actually happened in physical reality.
given the data provided above I consider the shipowner negligent ... Do you disagree?
Keep in mind that this parable was written specifically to make you come to this conclusion :-)
But yes, I disagree. I consider the data above to be insufficient to come to any conclusions about negligence.
I'm just struck by how the issue of guilt here turns on mental processes inside someone's mind and not at all on what actually happened in physical reality.
Mental processes inside someone's mind actually happen in physical reality.
Just kidding; I know that's not what you mean. My actual reply is that it seems manifestly obvious that a person in some set of circumstances that demand action can make decisions that careful and deliberate consideration would judge to be the best, or close to the best, possible in prior expectation under those circumstances, and yet the final outcome could be terrible. Conversely, that person might make decisions that the same careful and deliberate consideration would judge to be terrible and foolish in prior expectation, and yet through uncontrollable happenstance the final outcome could be tolerable.
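The point that decision quality in prior expectation is independent of any single realized outcome can be made concrete with a toy simulation. This is a minimal sketch, not anything from the original discussion: the odds and payoffs (`p_success`, `payoff_win`, `payoff_lose`) are made up purely for illustration, and `simulate` is a hypothetical helper.

```python
import random

def simulate(p_success, payoff_win, payoff_lose, trials, seed=0):
    """Repeatedly realize the outcome of one fixed decision with known odds."""
    rng = random.Random(seed)
    return [payoff_win if rng.random() < p_success else payoff_lose
            for _ in range(trials)]

# A decision that is good in prior expectation:
# 90% chance of +10, 10% chance of -50.
ev = 0.9 * 10 + 0.1 * (-50)  # expected value = +4, so the choice is sound
outcomes = simulate(p_success=0.9, payoff_win=10, payoff_lose=-50,
                    trials=10_000)

# Yet roughly one time in ten, the same sound decision ends terribly.
bad_fraction = sum(1 for o in outcomes if o == -50) / len(outcomes)
print(f"EV = {ev:+.1f}, fraction of terrible outcomes = {bad_fraction:.2f}")
```

Judging the decision by a single draw from `outcomes` would convict or acquit the decision-maker on luck alone, which is the parable's trap.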
I see. In that case, why would you expect applying intelligence to that problem to bring about a predictable discontinuity, but applying intelligence to other problems not to?
Because the solution has an immediate impact on the exercise of intelligence, I guess? I'm a little unclear on what other problems you have in mind.
I don't want to embarrass my girlfriend.
Hikikomori no more? If so (as seems likely what with the girlfriend and all), it gladdens me to hear it.