Is there a specific bias for thinking that everyone possesses the same knowledge as you? For example, after learning more about a certain subject, I have a tendency to think, "Oh, but everyone already knows this, don't they" even though they probably don't and I wouldn't have assumed that before learning about it myself.
A related concept is "inferential distance" - people can only move one step at a time from what they know.
Also typical mind fallacy.
LessWrong is no longer even large or active enough for downvoting to be necessary. The activity of posts here is similar to Usenet, which had no moderation.
Usenet, which had no moderation
Some newsgroups were (and are) moderated, mostly in the sci.* and soc.* hierarchies.
Also newsreaders had killfiles so you could ignore people.
Same problems for decades: no moderation gives you 4chan; moderation risks an echo chamber.
The Economist reported on the Israeli study too:
http://www.economist.com/node/18557594
The article makes an argument which I find persuasive: that it's not about food as much as it's about difficult decisions tiring the brain. When the brain is tired, it resorts to the easy and safe option.
Check out the Economist article for more.
... and these decisions are difficult. You have very little, poor quality information, you are constantly lied to, you get very little feedback on how your decisions went, and any feedback you do get is delayed and noisy.
The crazier, more-expensive, and more-difficult the method is, the more improvement it should show; craziness should filter out less-committed parents.
Montessori
Your main point may well be valid; I think it probably is. But my daughter attended a Montessori kindergarten (but not a Montessori school) and I have read Maria Montessori's book. Neither seemed at all crazy to me.
The Montessori method is to engage children in activities which are challenging but not discouragingly so. Each activity produces a small increment in a skill. The children seem to become absorbed in the activities and find them very rewarding. In the adult world this would probably be something like "deliberate practice".
This idea of learning skills in small increments, in the sweet spot between "too easy, so you learn nothing" and "too hard, so you learn nothing and get discouraged", has wide applicability to children and adults. For example, after almost a year of conventional swimming lessons my daughter still could not swim, so I tried applying this method to swimming.
Swimming of course requires you to do several things at once. If you don't do them all you get a mouth full of water and learn very little.
I bought her a buoyancy vest and fins. She learned to swim with these very quickly. After a while we deflated the vest progressively and she again learned to swim that way, being now responsible for staying afloat. Then we took away the fins and she mastered that quickly. After a few lessons she was a confident swimmer. This was a very dramatic result. Back at the swim school they were surprised she could now swim, but were totally uninterested in how we achieved this.
The Montessori children seem to end up with excellent powers of concentration; that is certainly the case with my daughter. I did hear of a study that found that this was the most prominent effect of the Montessori schools. I would suggest they are worth looking at, but I would check that they are actually following the method.
I do care about how one becomes a successful solo practitioner in the field of California consumer law, but they don't exactly have databases about that.
Here's what I do:
Look at the job type you want. Look at professional websites of the people who have the jobs you want. Look at their CV's and see how they got there. If any of that info is expressible in a quantitative form (e.g. percent who went to top ten law schools) tabulate that.
You might notice "Oh, wait, most people who have the job I want have background X that I don't have!" (Different college major or whatever.) That might be evidence that you can't get that job; but before you start worrying, send someone an email and ask "How likely is it for me to get a job like yours with background Y instead of X?" It may be that your background is unusual but not a handicap.
Is it less rigorous than a scientific study? You bet. Is it better than nothing? Much.
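The tabulation step above takes only a few lines once you have the data in hand. A minimal sketch, where the categories and entries are entirely made up for illustration, not real CV data:

```python
# Hypothetical tally of backgrounds collected by hand from professional
# websites / CVs. The categories and counts below are invented.
from collections import Counter

backgrounds = [
    "top-ten law school", "top-ten law school", "regional law school",
    "top-ten law school", "regional law school", "other",
]

counts = Counter(backgrounds)
pct_top_ten = counts["top-ten law school"] / len(backgrounds)
print(f"{pct_top_ten:.0%} went to top-ten law schools")
```

Even a crude tally like this tells you whether your background is typical, rare-but-present, or absent among the people who hold the job you want.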
If you have access to the attention of lots of professionals, a homemade poll can be very illuminating even if it's informal. For example, this survey about how novelists get published is more informative than most "how to be a writer" advice out there.
see how they got there
It would also be worthwhile to look at people who did those things and see how they ended up, i.e., to look from the other side.
For example if you look at rock musicians you are likely to find they neglected their studies and focused entirely on their music. But this seems to be a strategy with a pretty low expectation and very high variance in outcomes.
Actually looking at data is simple, easy, and straightforward, and yet almost no one actually does it. Here's another one: Adjusted for inflation, what is the average, long-term appreciation of the stock market? Here's the historical Dow Jones index, and here's an inflation calculator. Try it and see!
Be careful here:
The DJIA index excludes dividends, which historically produce about half of the total return.
The US market was the top performing stock market of the 1900s. So you are looking at a very unrepresentative sample of 1.
Several markets went to zero during the 20th century, e.g. Russia after the revolution.
The market return ignores buy/sell costs and slippage, and taxes. Even buying the index necessarily involves some buying and selling as stocks come and go.
All in all, the ~10% gross nominal return turns into ~3-4% after taking off CPI, using the average market not the top performing one, and taking into account costs and slippage.
In 1900 the stock market was rightly regarded as a crooked casino, so sensible people invested in bonds. So there is potentially a hindsight bias here too - why are we not looking at the investment that was most in favor at the time?
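The compounding arithmetic behind that haircut is easy to check yourself. A minimal sketch in Python, where the nominal return, CPI, and cost figures are assumed round numbers chosen for illustration, not measured values:

```python
# Illustration: how a ~10% gross nominal return shrinks once inflation
# and costs are taken off. All rates below are assumptions.

def compound(rate, years):
    """Growth factor of $1 after `years` at annual `rate`."""
    return (1 + rate) ** years

nominal = 0.10     # assumed gross nominal return, dividends included
inflation = 0.03   # assumed average CPI
costs = 0.03       # assumed drag: buy/sell costs, slippage, taxes

real = (1 + nominal) / (1 + inflation) - 1   # inflation-adjusted return
real_net = (1 + real) / (1 + costs) - 1      # after costs as well

print(f"real return:           {real:.1%}")
print(f"real return after costs: {real_net:.1%}")
print(f"$1 after 30y, nominal:   {compound(nominal, 30):.1f}")
print(f"$1 after 30y, net real:  {compound(real_net, 30):.1f}")
```

With these assumed figures the 10% headline return lands in the 3-4% net real range, and the difference compounds dramatically over 30 years.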
DIY science is harder than it looks. But still worthwhile if you do the homework.
This is a very important concern that I have too. I have not read the book, and it might be a very interesting read, but when it starts with:
No matter how “fuzzy” the measurement is, it’s still a measurement if it tells you more than you knew before.
That concerns me, because business is already full of dubious metrics that actually do harm. For instance, in programming, source lines of code (SLOC) per month is one metric used to gauge 'programmer productivity', but it has come under extreme and rightful skepticism.
Scientific methods are powerful when used properly, but a little knowledge can be a dangerous thing.
Yes he is all over this.
In the TQM world this comes under the heading "any metric used to reward people will become corrupt". W. Edwards Deming was writing about this issue in the 1960s or earlier. For this reason he advocated separating the data collection used to run the business from the data collection used to reward people. Too often, people decide this is "inefficient" and combine the two, with predictable results. Crime statistics in the US are one terrible example of this.
From my recollection of the book I think he would say that SLOC is not actually a terrible metric and can be quite useful. I personally use it myself on my own projects - but I have no incentive to game the system. If you start paying people for SLOC you are going to get a lot of SLOCs!
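For personal tracking, counting SLOC takes only a few lines. A minimal sketch, assuming Python source files with `#`-style comments; this is not any standard tool, and a serious count would also handle docstrings and multi-line strings:

```python
# Minimal SLOC counter for personal use (not for rewarding people!).
# Counts non-blank, non-comment lines across a source tree.
from pathlib import Path

def sloc(root: str, suffix: str = ".py") -> int:
    total = 0
    for src in Path(root).rglob(f"*{suffix}"):
        for line in src.read_text(errors="ignore").splitlines():
            stripped = line.strip()
            # Skip blank lines and full-line comments.
            if stripped and not stripped.startswith("#"):
                total += 1
    return total
```

Used only as a private trend line on your own projects there is no one to game it, which is exactly the separation Deming argued for.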
Because of the history, you need to go overboard to reassure people you will not use metrics against them. They are going to assume you will use them against them, until proven otherwise.
No, no, no. Three problems, one in the analogy and two in the probabilities.
First, an individual particle can briefly exceed the speed of light; the *group* velocity cannot. Go read up on Cerenkov radiation: It's the blue glow created by (IIRC) neutrons briefly breaking through c, then slowing down. The decrease in energy registers as emitted blue light.
Second: conditional probabilities are not necessarily given by a ratio of densities. You're conditioning on (or working with) events of measure-zero. These puzzlers are why measure theory exists -- to step around the seeming 'inconsistencies'.
Third: The probability of a probability is superfluous. Probabilities are (thanks to Kolmogorov) just the expectation of indicator variables. Thus P(P(*)=1) = E(I(E(I(*))=1)) = 0 or 1; the randomness is all eliminated by the inside expectation.
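Spelled out, the indicator argument in that third point is:

```latex
P(A) = \mathbb{E}[\mathbf{1}_A]
\quad\Longrightarrow\quad
P\!\left(P(A)=1\right)
  = \mathbb{E}\!\left[\mathbf{1}_{\{\mathbb{E}[\mathbf{1}_A]\,=\,1\}}\right]
  \in \{0, 1\}
```

since the inner expectation is just a fixed number, the outer indicator is a constant, and its expectation is either 0 or 1.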
Leave the musings on probabilities to the statisticians; they've already thought about these supposed paradoxes.
Cerenkov radiation: It's the blue glow created by (IIRC) neutrons briefly breaking through c
I thought it was due to particles exceeding the phase velocity of light in a medium, which is invariably slower than c. The particle is still going slower than c (and per the quoted passage below it must be charged, which a neutron is not):
Wikipedia
While electrodynamics holds that the speed of light in a vacuum is a universal constant (c), the speed at which light propagates in a material may be significantly less than c. For example, the speed of the propagation of light in water is only 0.75c. Matter can be accelerated beyond this speed (although still to less than c) during nuclear reactions and in particle accelerators. Cherenkov radiation results when a charged particle, most commonly an electron, travels through a dielectric (electrically polarizable) medium with a speed greater than that at which light propagates in the same medium.
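The condition in the quoted passage can be written compactly: a charged particle of speed v in a medium of refractive index n radiates when

```latex
\frac{c}{n} \;<\; v \;<\; c
```

For water, n is about 1.33, so the threshold is c/1.33, approximately 0.75c, matching the figure in the quote.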
What the heck? Humans who lived past infancy lived far longer than 16 years in the ancestral environment; it was very high infant mortality that brought down the average life expectancy.
"The typical human tribe" would not have gone around murdering whole other tribes... there is no evidence for that and that is not what modern isolated hunter gatherers do either.
there is no evidence for that and that is not what modern isolated hunter gatherers do either.
I came across plenty of examples in my studies of anthropology. Of course it depends what you mean by "tribe"; really large-scale violence requires a certain amount of technology. As an example, "Yanomamo: The Fierce People" by Chagnon details some such incidents and suggests they were not unusual. Well, actually the men and children were killed; the nubile women were kept alive for <reasons>.
See also the Torah / Old Testament for numerous genocides, though these were bronze/iron age people and also the historicity of the incidents is disputed.
This was not universal - the Kalahari Bushmen (now called the San people) did not do this, perhaps in part because their main weapon was a slow acting poison dart. An all-out war would kill everyone involved.
But rates of violent death among males in hunter/gatherer societies were extremely high, often in the 30-50% range, as documented by early anthropologists (from reconstructing family trees).
I've been thinking about what seems to be the standard LW pitch on AI risk. It goes like this: "Consider an AI that is given a goal by humans. Since 'convert the planet into computronium' is a subgoal of most goals, it does this and kills humanity."
The problem, which various people have pointed out, is that this implies an intelligence capable of taking over the world, but not capable of working out that when a human says pursue a certain goal, they would not want this goal to be pursued in a way that leads to the destruction of the world.
Worse, the argument can then be made that this idea of an AI interpreting goals so literally, without modelling a human mind, amounts to an "autistic AI", and that only autistic people would assume an AI would be similarly autistic. I do not endorse this argument in any way, but I guess it's still better to avoid arguments that signal low social skills, all other things being equal.
Is there any consensus on what the best 'elevator pitch' argument for AI risk is? Instead of focusing on any one failure mode, I would go with something like this:
"Most philosophers agree that there is no reason why superintelligence is not possible. Anything which is possible will eventually be achieved, and so will superintelligence, perhaps in the far future, perhaps in the next few decades. At some point, superintelligences will be as far above humans as we are above ants. I do not know what will happen at this point, but the only reference case we have is humans and ants, and if superintelligences decide that humans are an infestation, we will be exterminated."
Incidentally, this is the sort of thing I mean by painting LW style ideas as autistic (via David Pierce)
Sometimes David Pierce seems very smart. And sometimes he seems to imply that the ability to think logically while on psychedelic drugs is as important as 'autistic intelligence'. I don't think he believes that autistic people are zombies lacking subjective experience, but that does also seem implied.
One perhaps useful analogy for super-intelligence going wrong is corporations.
We create corporations to serve our ends. They can do things we cannot do as individuals. But in subtle and not-so-subtle ways corporations can behave very destructively: pursuing profit at the cost, in some cases, of ruining people's lives, damaging the environment, or corrupting the political process.
By analogy it seems plausible that super-intelligences may behave in a way that is against our interests.
It is not valid to assume that a super-intelligence will be smart enough to discern true human interests, or that it will be motivated to act on this knowledge.