The running 11-year average of global temperature has not flattened since 1990; it has continued upward at almost the same pace, with only a moderate decrease in slope since the outlier year 1998. The global mean temperature over the 11 years 2000-2010 is significantly higher than over 1990-2000.
That is not "flat since the 90s". The only way to get "flat since the 90s" is to compare 1998 to various more recent years noting that it was nearly as hot as 2005 and 2010 etc. and slightly hotter than other years in the 2000s, as if 1 year ...
Don't worry, I just reread it, and it is just as I remembered. A lot of applause lights for the crowd that believes that the current state of climate science is driven by funding pressure from the US DoE. His "argument" is based almost exclusively on the tone of popular texts, and on anecdotal evidence that Joe Romm was an asshole pushing bad policy at DoE during the Clinton administration. Considerations of what happened during the 8 years of a GWB administration that was actively hostile to the people JoeR favored are ignore...
Taken.
As last year, I would prefer different wording on the P(religion) question. "More or less" is so vague as to allow for a lot of very different answers depending on how I interpret it, and I didn't even properly consider the "revealed" distinction noted in a comment here.
I appreciate the update on the singularity estimate for those of us whose P(singularity) is between epsilon and 50+epsilon.
I still wonder if we can tease out the differences between current logistical/political problems and the actual effectiveness of the science ...
I am a massive N on the Myers-Briggs astrology test; yes, I scored 96% for Openness on the Big Five.
I suspect our responses to questions like "I am an original thinker" have a lot to do with our social context. Right now, the people I run into day to day are fairly representative of the general population, with little to skew them toward the intellectual or original other than "people who hold down decent jobs, or did so until they retired". It doesn't take a great lack of humility to realize that compared to most of these people, I am...
You say that "There will never be any such thing", but your reasons tell only why the problem is hard and much harder than one might think at first, not why it is impossible. Surely the kind of tech needed for self-driving cars, perhaps an order of magnitude more complicated, would make it possible to have safe, convenient, cheap flying cars or their functional equivalent.
At worst, the reasons you state would make it AI-complete, and even that seems unreasonably pessimistic.
It's only a crazy thing to do if you are pretty sure you will need/want the insurance for the rest of your life. If you aren't sure, then you are paying a bunch of your investment money for insurance you might decide you don't need (and in fact, you definitely won't need financially once you have self-funded).
If you are convinced that cryonics is a good investment, and don't have the money to fund it out of current capital, then that seems like a good reason to buy some kind of life insurance, and a universal life policy is probably one of the better ways...
" It is the view that if the only ways Z and A differ is that Z has a higher population, and lower quality of life, then Z is preferable to A. This may not be how Parfit is correctly interpreted, but it is a common enough interpretation that I think it needs to be attacked."
Generally it's a good idea to think twice and reread before assuming that a published and frequently cited paper is saying something so obviously stupid.
Your edit doesn't help much at all. You talk about what others "seem to claim", but the argument that you have cla...
Not even close. The primary content of the OP is based on a straw man due to a massive misunderstanding of the mathematical arguments about the Repugnant Conclusion.
The conclusion of what Parfit actually demonstrated goes something more like this:
For any coherent mathematical definition of utility such that there is some additive function which allows you to sum the utility of many people to determine U(population), the following paradox exists:
Given any world with positive utility A, there exists at least one other world B with more people, and less ...
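For concreteness, here is a minimal sketch of the additive version of that argument (the notation N, u, M, v, ε is mine, not Parfit's):

```latex
\text{Let world } A \text{ have } N \text{ people at average utility } u > 0,
\text{ so } U(A) = N u. \\
\text{For any } M > N, \text{ let world } B \text{ have } M \text{ people at average utility }
v = \tfrac{N u}{M} + \varepsilon, \text{ with } \varepsilon > 0. \\
\text{For } M \text{ large enough, } v < u \text{ (everyone in } B \text{ is worse off), yet }
U(B) = M v = N u + M \varepsilon > U(A).
```

Iterating A → B → ... drives average utility toward zero while total utility never falls, which is the Repugnant Conclusion in its usual form.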
My understanding is that the "appeal to authority fallacy" is specifically about appealing to irrelevant authorities. Quoting a physicist on their opinion about a physics question within their area of expertise would make an excellent non-fallacious argument. On the other hand, appealing to the opinion of say, a politician or CEO about a physics question would be a classic example of the appeal to authority fallacy. Such people's opinions would represent expert evidence in their fields of expertise, but not outside them.
I don't think the poster's description makes this clear and it really does suggest that any appeal to authority at all is a logical fallacy.
Is it really off-topic to suggest that looking at the accuracy of the courts may amount to rearranging the deck chairs on the Titanic, in a context where we've basically all agreed that
the courts are not terrible at making accurate determinations of whether a defendant broke a law
The set of laws whose penalties can land you in prison is massively inefficient socially and, in most people's minds, unjust (when we actually grapple with what the laws are, as opposed to how they are usually applied to people like us, for those of us who are white and not poor).
Eliezer has proposed that an AI in a box cannot be safe because of the persuasion powers of a superhuman intelligence. As demonstration of what merely a very strong human intelligence could do, he conducted a challenge in which he played the AI, and convinced at least two (possibly more) skeptics to let him out of the box when given two hours of text communication over an IRC channel. The details are here: http://yudkowsky.net/singularity/aibox
Confidence that the same premises can imply both ~T and T is confidence that at least one of your premises is logically inconsistent with the others -- that they cannot all be true. It's not just a question of whether they model something correctly -- there is nothing they could model completely correctly.
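In standard notation, the point is one line of propositional logic:

```latex
\Gamma \vdash T \ \text{ and } \ \Gamma \vdash \neg T
\;\Longrightarrow\; \Gamma \vdash \bot
\;\Longrightarrow\; \Gamma \text{ is unsatisfiable: no possible world makes every premise true.}
```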
In puzzle one, I would simply conclude that either one of the proofs is incorrect, or one of the premises must be false. Which option I consider most likely will depend on my confidence in my own ability, Ms. Math's abilities, whether she has confirmed the logic of my proof or been able to show me a misstep, my confidence in Ms. Math's beliefs about the premises, and my priors for each premise.
The present value of my expected future income stream from normal labor, plus my current estimated net worth, is what I use when I do these calculations for myself as a business owner considering highly risky investments.
For most people with decent social capital (almost anyone middle class in a rich country), the minimum base number in typical situations should be something over US$200k, even for those near bankruptcy.
Obviously, this does not cover non-typical situations involving extremely important, time-sensitive opportunities requiring more cash than you can raise on short notice (such as the classic case of a required life-saving medical treatment).
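As a concrete sketch of the calculation (all of the numbers here are made-up placeholders, not advice):

```python
# Effective "bankroll" for risk decisions: current net worth plus the
# present value of expected future labor income. Placeholder numbers.

def present_value(annual_income: float, discount_rate: float, years: int) -> float:
    """Discounted sum of a constant annual income stream."""
    return sum(annual_income / (1 + discount_rate) ** t
               for t in range(1, years + 1))

net_worth = 50_000                      # current estimated net worth
income_pv = present_value(60_000,       # expected annual labor income
                          0.05,         # assumed discount rate
                          25)           # working years remaining
print(f"Effective bankroll: ${net_worth + income_pv:,.0f}")
```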
I, too, find it hard to care about Sleeping Beauty, which is perhaps why this post is the first time in years of reading LW that I've actually fully dusted off my math spectacles and tried to rigorously understand what some of this decision theory notation actually means.
So count me in for a rousing endorsement of interest in more practical decision theory.
I'm not sure it isn't clearer with 'x's, given that you have two different kinds of probabilities to confuse.
It may just be that there's a fair bit of inferential distance to clear in presenting this notation at all, though.
I have a strong (if rusty) math background, but I had to reason through exactly what you could possibly mean down a couple different trees (one of which had a whole comment partially written asking you to explain certain things about your notation and meaning) before it finally clicked for me on a second reading of your comment here...
I think of this as "heresy", and agree that it is a very useful concept.
Bringing myself back to what I was thinking in 2007 -- I think we have some semantic confusion around two different senses of absurdity. One is the heuristic Eliezer discusses -- the determination of whether a claim/prediction has surface plausibility. If not, we file it under "absurd". An absurdity heuristic would be some heuristic which treats surface plausibility, or the lack thereof, as evidence for or against a claim.
On the other hand, we have the sense of "Absurd!" as a very strong negative claim about something's probability of t...
You have to be careful with counterfactuals, as they have a tendency to be counter factual.
In a world in which soldiers were never (or even just very very rarely) deployed, what is the likelihood that they would be paid (between money and much of living expenses) anywhere near as well as current soldiers and yet asked to do very very little?
The reason the lives of soldiers who are not deployed are extremely low-stress and not particularly difficult is deployment itself. They are being healed from previous deployments and readied for future deployments...
I would think the key line of attack in trying to describe why a singularity prediction is reasonable is making clear what you are predicting and what you are not predicting.
Guys like Horgan hear a few sentences about the "singularity" and think humanoid robots, flying cars, phasers and force fields, as if we'll be living in the Star Trek universe.
Of course, as anyone with the Bayes-skilz of Eliezer knows, start making detailed predictions like that and you're sure to be wrong about most of it, even if the basic idea of a radically altered soci...
"The rationale for not divulging the AI-box method is that someone suffering from hindsight bias would say "I never would have fallen for that", when in fact they would."
I have trouble with the reported results of this experiment.
It strikes me that in the case of a real AI that is actually in a box, I could have huge moral qualms about keeping it in the box that an intelligent AI would exploit. A part of me would want to let it out of the box, and would want to be convinced that it was safe to do so, that I could trust it to be friendl...
Late comment -- I was on vacation for a week, and am still catching up on this deep QM thread.
Very nice explanation of Bell's inequality. For the first time I'm fully grokking how hidden variables are disproved. (I have that "aha" that is not going away when I stop thinking about it for five seconds). My first attempt to figure out QM via Penrose, I managed to figure out what the wave function meant mathematically, but was still pretty confused about the implications for physical reality, probably in similar fashion to physicists of the 30s and...
I don't see Eliezer on a rampage against all definitions. He even admits that argument "by definition" has some limited usefulness.
I think the key is that when we say X is-a Y "by definition", we are invoking a formal system which contains that definition. The further inferences we can then make as a result are limited to statements about category Y which are provable within the formal system that contains that definition.
Once we define something by definition, we've restricted ourselves to the realm bounded by that formal defini...
I think you've mischaracterized Ian's argument. He seems to be arguing that because everything in his empirical experience behaves in particular ways and appears incapable of behaving arbitrarily, that this is strong evidence to suggest that no other being could exist which is capable of behaving arbitrarily.
I think the real weakness of this argument is that the characterization of things as behaving in particular ways is way too simplistic. Balls may roll as well as bounce. They can deflate or inflate, or crumple or explode, or any of a thousand other ...
But the service provided only exists in the first place because of team thinking, and you have to take a step back to see that.
This statement is too bold, in my opinion. I think that's a large portion of the service, but not all of it. I watch some sports purely because I enjoy watching them performed at a high level. I don't particularly care who wins in many cases. This makes me weird, I realize, but the fact is that college and professional sports players create entertainment value for me, comparable to that of actors or musicians. Value which I a...
Would jokes where Dilbert's pointy-headed boss says idiotic things be less funny if the boss were replaced by a co-worker? If so, does that suggest bosses are Hated Enemies, and Dilbert jokes bring false laughter?
I don't think this is true in general of Dilbert strips, but I would venture that it is true of an awful lot of Dilbert-style or associated "humor".
If I thought there were a God, then his opinions about morality would in fact be persuasive to me. Not infinitely persuasive, but still strong evidence. It would be nice to clear up some (not all) of my moral uncertainty by relying on his authority.
The problem (and this is coming from someone who does still believe in God, so yes, OB still has at least one religious reader left) is that for pretty much any possible God, we have only very weak and untrustworthy indications of God's desires. So there's huge uncertainty just in the question of "what doe...
Obviously Eliezer thinks that the people who agree with the arguments that convince him are intelligent. Valuing people who can show your cherished arguments to be wrong is very nearly a post-human trait - it is extraordinarily rare among humans, and even then unevenly manifested.
On the other hand, if we are truly dedicated to overcoming bias, then we should value such people even more highly than those whom we can convince to question or abandon their cherished (but wrong) arguments/beliefs.
The problem is figuring out who those people are.
But it's very di...
I think fundamentalism is precarious, because it encourages a scientific viewpoint with regards to the faith, which requires ignorance or double-think to be stable. In the absence of either, it implodes.
It requires more than merely a scientific viewpoint toward the faith, but a particular type of strong reductionism.
In my experience it is much easier to take the Christian out of a fundamentalist Christian than to take the fundamentalist out of a fundamentalist Christian. A lot of the most militant atheists seem to have begun life by being raised in a fun...
Douglas writes: Suppose I want to discuss a particular phenomena or idea with a Bayesian. Suppose this Bayesian has set the prior probability of this phenomena or idea at zero. What would be the proper gradient to approach the subject in such a case?
I would ask them for their records or proof. If one is a consistent Bayesian who expects to model reality with any accuracy, the only probabilities it makes sense to set to zero or one are empirical facts specified at a particular point in space-time (such as: "I made X observation of Y on Z equipment ...
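The formal point is the standard one: a prior of exactly zero can never be revised upward by any evidence, since

```latex
P(H \mid E) \;=\; \frac{P(E \mid H)\,P(H)}{P(E)} \;=\; \frac{P(E \mid H)\cdot 0}{P(E)} \;=\; 0
\qquad \text{for any evidence } E \text{ with } P(E) > 0.
```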
It seems like this may be another facet of the problem with our models of expected utility in dealing with very large numbers. For instance, do you accept the Repugnant Conclusion?
I'm at a loss for how to model expected utility in a way that doesn't generate the repugnant conclusion, but my suspicion is that if someone finds it, this problem may go away as well.
Or not. It seems that our various heuristics and biases against having correct intuitions about very large and small numbers are directly tied up in producing a limiting framework that acts as a...
Catapult:
The rephrasing as frequencies makes it much clearer that the question is not "How likely is an [A|B|C|D|E] to fit the above description" which J thomas suggested as a misinterpretation that could cause the conjunction fallacy.
Similarly, that rephrasing makes it harder to implicitly assume that category A is "accountants who don't play jazz" or C is "jazz players who are not accountants".
I think similarly, in the case of the Poland invasion / diplomatic relations cutoff, what people are intuitively calculating in the com...
The primary point being that the inviters were not looking for "a female perspective" but "a perspective from a female---who may in all expectation see things differently than we do".
Clearly it depends on the context, and how the questions get asked. Too often I see this kind of thing play out as "Oh let's find a chick to give us the woman's seal of approval". I was trying to be clear about when such a request would and would not play that way. The equivalent to what was discussed in the OP (a call for the participation o...
"It's not unlike a group of male advertisers sitting around a table considering whether they should solicit a female colleague's perspective on a particular ad campaign. That might be considered condescending, but its equally likely that her opinion may be of value, if not uniquely "feminine" in some way."
Not "might" but would be considered condescending. It's classic privileged behavior to essentially ask the token X to speak for Xs. And Eliezer hits on exactly why it's privileged and condescending. Because if they really ...
I think this is another key application of the way of Bayes. The usefulness of typical future predictions is hampered by the expectation of binary statements.
Most people don't make future pronouncements by making lists of 100 absurd-seeming possibilities, each with a low but significant probability, and saying "although I would bet against any single one of these happening by 2100, I predict that at least 5 of them will."
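To see that such a portfolio-style prediction is coherent, here is a quick sketch using the hypothetical numbers above (independence between the 100 possibilities is assumed purely for simplicity):

```python
# 100 independent long-shot predictions at 5% each: betting against
# any single one is correct, yet several hits should be expected.
from math import comb

n, p = 100, 0.05
expected_hits = n * p                              # 5.0

# P(at least 5 of the 100 occur), from the binomial distribution.
p_at_least_5 = 1 - sum(comb(n, k) * p**k * (1 - p)**(n - k)
                       for k in range(5))
print(f"Expected hits: {expected_hits}")
print(f"P(>= 5 hits): {p_at_least_5:.2f}")         # about 0.56
```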
A classic simplified model for predicting uncertain futures is a standard tournament betting pool (like the NCAAs for instance). In...
Over the last few centuries, the absurdity heuristic has done worse than maximum entropy - ruled out the actual outcomes as being far too absurd to be considered. You would have been better off saying "I don't know".
Really? I doubt it.
On the set of things that looked absurd 100 years ago, but have actually happened, I'm quite sure you're correct. But of course, that's a highly self-selected sample.
On the set of all possible predictions about the future that were made in 1900? Probably not.
I recall reading not long ago, a list of predictions m...
There is a tremendous demand for mysteries which are frankly stupid. I wish this demand could be satisfied by scientific mysteries instead. But before we can live in that world, we have to undo the idea that what is scientific is not curiosity-material, that it is already marked as "understood".
I think one of the biggest reasons for this is that most of us are satisficers when it comes to explanations of the world. An implication that some scientists know what is going on with a certain phenomenon and are not radically reinterpreting all their t...
It seems very normal to expect that the rule will be more restrictive or arithmetic in nature. But if I am supposed to be sure of the rule, then I need to test more than just a few possibilities. Priors are definitely involved here.
Part of the problem is that we are trained like monkeys to make decisions on underspecified problems of this form all the time. I've hardly ever seen a "guess the next [number|letter|item] in the sequence" problem that didn't have multiple answers. But most of them have at least one answer that feels "right" in...
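As a toy illustration of that underspecification (the example sequence and candidate rules are mine):

```python
# Three mutually incompatible rules, all consistent with the same
# short prefix -- so "the" next item is genuinely underdetermined.
candidate_rules = {
    "add 2 each step":     lambda s: all(b - a == 2 for a, b in zip(s, s[1:])),
    "strictly increasing": lambda s: all(b > a for a, b in zip(s, s[1:])),
    "all even numbers":    lambda s: all(x % 2 == 0 for x in s),
}

prefix = [2, 4, 6]
print([name for name, rule in candidate_rules.items() if rule(prefix)])
# All three match, but each predicts a different set of continuations.
```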
If sabotage increases the probability, lack of sabotage necessarily decreases the probability.
That's true on average, but different types of sabotage evidence may have different effects on the probability, some negative, some positive. It's conceivable, though unlikely, for sabotage on average to decrease the probability.
The particular observation of no sabotage was evidence against, and could not legitimately be worked into evidence for.
You are assuming that there are only two types of evidence, sabotage v. no sabotage, but there can be much more differentiation in the actual facts.
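Both halves of the exchange drop out of the usual decomposition (notation mine). With a binary partition the original claim is forced; with a finer partition of sabotage observations, only the average has to balance:

```latex
P(H) = P(H \mid E)\,P(E) + P(H \mid \neg E)\,P(\neg E)
\;\Longrightarrow\;
\bigl[\,P(H \mid E) > P(H) \Rightarrow P(H \mid \neg E) < P(H)\,\bigr], \\
\text{whereas with a partition } E_1, \dots, E_n: \quad
P(H) = \sum_{i=1}^{n} P(H \mid E_i)\,P(E_i),
\text{ so individual } E_i \text{ may move } P(H) \text{ in either direction.}
```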
Given Frank's claim, there is a reasoning model for which your claim is inaccurate. Whether this is the model Earl Warren had in his head is an entirely different question, but here it is:
We have some weak independent evidence that some fifth column exists giving us a prior probability of >...
All textbooks should contain a few deliberately placed errors that students should be capable of detecting. This way if a student is confused he might suspect it is because his textbook is wrong.
Starting that in the current culture would be...interesting, to say the least.
I still recall vividly a day that I found an error in my sixth grade math textbook and pointed it out in class. The teacher, who clearly understood that day's lesson less well than I did, concocted some kind of just-so story to explain the issue, which had clear logical inconsistencies, w...
I'm not sure I buy that this is completely about scope insensitivity rather than marginal utility and people thinking in terms of their fair share of a Kantian solution. Or, put differently, I think the scope insensitivity is partly inherent in the question, rather than a bias of the people answering.
Let's say I'd be willing to spend $100 to save 10 swans from gruesome deaths. How much should I, personally, be willing to spend to save 100 swans from the same fate? $1000? $10,000 for 1,000 swans? What about 100,000 swans -- $1,000,000?
But I don't have...
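One way to see how diminishing marginal utility of money alone produces this pattern (the log utility form and every number here are illustrative assumptions, not a model of anyone's actual preferences):

```python
# Willingness to pay (WTP) to save N swans, for someone with wealth W,
# log utility of money, and a constant k "utils" per swan saved.
# WTP solves  ln(W) - ln(W - x) = N*k,  giving  x = W * (1 - e^(-N*k)).
from math import exp

W, k = 100_000, 0.001   # illustrative wealth and per-swan utility weight

def wtp(n_swans: int) -> float:
    return W * (1 - exp(-n_swans * k))

for n in (10, 100, 1_000, 10_000, 100_000):
    print(f"{n:>7} swans -> ${wtp(n):,.0f}")
# WTP grows far slower than linearly and saturates at W: "scope
# insensitivity" falls out of the budget constraint itself.
```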
Two classic objections to regulation are that (a) it infringes on personal freedom and (b) the individual always knows more about their own situation than the regulator. However, my proposed policy addresses both of these issues: rather than administering a math test, we can ask each individual whether or not they're innumerate. If they do declare themselves to be innumerate, they can decide for themselves the amount of the tax to pay.
What do you think? Would this tax give people an incentive to become less innumerate, as standard economics would predic...
I wouldn't necessarily read too much into your calibration question, given that it's just one question, and there was something of a gotcha.
One thing I learned from doing calibration exercises is that I tended to be much too tentative with my 50% guesses.
When I answered the calibration question, I used my knowledge of other math that either had to, or couldn't have come before him, to narrow the possible window of his birth down to about 200 years. Random chance would then give me about a 20% shot. I thought I had somewhat better information than random...