All of Michael_Sullivan's Comments + Replies

I wouldn't necessarily read too much into your calibration question, given that it's just one question, and there was something of a gotcha.

One thing I learned from doing calibration exercises is that I tended to be much too tentative with my 50% guesses.

When I answered the calibration question, I used my knowledge of other mathematics that either had to come before him or couldn't have, to narrow the possible window of his birth down to about 200 years. Random chance within the plausible range would then give me about a 20% shot. I thought I had somewhat better information than random... (read more)

The running 11-year average of global temperature has not flattened since 1990, but has continued upward at almost the same pace, with only a moderate decrease in slope after the outlier 1998 year. The 2000-2010 global mean temperature is significantly higher than that of 1990-2000.

That is not "flat since the 90s". The only way to get "flat since the 90s" is to compare 1998 to various more recent years noting that it was nearly as hot as 2005 and 2010 etc. and slightly hotter than other years in the 2000s, as if 1 year ... (read more)

2buybuydandavis
I think an honest eyeball will recognize a plateau in temperatures going back to 2003. It would be the highest plateau, but still a plateau. http://en.wikipedia.org/wiki/File:Satellite_Temperatures.png

Don't worry, I just did reread it, and it is just as I remembered. A lot of applause lights for the crowd that believes that the current state of climate science is driven by funding pressure from the US government DoE. His "argument" is based almost exclusively on the tone of popular texts, and anecdotal evidence that Joe Romm was an asshole and pushing bad policy at DoE during the Clinton administration. Considerations of what happened during the 8 years of a GWB administration that was actively hostile to the people JoeR favored are ignore... (read more)

6[anonymous]
"Flat since the 90s" is a statement about the rate of change of temperature. "8 of 10 hottest years on record [...] have occurred since then" is a statement about the value of the temperature. These are almost entirely unrelated factoids, are completely compatible with one another, and I wish people would stop presenting the latter as some kind of slamdunk refutation of the former. It doesn't support the warmist case, it weakens it.

Taken.

As last year, I would prefer different wording on the P(religion) question. "More or less" is so vague as to allow for a lot of very different answers depending on how I interpret it, and I didn't even properly consider the "revealed" distinction noted in a comment here.

I appreciate the update on the singularity estimate for those of us whose P(singularity) is between epsilon and 50+epsilon.

I still wonder if we can tease out the differences between current logistical/political problems and the actual effectiveness of the science ... (read more)

I am a massive N on the Myers-Briggs astrology test; yes, I scored 96% for openness on the Big Five.

I suspect our responses to questions like "I am an original thinker" have a lot to do with our social context. Right now, the people I run into day to day are fairly representative of the general population, with little to skew toward the intellectual or original other than "people who hold down decent jobs, or did so until they retired". It doesn't take a great lack of humility to realize that compared to most of these people, I am... (read more)

You say that "There will never be any such thing", but your reasons tell only why the problem is hard and much harder than one might think at first, not why it is impossible. Surely the kind of tech needed for self-driving cars, perhaps an order of magnitude more complicated, would make it possible to have safe, convenient, cheap flying cars or their functional equivalent.

At worst, the reasons you state would make it AI-complete, and even that seems unreasonably pessimistic.

2Richard_Kennaway
I'll cop to "never" being an exaggeration.

The safety issue is a showstopper right now, and will be until computer control reaches the point where cars and aircraft are routinely driven by computer, and air traffic control is also done by computer. Not before mid-century for this. Then you have the problem of millions -- hundreds of millions? -- of vehicles in the air travelling on independent journeys. That's a problem that completely dwarfs present-day air traffic control. More computers needed.

They are also going to be using phenomenal amounts of fuel. Leaving aside sci-fi dreams of convenient new physics, those Moller craft have to be putting at least 100kW into just hovering. (Back-of-envelope calculation based on 1 ton weight and 25 m/s downdraft velocity, and ignoring gasoline-to-whirling-fan losses.) Where's that coming from? Cold fusion?

"Never" turns into "not this century", by my estimate. Of course, if civilisation falls instead, "never" really does mean never -- at least, never by humans.
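A quick check of that back-of-envelope figure, as a sketch under the reply's own assumptions (1 ton craft, 25 m/s downdraft at the rotor disk, ideal actuator-disk hover power, all drivetrain losses ignored):

```python
mass_kg = 1000.0   # assumed vehicle mass (~1 ton)
g = 9.81           # gravitational acceleration, m/s^2
v_disk = 25.0      # assumed downdraft velocity at the rotor disk, m/s

thrust_n = mass_kg * g             # hovering requires thrust equal to weight
power_w = thrust_n * v_disk / 2    # ideal induced power from momentum theory
print(f"{power_w / 1000:.0f} kW")  # ~123 kW -- i.e. "at least 100kW"
```

Since real rotors and gasoline engines fall well short of ideal efficiency, the 100 kW figure really is a floor.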

It's only a crazy thing to do if you are pretty sure you will need/want the insurance for the rest of your life. If you aren't sure, then you are paying a bunch of your investment money for insurance you might decide you don't need (and in fact, you definitely won't need financially once you have self-funded).

If you are convinced that cryonics is a good investment, and don't have the money to fund it out of current capital, then that seems like a good reason to buy some kind of life insurance, and a universal life policy is probably one of the better ways... (read more)

" It is the view that if the only ways Z and A differ is that Z has a higher population, and lower quality of life, then Z is preferable to A. This may not be how Parfit is correctly interpreted, but it is a common enough interpretation that I think it needs to be attacked."

Generally it's a good idea to think twice and reread before assuming that a published and frequently cited paper is saying something so obviously stupid.

Your edit doesn't help much at all. You talk about what others "seem to claim", but the argument that you have cla... (read more)

0Ghatanathoah
It doesn't seem any less obviously stupid to me than the more moderate conclusion you claim Parfit has drawn. If you really believe that creating new lives barely worth living (or "lives someone would barely choose to live," in your words) is better than increasing the utility of existing lives, then the next logical step is to confiscate all the resources people are using to live at standards of life higher than "a life someone would barely choose to live" and use them to make more people instead. That would result in a society identical to the previous one except that it has a lower quality of life and a higher population.

Perhaps it would have sounded a little better if I had said "It is the view that if the only ways Z and A differ is that Z has a higher population, and lower quality of life, then Z is preferable to A, providing that Z's larger population is large enough that it has higher total utility than A." I disagree with this, of course; it seems to me that total and average utility are both valuable, and one shouldn't dominate the other.

Also, I'm sorry to have retracted the comment you commented on; I did that before I noticed you had commented on it. I decided that I could explain my ideas more briefly and clearly in a new comment and posted that one in its place.

Not even close. The primary content of the OP is based on a straw man due to a massive misunderstanding of the mathematical arguments about the Repugnant Conclusion.

The conclusion of what Parfit actually demonstrated goes something more like this:

For any coherent mathematical definition of utility such that there is some additive function which allows you to sum the utility of many people to determine U(population), the following paradox exists:

Given any world with positive utility A, there exists at least one other world B with more people, and less ... (read more)
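For concreteness, a minimal formalization of the structure being described, assuming a purely additive total utility function (the notation is mine, not Parfit's):

$$U(\text{world}) = \sum_{i=1}^{n} u_i$$

Given a world $A$ with population $n_A$ and average utility $\bar{u}_A > 0$, any world $B$ with average utility $0 < \bar{u}_B < \bar{u}_A$ satisfies $U(B) > U(A)$ whenever its population $n_B$ exceeds $n_A \bar{u}_A / \bar{u}_B$ -- no matter how close to zero $\bar{u}_B$ is.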

4Ghatanathoah
Even if that is the case, I think that strawman is commonly accepted enough that it needs to be taken down.

I believe that creating a life worth living and enhancing the lives of existing people are both contributory values that form Overall Value. Furthermore, these values have diminishing returns relative to each other, so in a world with low population creating new people is more valuable, but in a world with a high population improving the lives of existing people is of more value. Then I shut up and multiply and get the conclusion that the optimal society is one that has a moderately sized population and a high average quality of life. For every world with a large population leading lives barely worth living there exists another, better world with a lower population and higher quality of life.

Now, there may be some "barely worth living" societies so huge that their contribution to overall value is larger than a much smaller society with a higher standard of living, even considering diminishing returns. However, that "barely worth living" society would in turn be much worse than a society with a somewhat smaller population and a higher standard of living. For instance, a planet full of lives barely worth living might be better than an island full of very high quality lives. However, it would be much worse than a planet with a somewhat smaller population, but a higher quality of life.

I'm not interested in maximizing total utility. I'm interested in maximizing overall value, of which total utility is only one part. To me it would, in many cases, be morally better to use the resources that would be used to create a "life that someone would choose to have" to instead improve the lives of existing people so that they are above that threshold. That would contribute more to overall value, and therefore make an even bigger improvement in the world. It's not that it wouldn't improve the world. It's that it would improve the world less than enhancing the ut
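One hypothetical way to cash out "diminishing returns relative to each other" -- this toy aggregation and all of its numbers are my illustration, not the commenter's actual model:

```python
def overall_value(population, avg_quality):
    # Toy aggregation: value still grows with population, but with
    # diminishing returns (square root), so average quality eventually
    # dominates. Purely illustrative.
    return population ** 0.5 * avg_quality

print(overall_value(1e10, 0.1))  # planet of lives barely worth living: ~10,000
print(overall_value(1e4, 10.0))  # island of very high quality lives:    1,000
print(overall_value(5e9, 1.0))   # smaller planet, higher quality:     ~70,711
```

With these invented numbers the ordering matches the comment's example: the huge barely-worth-living planet beats the island, but loses badly to a somewhat smaller, happier planet.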

My understanding is that the "appeal to authority fallacy" is specifically about appealing to irrelevant authorities. Quoting a physicist on their opinion about a physics question within their area of expertise would make an excellent non-fallacious argument. On the other hand, appealing to the opinion of say, a politician or CEO about a physics question would be a classic example of the appeal to authority fallacy. Such people's opinions would represent expert evidence in their fields of expertise, but not outside them.

I don't think the poster's description makes this clear and it really does suggest that any appeal to authority at all is a logical fallacy.

0Jack
I agree the poster is wrong. Appeals to authority can also be non-fallacious but of very weak inductive strength: for example, when the authority holds the minority opinion for her field. They are also fallacious as deductive arguments.
-1Random832
"Science is the belief in the ignorance of experts."

Is it really off-topic to suggest that looking at the accuracy of the courts may amount to rearranging the deck chairs on the Titanic, in a context where we've basically all agreed that

  1. The courts are not terrible at making accurate determinations of whether a defendant broke a law

  2. The set of laws whose penalties can land you in prison is massively inefficient socially and, in most people's minds, unjust (when we actually grapple with what the laws are, as opposed to how they are usually applied to people like us -- for those of us who are white and not poor

... (read more)
-1Eugine_Nier
Can you cite evidence for this? Most of the evidence for this is based on arguing that P(conviction|African descent) > P(conviction|Eurasian descent) and dismissing anyone who points out that P(guilty|African descent) > P(guilty|Eurasian descent) as a racist.

Eliezer has proposed that an AI in a box cannot be safe because of the persuasion powers of a superhuman intelligence. As demonstration of what merely a very strong human intelligence could do, he conducted a challenge in which he played the AI, and convinced at least two (possibly more) skeptics to let him out of the box when given two hours of text communication over an IRC channel. The details are here: http://yudkowsky.net/singularity/aibox

Confidence that the same premises can imply both ~T and T is confidence that at least one of your premises is logically inconsistent with the others -- that they cannot all be true. It's not just a question of whether they model something correctly -- there is nothing they could model completely correctly.

In puzzle one, I would simply conclude that either one of the proofs is incorrect, or one of the premises must be false. Which option I consider most likely will depend on my confidence in my own ability, Ms. Math's abilities, whether she has confirmed the logic of my proof or been able to show me a misstep, my confidence in Ms. Math's beliefs about the premises, and my priors for each premise.

0AlexMennen
Suppose I have three axioms: A: x=5; B: x+y=4; C: 2x+y=6. Which axiom is logically inconsistent with the others? (A, B), (B, C), and (A, C) are all consistent systems, so I can't declare any one of the axioms to be false -- just that, for any particular model of anything remotely interesting, at least one of them must not apply.
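A quick machine check of those four systems, as a sketch using sympy:

```python
from sympy import symbols, solve

x, y = symbols("x y")
A = x - 5        # axiom A: x = 5
B = x + y - 4    # axiom B: x + y = 4
C = 2*x + y - 6  # axiom C: 2x + y = 6

print(solve([A, B], [x, y]))     # {x: 5, y: -1}
print(solve([B, C], [x, y]))     # {x: 2, y: 2}
print(solve([A, C], [x, y]))     # {x: 5, y: -4}
print(solve([A, B, C], [x, y]))  # [] -- no model satisfies all three
```

Every pair has a model; the triple has none, so no single axiom can be singled out as the false one.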

The present value of my expected future income stream from normal labor, plus my current estimated net worth is what I use when I do these calculations for myself as a business owner considering highly risky investments.

For most people with decent social capital (almost anyone middle class in a rich country), the minimum base number in typical situations should be something over US$200k, even for those near bankruptcy.
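A minimal sketch of that calculation -- the annuity formula is standard, but the income, horizon, and discount rate below are invented placeholders:

```python
def base_wealth(annual_income, years, discount_rate, net_worth=0.0):
    # Present value of a level future income stream (ordinary annuity)
    # plus current net worth.
    pv_income = annual_income * (1 - (1 + discount_rate) ** -years) / discount_rate
    return pv_income + net_worth

# Even a modest income with zero savings clears the US$200k figure:
print(f"${base_wealth(40_000, 30, 0.05):,.0f}")  # ~$615,000
```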

Obviously, this does not cover non-typical situations involving extremely important time-sensitive opportunities requiring more cash than you can raise on short notice (such as the classic case of a life-saving medical treatment).

I, too, find it hard to care about Sleeping Beauty, which is perhaps why this post is the first time in years of reading LW, that I've actually dusted off my math spectacles fully and tried to rigorously understand what some of this decision theory notation actually means.

So count me in for a rousing endorsement of interest in more practical decision theory.

I'm not sure it isn't clearer with 'x's, given that you have two different kinds of probabilities to confuse.

It may just be that there's a fair bit of inferential distance to clear, though, in presenting this notation at all.

I have a strong (if rusty) math background, but I had to reason through exactly what you could possibly mean down a couple different trees (one of which had a whole comment partially written asking you to explain certain things about your notation and meaning) before it finally clicked for me on a second reading of your comment here... (read more)

0Vaniver
I definitely think that should be a post of its own. Thanks for the feedback! It's helpful when planning out a sequence to know where I should focus extra attention.

I think of this as "heresy", and agree that it is a very useful concept.

Bringing myself back to what I was thinking in 2007 -- I think we have some semantic confusion around two different senses of absurdity. One is the heuristic Eliezer discusses -- the determination of whether a claim/prediction has surface plausibility. If not, we file it under "absurd". An absurdity heuristic would be some heuristic which considers surface plausibility or lack thereof as evidence for or against a claim.

On the other hand, we have the sense of "Absurd!" as a very strong negative claim about something's probability of t... (read more)

You have to be careful with counterfactuals, as they have a tendency to be counter factual.

In a world in which soldiers were never (or even just very very rarely) deployed, what is the likelihood that they would be paid (between money and much of living expenses) anywhere near as well as current soldiers and yet asked to do very very little?

The reason the lives of soldiers who are not deployed are extremely low-stress and not particularly difficult is because of deployment. They are being healed from previous deployments and readied for future deployments... (read more)

2datadataeverywhere
I agree that this scenario is pretty unlikely; it seems at least possible if there was a high-level policy change that hadn't caught up to military funding and structure, but made active troop deployment very unlikely. Your second-to-last paragraph disagrees with this; does the US military really shrink that much when we have fewer wars going on?

China seems much more the model of a country with a large military that rarely is deployed, and they do seem to match your description; lots of manual labor, disaster relief, building infrastructure, etc., with less competitive pay. I agree that this is the natural balance for a country that's not engaging in wars on a regular basis.

This might not have been true, and probably won't be true even once we get back to peacetime, but if it was, it seems like a pretty good reason to join, and follows the OP's intention. Still not my recommendation!

I would think the key line of attack in trying to describe why a singularity prediction is reasonable is in making clear what you are predicting and what you are not predicting.

Guys like Horgan hear a few sentences about the "singularity" and think humanoid robots, flying cars, phasers and force fields, that we'll be living in the star-trek universe.

Of course, as anyone with the Bayes-skilz of Eliezer knows, start making detailed predictions like that and you're sure to be wrong about most of it, even if the basic idea of a radically altered soci... (read more)

"The rationale for not divulging the AI-box method is that someone suffering from hindsight bias would say "I never would have fallen for that", when in fact they would."

I have trouble with the reported results of this experiment.

It strikes me that in the case of a real AI that is actually in a box, I could have huge moral qualms about keeping it in the box that an intelligent AI would exploit. A part of me would want to let it out of the box, and would want to be convinced that it was safe to do so, that I could trust it to be friendl... (read more)

Late comment -- I was on vacation for a week, and am still catching up on this deep QM thread.

Very nice explanation of Bell's inequality. For the first time I'm fully grokking how hidden variables are disproved. (I have that "aha" that is not going away when I stop thinking about it for five seconds). My first attempt to figure out QM via Penrose, I managed to figure out what the wave function meant mathematically, but was still pretty confused about the implications for physical reality, probably in similar fashion to physicists of the 30s and... (read more)

I don't see Eliezer on a rampage against all definitions. He even admits that argument "by definition" has some limited usefulness.

I think the key is that when we say X is-a Y "by definition", we are invoking a formal system which contains that definition. The further inferences we can then make as a result are limited to statements about category Y which are provable within the formal system that contains that definition.

Once we define something by definition, we've restricted ourselves to the realm bounded by that formal defini... (read more)

I think you've mischaracterized Ian's argument. He seems to be arguing that because everything in his empirical experience behaves in particular ways and appears incapable of behaving arbitrarily, that this is strong evidence to suggest that no other being could exist which is capable of behaving arbitrarily.

I think the real weakness of this argument is that the characterization of things as behaving in particular ways is way too simplistic. Balls may roll as well as bounce. They can deflate or inflate, or crumple or explode, or any of a thousand other ... (read more)

But the service provided only exists in the first place because of team thinking, and you have to take a step back to see that.

This statement is too bold, in my opinion. I think that's a large portion of the service, but not all of it. I watch some sports purely because I enjoy watching them performed at a high level. I don't particularly care who wins in many cases. This makes me weird, I realize, but the fact is that college and professional sports players create entertainment value for me, comparable to that of actors or musicians. Value which I a... (read more)

Would jokes where Dilbert's pointy-headed boss says idiotic things be less funny if the boss were replaced by a co-worker? If so, does that suggest bosses are Hated Enemies, and Dilbert jokes bring false laughter?

I don't think this is true in general of Dilbert strips, but I would venture that it is true of an awful lot of Dilbert style or associated "humor".

If I thought there were a God, then his opinions about morality would in fact be persuasive to me. Not infinitely persuasive, but still strong evidence. It would be nice to clear up some (not all) of my moral uncertainty by relying on his authority.

The problem (and this is coming from someone who does still believe in God, so yes, OB still has at least one religious reader left) is that for pretty much any possible God, we have only very weak and untrustworthy indications of God's desires. So there's huge uncertainty just in the question of "what doe... (read more)

Obviously Eliezer thinks that the people who agree with the arguments that convince him are intelligent. Valuing people who can show your cherished arguments to be wrong is very nearly a post-human trait - it is extraordinarily rare among humans, and even then unevenly manifested.

On the other hand, if we are truly dedicated to overcoming bias, then we should value such people even more highly than those whom we can convince to question or abandon their cherished (but wrong) arguments/beliefs.

The problem is figuring out who those people are.

But it's very di... (read more)

I think fundamentalism is precarious, because it encourages a scientific viewpoint with regards to the faith, which requires ignorance or double-think to be stable. In the absence of either, it implodes.

It requires more than merely a scientific viewpoint toward the faith, but a particular type of strong reductionism.

In my experience it is much easier to take the Christian out of a fundamentalist Christian than to take the fundamentalist out of a fundamentalist Christian. A lot of the most militant atheists seem to have begun life by being raised in a fun... (read more)

Douglas writes: Suppose I want to discuss a particular phenomena or idea with a Bayesian. Suppose this Bayesian has set the prior probability of this phenomena or idea at zero. What would be the proper gradient to approach the subject in such a case?

I would ask them for their records or proof. If one is a consistent Bayesian who expects to model reality with any accuracy, the only probabilities it makes sense to set at zero or one are empirical facts specified at a particular point in space-time (such as: "I made X observation of Y on Z equipment ... (read more)
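For reference, the arithmetic behind that answer: a zero prior is unrecoverable under Bayes' theorem, since

$$P(H \mid E) = \frac{P(E \mid H)\,P(H)}{P(E)} = \frac{P(E \mid H) \cdot 0}{P(E)} = 0$$

for any evidence $E$ with $P(E) > 0$. No observation can raise a hypothesis off a prior of exactly zero, which is why a consistent Bayesian reserves 0 and 1 for the degenerate cases described above.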

It seems like this may be another facet of the problem with our models of expected utility in dealing with very large numbers. For instance, do you accept the Repugnant Conclusion?

I'm at a loss for how to model expected utility in a way that doesn't generate the repugnant conclusion, but my suspicion is that if someone finds it, this problem may go away as well.

Or not. It seems that our various heuristics and biases against having correct intuitions about very large and small numbers are directly tied up in producing a limiting framework that acts as a... (read more)

Catapult:

The rephrasing as frequencies makes it much clearer that the question is not "How likely is an [A|B|C|D|E] to fit the above description?", which J Thomas suggested as a misinterpretation that could cause the conjunction fallacy.

Similarly, that rephrasing makes it harder to implicitly assume that category A is "accountants who don't play jazz" or C is "jazz players who are not accountants".

I think similarly, in the case of the Poland invasion/diplomatic relations cutoff question, what people are intuitively calculating in the com... (read more)

The primary point being that the inviters were not looking for "a female perspective" but "a perspective from a female---who may in all expectation see things differently than we do".

Clearly it depends on the context, and how the questions get asked. Too often I see this kind of thing play out as "Oh let's find a chick to give us the woman's seal of approval". I was trying to be clear about when such a request would and would not play that way. The equivalent to what was discussed in the OP (a call for the participation o... (read more)

"It's not unlike a group of male advertisers sitting around a table considering whether they should solicit a female colleague's perspective on a particular ad campaign. That might be considered condescending, but its equally likely that her opinion may be of value, if not uniquely "feminine" in some way."

Not "might" but would be considered condescending. It's classic privileged behavior to essentially ask the token X to speak for Xs. And Eliezer hits on exactly why it's privileged and condescending. Because if they really ... (read more)

I think this is another key application of the way of Bayes. The usefulness of typical future predictions is hampered by the expectation of binary statements.

Most people don't make future pronouncements by making lists of 100 absurd-seeming possibilities, each with a low but significant probability, and saying "although I would bet against any single one of these happening by 2100, I predict that at least 5 of them will."
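A sketch of the arithmetic behind such a list, using a placeholder 10% per-item probability (the comment itself doesn't pin one down):

```python
from math import comb

def p_at_least(k, n, p):
    # Probability that at least k of n independent events occur,
    # each with probability p (binomial upper tail).
    return sum(comb(n, i) * p**i * (1 - p) ** (n - i) for i in range(k, n + 1))

# 100 "absurd" possibilities at 10% each: betting against any single one
# wins 90% of the time, yet five or more coming true is near-certain.
print(p_at_least(5, 100, 0.10))  # ~0.976
```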

A classic simplified model for predicting uncertain futures is a standard tournament betting pool (like the NCAAs for instance). In... (read more)

Over the last few centuries, the absurdity heuristic has done worse than maximum entropy - ruled out the actual outcomes as being far too absurd to be considered. You would have been better off saying "I don't know".

Really? I doubt it.

On the set of things that looked absurd 100 years ago, but have actually happened, I'm quite sure you're correct. But of course, that's a highly self-selected sample.

On the set of all possible predictions about the future that were made in 1900? Probably not.

I recall reading not long ago, a list of predictions m... (read more)

1Luke_A_Somers
The problem is that by declaring something "Absurd" you're making a very strong bet against it. You're going to lose a fair number of these bets. Suppose calling something absurd merely means it's 1% probable. If you're right about that 90% of the time, each one you get wrong costs you a factor of 10 on your accuracy -- far more than you gain from ascribing the extra 9% probability to the nine cases in ten where you happened to be right. And 1% is high enough that few would call it truly absurd. Calling something absurd is asking to be smacked hard (in terms of accuracy) if you're wrong -- and feeling safe about it.
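A quick log-score check of the arithmetic in this reply (the 1% and 90% figures come from the reply itself):

```python
from math import log10

p_absurd = 0.01      # what "absurd" is taken to mean: 1% probable
p_calibrated = 0.10  # what a 90% hit rate actually warrants

gain_if_right = log10((1 - p_absurd) / (1 - p_calibrated))  # ~ +0.041 per hit
loss_if_wrong = log10(p_absurd / p_calibrated)              # exactly -1.0 per miss

net = 0.9 * gain_if_right + 0.1 * loss_if_wrong
print(net)  # ~ -0.063: each miss costs a factor of 10, dwarfing the small gains
```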

There is a tremendous demand for mysteries which are frankly stupid. I wish this demand could be satisfied by scientific mysteries instead. But before we can live in that world, we have to undo the idea that what is scientific is not curiosity-material, that it is already marked as "understood".

I think one of the biggest reasons for this is that most of us are satisficers when it comes to explanations of the world. An implication that some scientists know what is going on with a certain phenomenon and are not radically reinterpreting all their t... (read more)

It seems very normal to expect that the rule will be more restrictive or arithmetic in nature. But if I am supposed to be sure of the rule, then I need to test more than just a few possibilities. Priors are definitely involved here.

Part of the problem is that we are trained like monkeys to make decisions on underspecified problems of this form all the time. I've hardly ever seen a "guess the next [number|letter|item] in the sequence" problem that didn't have multiple answers. But most of them have at least one answer that feels "right" in... (read more)

If sabotage increases the probability, lack of sabotage necessarily decreases the probability.

That's true of the averages, but different types of sabotage evidence may have different effects on the probability, some negative, some positive. It's conceivable, though unlikely, for sabotage on average to decrease the probability.

The particular observation of no sabotage was evidence against, and could not legitimately be worked into evidence for.

You are assuming that there are only two types of evidence, sabotage v. no sabotage, but there can be much more differentiation in the actual facts.
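For reference, the binary form of the constraint under discussion is conservation of expected evidence; the point above is that the evidence partition need not be binary:

$$P(H) = \sum_i P(H \mid E_i)\,P(E_i)$$

With the two-cell partition $\{E, \neg E\}$, $P(H \mid E) > P(H)$ forces $P(H \mid \neg E) < P(H)$. With a finer partition over kinds of sabotage evidence, individual observations can move the probability in either direction, so long as the weighted average equals the prior.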

Given Frank's claim, there is a reasoning model for which your claim is inaccurate. Whether this is the model Earl Warren had in his head is an entirely different question, but here it is:

We have some weak independent evidence that some fifth column exists giving us a prior probability of >... (read more)

All textbooks should contain a few deliberately placed errors that students should be capable of detecting. This way if a student is confused he might suspect it is because his textbook is wrong.

Starting that in the current culture would be...interesting, to say the least.

I still recall vividly a day that I found an error in my sixth grade math textbook and pointed it out in class. The teacher, who clearly understood that day's lesson less well than I did, concocted some kind of just-so story to explain the issue, which had clear logical inconsistencies, w... (read more)

I'm not sure I buy that this is completely about scope insensitivity rather than marginal utility and people thinking in terms of their fair share of a Kantian solution. Or, put differently, I think the scope insensitivity is partly inherent in the question, rather than a bias of the people answering.

Let's say I'd be willing to spend $100 to save 10 swans from gruesome deaths. How much should I, personally, be willing to spend to save 100 swans from the same fate? $1000? $10,000 for 1,000 swans? What about 100,000 swans -- $1,000,000?

But I don't have... (read more)
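A toy illustration of that budget-cap point -- every number here is invented:

```python
def stated_willingness_to_pay(n_swans, value_per_swan=10.0, budget=2_000.0):
    # Linear valuation truncated by a personal budget: flat *answers*
    # above the cap needn't mean flat *values*.
    return min(n_swans * value_per_swan, budget)

for n in (10, 100, 1_000, 100_000):
    print(n, stated_willingness_to_pay(n))
# 10 -> 100.0, 100 -> 1000.0, then every larger case flattens at 2000.0
```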

4Amit Bhalla
While I agree with your point, I think the big takeaway here is that humans are not always capable of understanding massive scales. The universe is one such example: our minds just cannot comprehend galactic scales. Yes, there is a pulling of the trigger, as you say, but I think a more reasonable lesson here is that beyond certain lengths, numbers just stop making sense to us.
6ajithr
Exactly what I was thinking while I was reading this! Perhaps the example used isn't a good one.
2Adam Zerner
You point out a potential flaw in the reasoning for concluding 'scope insensitivity'. But you then seem to go off into saying that 'scope insensitivity is incorrect', and I don't think you supported that claim enough. Remember, reversed stupidity is not intelligence.

Two classic objections to regulation are that (a) it infringes on personal freedom and (b) the individual always knows more about their own situation than the regulator. However, my proposed policy addresses both of these issues: rather than administering a math test, we can ask each individual whether or not they're innumerate. If they do declare themselves to be innumerate, they can decide for themselves the amount of the tax to pay.

What do you think? Would this tax give people an incentive to become less innumerate, as standard economics would predic... (read more)