Dark Arts 101: Winning via destruction and dualism

-13 PhilGoetz 21 September 2013 01:53AM

Recall first that life is a zero-sum game. It is then immediately obvious that the quickest and easiest path to success is not to accomplish things yourself -- that's a game for heroes and other suckers -- but to tear down the accomplishments and reputations of others. Destruction is easy. The difficulty lies in constructing the situation so that the destruction is to your net benefit.

continue reading »

Terminology point: rationality vs. rationalism

0 beoShaffer 08 September 2013 04:42AM

 Rationalism should not be confused with rationality, nor with rationalization.

-Wikipedia article on rationalism 

I frequently see people using "rationalism" in place of "rationality". Usually other commenters understand them; however, I believe that using the word "rationality" is superior. The Less Wrong tagline is "A community blog devoted to refining the art of human rationality". "Rationalism", on the other hand, is the philosophical term for a very different epistemological position. Furthermore, the -ism suffix has some undesirable connotations.

The Rhythm of Disagreement

12 Eliezer_Yudkowsky 01 June 2008 08:18PM

Followup to: A Premature Word on AI, The Modesty Argument

Once, during the year I was working with Marcello, I passed by a math book he was reading, left open on the table.  One formula caught my eye (why?); and I thought for a moment and said, "This... doesn't look like it can be right..."

Then we had to prove it couldn't be right.

Why prove it?  It looked wrong; why take the time for proof?

Because it was in a math book.  By presumption, when someone publishes a book, they run it past some editors and double-check their own work; then all the readers get a chance to check it, too.  There might have been something we missed.

But in this case, there wasn't.  It was a misprinted standard formula, off by one.

I once found an error in Judea Pearl's Causality - not just a misprint, but an actual error invalidating a conclusion in the text.  I double- and triple-checked, as best I was able, and then sent an email to Pearl describing what I thought the error was, and what I thought was the correct answer.  Pearl confirmed the error, but he said my answer wasn't right either, for reasons I didn't understand, and which I would have had to go back and do some rereading and analysis to follow.  I had other stuff to do at the time, unfortunately, and couldn't expend the energy.  And by the time Pearl posted an expanded explanation to the website, I'd forgotten the original details of the problem...  Okay, so my improved answer was wrong.

Why take Pearl's word for it?  He'd gotten the original problem wrong, and I'd caught him on it - why trust his second thought over mine?

continue reading »

The Universal Medical Journal Article Error

6 PhilGoetz 29 April 2014 05:57PM

(Oops. I forgot this was moved to Discussion.)

TL;DR:  When people read a journal article that concludes, "We have proved that it is not the case that for every X, P(X)", they generally credit the article with having provided at least weak evidence in favor of the proposition ∀x !P(x).  This is not necessarily so.

 

Authors using statistical tests are making precise claims, which must be quantified correctly.  Pretending that all quantifiers are universal because we are speaking English is one error.  It is not, as many commenters are claiming, a small error: ∀x !P(x) is very different from !∀x P(x).
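To make the scope distinction concrete, here is a minimal sketch in Python; the predicate and the drug names are invented purely for illustration:

    # Quantifier scope demo: !(for all x, P(x)) vs. (for all x, !P(x)).
    # P(x) is a made-up predicate: "treatment x works".
    def P(x):
        return x in {"drug_a", "drug_b"}  # hypothetical: two of three work

    domain = ["drug_a", "drug_b", "drug_c"]

    not_forall_P = not all(P(x) for x in domain)  # !∀x P(x)
    forall_not_P = all(not P(x) for x in domain)  # ∀x !P(x)

    print(not_forall_P)  # True  -- the conclusion the article actually proved
    print(forall_not_P)  # False -- the conclusion readers walk away with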

 

A more subtle problem is that when an article uses an F-test on a hypothesis, it is possible (and common) to fail the F-test for P(x) with data that actually supports the hypothesis P(x).  The 95% confidence level was chosen for the F-test in order to treat false positives as much more expensive than false negatives.  Applying it therefore removes us from the world of Bayesian logic.  You cannot interpret the failure of an F-test for P(x) as even weak evidence for not-P(x).
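A quick simulation shows this failure mode; this is a sketch, not from the post, and the effect size, sample sizes, and seed are assumptions chosen for illustration:

    # Underpowered F-test demo: a real effect the test usually fails to detect.
    import numpy as np
    from scipy.stats import f_oneway

    rng = np.random.default_rng(0)
    trials, rejections = 1000, 0
    for _ in range(trials):
        a = rng.normal(0.0, 1.0, size=10)  # control group
        b = rng.normal(0.3, 1.0, size=10)  # treated group: true 0.3 SD effect
        _, p = f_oneway(a, b)              # one-way ANOVA F-test
        rejections += p < 0.05

    # With n=10 per group, power is only roughly 10%, so the test usually
    # "fails" even though P(x) is true: the failure mostly reflects sample
    # size, not the falsity of the hypothesis.
    print(rejections / trials)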

continue reading »

Boltzmann Brains and Anthropic Reference Classes (Updated)

-4 pragmatist 04 June 2012 04:04AM

Summary: There are claims that Boltzmann brains pose a significant problem for contemporary cosmology. But this problem relies on assuming that Boltzmann brains would be part of the appropriate reference class for anthropic reasoning. Is there a good reason to accept this assumption?

Nick Bostrom's Self-Sampling Assumption (SSA) says that when accounting for indexical information, one should reason as if one were a random sample from the set of all observers in one's reference class. As an example of the scientific usefulness of anthropic reasoning, Bostrom shows how the SSA rules out a particular cosmological model suggested by Boltzmann. Boltzmann was trying to construct a model that is symmetric under time reversal but still accounts for the pervasive temporal asymmetry we observe. The idea is that the universe is eternal and, at most times and places, at thermodynamic equilibrium. Occasionally, there will be chance fluctuations away from equilibrium, creating pockets of low entropy. Life can only develop in these low-entropy pockets, so it is no surprise that we find ourselves in such a region, even though it is atypical.

The objection to this model is that smaller fluctuations from equilibrium will be more common. In particular, fluctuations that produce disembodied brains floating in a high entropy soup with the exact brain state I am in right now (called Boltzmann brains) would be vastly more common than fluctuations that actually produce me and the world around me. If we reason according to SSA, the Boltzmann model predicts I am one of those brains and all my experiences are spurious. Conditionalizing on the model, the probability that my experiences are not spurious is minute. But my experiences are in fact not spurious (or at least, I must operate under the assumption that they are not if I am to meaningfully engage in scientific inquiry). So the Boltzmann model is heavily disconfirmed. [EDIT: As AlexSchell points out, this is not actually Bostrom's argument. The argument has been made by others. Here, for example.]

Now, no one (not even Boltzmann) actually believed the Boltzmann model, so this might seem like an unproblematic result. Unfortunately, it turns out that our current best cosmological models also predict a preponderance of Boltzmann brains. They predict that the universe is evolving towards an eternally expanding cold de Sitter phase. Once the universe is in this phase, thermal fluctuations of quantum fields will lead to an infinity of Boltzmann brains. So if the argument against the original Boltzmann model is correct, these cosmological models should also be rejected. Some people have drawn this conclusion. For instance, Don Page considers the anthropic argument strong evidence against the claim that the universe will last forever. This seems like the SSA's version of Bostrom's Presumptuous Philosopher objection to the Self Indication Assumption, except here we have a presumptuous physicist. If your intuitions in the Presumptuous Philosopher case lead you to reject SIA, then perhaps the right move in this case is to reject SSA.

But maybe SSA can be salvaged. The rule specifies that one need only consider observers in one's reference class. If Boltzmann brains can be legitimately excluded from the reference class, then the SSA does not threaten cosmology. But Bostrom claims that the reference class must at least contain all observers whose phenomenal state is subjectively indistinguishable from mine. If that's the case, then all Boltzmann brains in brain states sufficiently similar to mine such that there is no phenomenal distinction must be in my reference class, and there's going to be a lot of them.

Why accept this subjective indistinguishability criterion though? I think the intuition behind it is that if two observers are subjectively indistinguishable (it feels the same to be either one), then they are evidentially indistinguishable, i.e. the evidence available to them is the same. If A and B are in the exact same brain state, then, according to this claim, A has no evidence that she is in fact A and not B. And in this case, it is illegitimate for her to exclude B from her anthropic reference class. For all she knows, she might be B!

But the move from subjective indistinguishability to evidential indistinguishability seems to ignore an important point: meanings ain't just in the head. Even if two brains are in the exact same physical state, the contents of their representational states (beliefs, for example) can differ. The contents of these states depend not just on the brain state but also on the brain's environment and causal history. For instance, I have beliefs about Barack Obama. A spontaneously congealed Boltzmann brain in an identical brain state could not have those beliefs. There is no appropriate causal connection between Obama and that brain, so how could its beliefs be about him? And if we have different beliefs, then I can know things the brain doesn't know. Which means I can have evidence the brain doesn't have. Subjective indistinguishability does not entail evidential indistinguishability.

So at least this argument for including all subjectively indistinguishable observers in one's reference class fails. Is there another good reason for this constraint I haven't considered?

Update: There seems to be a common misconception arising in the comments, so I thought I'd address it up here. A number of commenters are equating the Boltzmann brain problem with radical skepticism. The claim is that the problem shows that we can't really know we are not Boltzmann brains. Now this might be a problem some people are interested in. It is not one that I am interested in, nor is it the problem that exercises cosmologists. The Boltzmann brain hypothesis is not just a physically plausible variant of the Matrix hypothesis.

The purported problem for cosmology is that certain cosmological models, in conjunction with the SSA, predict that I am a Boltzmann brain. This is not a problem because it shows that I am in fact a Boltzmann brain. It is a problem because it is an apparent disconfirmation of the cosmological model. I am not actually a Boltzmann brain, I assure you. So if a model says that it is highly probable I am one, then the observation that I am not stands as strong evidence against the model. This argument explicitly relies on the rejection of radical skepticism.

Are we justified in rejecting radical skepticism? I think the answer is obviously yes, but if you are in fact a skeptic then I guess this won't sway you. Still, if you are a skeptic, your response to the Boltzmann brain problem shouldn't be, "Aha, here's support for my skepticism!" It should be "Well, all of the physics on which this problem is based comes from experimental evidence that doesn't actually exist! So I have no reason to take the problem seriously. Let me move on to another imaginary post."

Why I'm Skeptical About Unproven Causes (And You Should Be Too)

31 peter_hurford 29 July 2013 09:09AM

Since living in Oxford, one of the centers of the "effective altruism" movement, I've been spending a lot of time discussing the classic “effective altruism” topic -- where it would be best to focus our time and money.

Some people here seem to think that the most important things to focus our time and money on are speculative projects: projects that promise a very high impact but involve a lot of uncertainty.  One very common example is "existential risk reduction", or attempts to make a long-term far future for humans more likely, say by reducing the chance of events that would cause human extinction.

I do agree that the far future is the most important thing to consider, by far (see papers by Nick Bostrom and Nick Beckstead).  And I do think we can influence the far future.  I just don't think we can do it in a reliable way.  All we have are guesses about what the far future will be like and guesses about how we can affect it. All of these ideas are unproven, speculative projects, and I don't think they deserve the main focus of our funding.

While I waffled in cause indecision for a while, I'm now going to resume donating to GiveWell's top charities, except when I have an opportunity to use a donation to learn more about impact.  Why?  My case is that speculative causes, or any causes with high uncertainty (reducing nonhuman animal suffering, reducing existential risk, etc.), require that we rely on our commonsense to evaluate them with naïve cost-effectiveness calculations, and this (1) is demonstrably unreliable, with a bad track record, (2) plays right into common biases, and (3) doesn't make sense given how we ideally make decisions.  While it's unclear what long-term impact a donation to a GiveWell top charity will have, the near-term benefit is quite clear and worth investing in.

 

Focusing on Speculative Causes Requires Unreliable Commonsense

How can we reduce the chance of human extinction? It just makes sense that if we fund cultural exchange programs between the US and China, there will be more goodwill for the other within each country, and therefore the countries will be less likely to nuke each other. Since nuclear war would likely be very bad, it's of high value to fund cultural exchange programs, right?

Let's try another. The Machine Intelligence Research Institute (MIRI) thinks that someday artificially intelligent agents will become better than humans at making AIs. At that point, AI will build a smarter AI, which will build an even smarter AI, and -- FOOM! -- we have a superintelligence. It's important that this superintelligence be programmed to be benevolent, or things will likely be very bad. And we can stop this bad event by funding MIRI to write more papers about AI, right?

Or how about this one? It seems like there will be challenges in the far future that will be very daunting, and if humanity handles them wrong, things will be very bad. But if people were better educated and had more resources, surely they'd be better at handling those problems, whatever they may be. Therefore we should focus on speeding up economic development, right?

These three examples are very common appeals to commonsense.  But commonsense hasn't worked very well in the domain of finding optimal causes.

 

Can You Pick the Winning Social Program?

Benjamin Todd makes this point well in "Social Interventions Gone Wrong", where he provides a quiz with eight social programs and asks readers to guess whether they succeeded or failed.

I'll wait for you to take the quiz first... doo doo doo... la la la...

Ok, welcome back. I don't know how well you did, but success on this quiz is very rare, and this poses problems for commonsense.  Sure, I'll grant you that Scared Straight sounds pretty suspicious. But the Even Start Family Literacy Program? It just makes sense that providing education to boost literacy skills and promote parent-child literacy activities should boost literacy rates, right? Unfortunately, that intuition was wrong, and wrong in a very counter-intuitive way: there wasn't an effect.

 

GiveWell and Commonsense's Track Record of Failure

Commonsense actually has a track record of failure, and GiveWell has been talking about this for ages.  Every time GiveWell has looked further into an intervention hyped by commonsense notions of high impact, they've ended up disappointed.

The first was the Fred Hollows Foundation. A lot of people had been repeating the figure that the Fred Hollows Foundation could cure blindness for $50. But GiveWell found that number suspect.

The second was VillageReach. GiveWell originally put them as their top charity and estimated them as saving a life for under $1000. But further investigation kept leading them to revise their estimate until ultimately they weren't even sure if VillageReach had an impact at all.

Third, there is deworming. Originally, deworming was announced as averting a year of healthy life (a DALY, or disability-adjusted life year) for every $3.41 spent. But when GiveWell dove into the spreadsheets behind that number, they found five errors. When the dust settled, the $3.41 figure turned out to be off by roughly a factor of 100: it was revised to $326.43.

Why should we expect this trend not to hold in other areas, where calculations are even looser and numbers are even less settled, like efforts devoted to speculative causes? Our only recourse is to fall back on interventions that have actually been studied.

 

People Are Notoriously Bad At Predicting the (Far) Future

Cost-effectiveness estimates also frequently require making predictions about the future. Existential risk reduction, for example, requires predicting what will happen in the far future, and how your actions are likely to affect events hundreds of years down the road. Yet experts are notoriously bad at making these kinds of predictions.

James Shanteau found in "Competence in Experts: The Role of Task Characteristics" (see also Kahneman and Klein's "Conditions for Intuitive Expertise: A Failure to Disagree") that experts perform well when thinking about static stimuli, when thinking about things rather than behavior, and when feedback and objective analysis are available. Conversely, experts perform badly when thinking about dynamic stimuli, when thinking about behavior, and when feedback and objective analysis are unavailable.

Predictions about existential risk reduction and the far future are firmly in the second category. So how can we trust our predictions about our impact on the far future? Our only recourse is to fall back on interventions that we can reliably predict, until we get better at prediction (or invest money in getting better at making predictions).

 

Even Broad Effects Require Specific Attempts

One potential resolution to this problem is to argue for “broad effects” rather than “specific attempts”.  Perhaps it’s difficult to know whether a particular intervention will go well, or perhaps it’s mistaken to focus entirely on Friendly AI, but surely if we improved incentives and norms in academic work to better advance human knowledge (meta-research), improved education, or advocated for effective altruism, the far future would be much better equipped to handle threats.

I agree that these broad effects would make the far future better, and I agree that it’s possible to implement them and change the far future.  The problem, however, is that this can’t be done in an easy or well-understood way.  Any attempt to implement a broad effect requires a specific action with an unknown chance of success and unknown cost-effectiveness.  It’s definitely beneficial to advocate for effective altruism, but could this be done in a cost-effective way?  A way that’s more cost-effective at producing welfare than AMF?  How would you know?

In order to accomplish these broad effects, you’d need specific organizations and interventions to channel your time and money into.  And by picking these specific organizations and interventions, you’re losing the advantage of broad effects and tying yourself to particular things with poorly understood impact and no track record to evaluate. 

 

Focusing on Speculative Causes Plays Into Our Biases

We've now known for quite a long time that people are not all that rational. Instead, human thinking fails in very predictable and systematic ways.  Some of these ways make us less likely to take speculative causes seriously, such as ambiguity aversion, the absurdity heuristic, scope neglect, and overconfidence bias.

But there’s also another side of the coin, with biases that might push people toward taking speculative causes like existential risk reduction more seriously than is warranted:

Optimism bias. People generally think things will turn out better than they actually will. This could lead people to think that their projects will have a higher impact than they actually will, which would lead to higher estimates of cost-effectiveness than is reasonable.

Control bias. People like to think they have more control over events than they actually do, and this plausibly extends to control over the far future. People are therefore probably biased toward thinking they can influence the far future more than they actually can, leading to higher estimates of that influence than is reasonable.

"Wow factor" bias. People seem attracted to more impressive claims. Saving a life for $2500 through a malaria bed net seems much more boring compared to the chance of saving the entire world by averting a global catastrophe. Within the Effective Altruist / LessWrong community, existential risk reduction is cool and high status, whereas averting global poverty is not. This might lead to more endorsement of existential risk reduction than is reasonable.

Conjunction fallacy.  People have trouble assessing probability properly when a plan involves many steps, each of which has some chance of failing. Ten steps, each with an independent 90% success rate, have only a 35% chance of all succeeding (see the short computation after this list).  Focusing on the far future seems to require that a lot of largely independent events happen the way they are predicted to, so people likely overestimate their chances of helping the far future, creating higher cost-effectiveness estimates than is reasonable.

Selection bias.  When trying to find trends in history that are favorable for affecting the far future, some examples can be provided.  But this is because we usually hear about the interventions that ended up working, whereas the failed attempts to influence the far future are never heard of again.  This creates a very skewed sample that can bias our thinking about our chances of successfully influencing the far future.
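Here is the short computation promised above for the ten-step example:

    # Compound probability of a ten-step plan, each step independently 90% likely.
    p_success = 0.9 ** 10
    print(round(p_success, 3))  # 0.349 -- about 35%, far below an intuitive "roughly 90%"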

 

It’s concerning that there are numerous biases weighted both for and against speculative causes, and this means we must tread carefully when assessing their merits.  However, I would expect the biases in favor of speculative causes to be the stronger ones, because speculative causes lack the feedback and objective evidence needed to insulate against bias, whereas a focus on global health does not.

 

Focusing on Speculative Causes Uses Bad Decision Theory

Furthermore, not only is the case for speculative causes undermined by a bad track record and possible cognitive biases, but the underlying decision theory seems suspect in a way that's difficult to place.         

 

Would you play a lottery with no stated odds?

Imagine another thought experiment -- you're asked to play a lottery. You have to pay $2 to play, but you have a chance at winning $100. Do you play?

Of course, you don't know, because you're not given odds. Rationally, it makes sense to play any lottery where you expect to come out ahead on average. If the lottery is a coin flip, it makes sense to pay $2 for a 50/50 shot at winning $100, since you'd win $50 on average and come out ahead by $48 in expectation each time you play. With a sufficiently high reward, even a one-in-a-million chance is worth it: pay $2 for a 1/1M chance of winning $1B, and you'd come out ahead by $998 in expectation.

But $2 for the chance to win $100, without knowing what the chance is? What if you had only loose bounds, say that the odds were at least 1/150 and at most 1/10, though you could be off by a little bit? Would you accept that bet?
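To make the arithmetic explicit, here is a minimal expected-value sketch; the helper ev is my own invented name, and the bounds are the hypothetical ones above:

    # Expected value of a $2 ticket with a $100 prize at win-probability p.
    def ev(p, prize=100.0, cost=2.0):
        return p * prize - cost

    print(ev(0.5))              # 48.0: the coin-flip lottery is worth playing
    print(ev(1e-6, prize=1e9))  # 998.0: tiny odds, huge prize, still worth it

    # With odds known only to lie between 1/150 and 1/10, the expected value
    # spans a loss and a gain -- the sign of the bet hinges on the unknown odds.
    print(ev(1 / 150))          # about -1.33: a losing bet at the pessimistic bound
    print(ev(1 / 10))           # 8.0: a winning bet at the optimistic bound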

Such a bet seems intuitively uninviting to me, yet this is the bet that speculative causes offer me.

 

"Conservative Orders of Magnitude" Arguments

In response to these considerations, I've seen people endorsing speculative causes look at their calculations and remark that even if their estimate were off by 1000x, or three orders of magnitude, they still would be on solid ground for high impact, and there's no way they're actually off by three orders of magnitude. However, Nate Silver's The Signal and the Noise: Why So Many Predictions Fail — but Some Don't offers a cautionary tale:

Moody’s, for instance, went through a period of making ad hoc adjustments to its model in which it increased the default probability assigned to AAA-rated securities by 50 percent. That might seem like a very prudent attitude: surely a 50 percent buffer will suffice to account for any slack in one’s assumptions? It might have been fine had the potential for error in their forecasts been linear and arithmetic. But leverage, or investments financed by debt, can make the error in a forecast compound many times over, and introduces the potential of highly geometric and nonlinear mistakes.

Moody’s 50 percent adjustment was like applying sunscreen and claiming it protected you from a nuclear meltdown—wholly inadequate to the scale of the problem. It wasn’t just a possibility that their estimates of default risk could be 50 percent too low: they might just as easily have underestimated it by 500 percent or 5,000 percent. In practice, defaults were two hundred times more likely than the ratings agencies claimed, meaning that their model was off by a mere 20,000 percent.

Silver points out that when estimating how safe mortgage-backed securities were, the difference between assuming defaults are perfectly uncorrelated and assuming they are perfectly correlated is a difference of 160,000x in your risk estimate -- or five orders of magnitude.
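Silver's 160,000x figure can be reproduced with a back-of-the-envelope calculation. The setup below -- five mortgages, each with a 5% default probability, bundled into a security that fails only if all five default -- is my reconstruction of his example, so treat the details as assumptions:

    # Failure risk of the security under two extreme correlation assumptions.
    p_default = 0.05
    n = 5

    p_uncorrelated = p_default ** n  # independent defaults: 0.05^5, ~1 in 3.2 million
    p_correlated = p_default         # perfectly correlated: all fail together, 1 in 20

    print(p_correlated / p_uncorrelated)  # ~160,000 -- five orders of magnitude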

If these kinds of five-orders-of-magnitude errors are possible in a realm that has actual feedback and is moderately understood, how do we know the estimates for cost-effectiveness are safe for speculative causes that are poorly understood and offer no feedback?  Again, our only recourse is to fall back on interventions that we can reliably predict, until we get better at prediction.

 

Value of Information, Exploring, and Exploiting

Of course, there still is one important aspect of this problem that has not been discussed -- value of information -- or the idea that sometimes it’s worth doing something just to learn more about how the world works.  This is important in effective altruism too, where we focus specifically on “giving to learn”, or using our resources to figure out more about the impact of various causes.

I think this is actually really important and is not vulnerable to any of my previous arguments, because we’re not talking about impact, but rather learning value.  Perhaps one could look to an "explore-exploit model", or the idea that we achieve the best outcome when we spend a lot of time exploring first (learning more about how to achieve better outcomes) before exploiting (focusing resources on achieving the best outcome we can).  Therefore, whenever we have an opportunity to “explore” further or learn more about which causes have high impact, we should take it.
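As a toy illustration of that explore-exploit model, here is a minimal epsilon-greedy sketch; the three "causes", their payoffs, and the budget are entirely invented for the example:

    # Epsilon-greedy explore/exploit over three causes with unknown payoffs.
    import random

    true_payoffs = [1.0, 2.5, 0.5]  # hypothetical average impact per dollar
    estimates = [0.0, 0.0, 0.0]
    counts = [0, 0, 0]
    epsilon = 0.1                   # fraction of the budget spent exploring

    random.seed(0)
    for _ in range(1000):
        if random.random() < epsilon:
            i = random.randrange(3)              # explore: fund a random cause
        else:
            i = estimates.index(max(estimates))  # exploit: fund the best so far
        reward = random.gauss(true_payoffs[i], 1.0)
        counts[i] += 1
        estimates[i] += (reward - estimates[i]) / counts[i]  # running mean

    # Exploration lets the estimates converge toward the true payoffs, after
    # which most of the budget flows to the genuinely highest-impact cause.
    print(estimates, counts)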

 

Learning in Practice

Unfortunately, in practice, I think these opportunities are very rare.  Many organizations that I think are “promising” and worth funding further do not have sufficiently good self-measurement in place to actually assess their impact, or sufficient transparency to share that information, which makes it difficult to learn from them.  And on the other side, many very promising opportunities to learn more are already fully funded.  One must be careful to ensure that it’s actually one’s marginal dollar that is buying marginal information.

 

The Typical Donor

Additionally, I don’t think the typical donor is in a very good position to assess where there is high value of information, or has the time and knowledge to act upon that information once it is acquired.  I think there’s a good argument for people in the “effective altruist” movement to make small investments in EA organizations and encourage transparency and good measurement in their operations, to see if they’re successfully doing what they claim (or potentially to create an EA startup themselves to see if it would work, though this carries large risks of further splitting the movement’s resources).

But even that would take a very savvy and involved effective altruist to pull off.  Assessing the value of information on more massive investments, like large-scale research or innovation efforts, would be significantly more difficult -- beyond the talent and resources of nearly all effective altruists -- and is probably best left to full-time foundations or subject-matter experts.

 

GiveWell’s Top Charities Also Have High Value of Information

As Luke Muehlhauser mentions in "Start Under the Streetlight, Then Push Into the Shadows", lots of lessons can be learned only by focusing on the easiest causes first, even if we have strong theoretical reasons to expect that they won’t end up being the highest impact causes once we have more complete knowledge.

We can use global health cost-effectiveness considerations as practice for slowly and carefully moving into more complex and less understood domains.  There are even some very natural transitions, such as beginning to look at the "flow through effects" of reducing disease in the third world, or at how more esoteric things, like climate change, affect the disease burden.  Therefore, even additional funding for GiveWell’s top charities has high value of information.  And notably, GiveWell is beginning this "push" through GiveWell Labs.

 

Conclusion

The bottom line is that when something looks too good to be true, it usually is.  Therefore, I expect that the actual impact of speculative causes that make large promises will, upon thorough investigation, turn out to be much lower.

And this has been true in other domains. People are notoriously bad at estimating the effects of causes in both the developed world and developing world, and those are the causes that are near to us, provide us with feedback, and are easy to predict. Yet, from the Even Start Family Literacy Program to deworming estimates, our commonsense has failed us.

Add to that the fact that we should expect ourselves to perform even worse at predicting the far future. Add to that optimism bias, control bias, "wow factor" bias, and the conjunction fallacy, which make it difficult for us to think realistically about speculative causes. And then add to that considerations in decision theory, and whether we would bet on a lottery with no stated odds.

When all is said and done, I'm very skeptical of speculative projects.  Therefore, I think we should be focused on exploring and exploiting.  We should do whatever we can to fund projects aimed at learning more, when those are available, but be careful to make sure they actually have learning value.  And when exploring isn’t available, we should exploit what opportunities we have and fund proven interventions.

But don’t confuse these two concepts and fund causes intended for learning as if it were for their direct impact value.  I’m skeptical that these causes are actually high impact, though I’m open to the idea that they might be, and I look forward to funding them in the future once they become better proven.

-

Followed up in: "What Would It Take To 'Prove' A Skeptical Cause" and "Where I've Changed My Mind on My Approach to Speculative Causes".

This was also cross-posted to my blog and to effective-altruism.com.

I'd like to thank Nick Beckstead, Joey Savoie, Xio Kikauka, Carl Shulman, Ryan Carey, Tom Ash, Pablo Stafforini, Eliezer Yudkowsky, and Ben Hoskin for providing feedback on this essay, even if some of them might strongly disagree with its conclusion.

Stop Using LessWrong: A Practical Interpretation of the 2012 Survey Results

-37 aceofspades 30 December 2012 10:00PM

Link to those results: http://lesswrong.com/lw/fp5/2012_survey_results/

I've been basically lurking this site for more than a year now and it's incredible that I have actually taken anything at all on this site seriously, let alone that at least thousands of others have. I have never received evidence that I am less likely to be overconfident about things than people in general or that any other particular person on this site is.

Yet in spite of this, apparently 3.7% of the people answering the survey have actually signed up for cryonics, which is surely greater than the percentage of people in the entire world signed up for cryonics. The entire idea seems to be taken especially seriously on this site: evidently 72.9% of people here are at least considering signing up. I think the chance of cryonics working is trivial, for all practical purposes indistinguishable from zero (the expected value of the benefit is certainly not worth several hundred thousand dollars in future value considerations). Other people here apparently disagree, but if the rest of the world is undervaluing cryonics at the moment, then why do those here with privileged information not invest heavily in the formation of new for-profit cryonics organizations, or start them alone, or invest in technology which will soon develop to make the revival of cryonics patients possible? If the rest of the world is underconfident about these ideas, then these investments would surely have an enormous expected rate of return.

There is also a question asking about the relative likelihood of different existential risks, which seems to imply that any of these risks are especially worth considering. This is not really a fault of the survey itself, as I have read significant discussion on this site related to these ideas. In my judgment this reflects a grand level of overconfidence in the probabilities of any of these occurring. How many people responding to this survey have actually made significant personal preparations for survival, like a fallout shelter with food and so on which would actually be useful under most of the different scenarios listed? I generously estimate 5% have made any such preparations.

I also see mentioned in the survey, and have read on this site, material related to what are in my view meaningless counterfactuals. The questions on dust specks vs. torture and Newcomb's Problem are so unlikely to ever be relevant in reality that I view discussion about them as worthless.

My judgment of this site as of now is that way too much time is spent discussing subjects of such low expected value (usually because of absurdly low expected probability of occurring) for using this site to be worthwhile. In fact I hypothesize that this discussion actually causes overconfidence related to such things happening, and at a minimum I have seen insufficient evidence for the value of using this site to continue doing so.

What if "status" IS a terminal value for most people?

18 handoflixue 24 December 2012 08:31PM

[Inspired by a few of the science bits in HP:MOR, and far more so by the discussions between Draco and Harry about "social skills". Shared because I suspect it's an insight some people would benefit from.]

One of the more prominent theories on the evolution of human intelligence suggests that humans evolved intelligence not to deal with their environment, but rather to deal with each other. A small intellectual edge would foster competition, and it would result in the sort of recursive, escalating loop that's required to explain why we're so SUBSTANTIALLY smarter than every other species on the planet.

If you accept that premise, it's obvious that intelligence should, naturally, come with a desire to compete against other humans. It should be equally obvious from looking at human history that, indeed, we seem to do exactly that.

Posit, then, that, linked to intelligence, there's a trait for politics - using intelligence to compete against other humans, to try and establish dominance via cunning instead of brawn.

And, like everything that the Blind Idiot God Evolution has created, imagine that there are humans who LACK this trait for politics, but still have intelligence.

Think about the humans who, instead of looking inwards at humanity for competition, instead turn outwards to the vast uncaring universe of physics and chemistry. Other humans are an obtainable target - a little evolutionary push, and your tribe can outsmart any other tribe. The universe is not nearly so easily cowed, though. The universe is, often, undefeatable, or at least, we have not come close to mastering it. Six thousand years and people still die to storms and drought and famine. Six thousand years, and we have just touched on the moon, just begun to even SEE other planets that might contain life like ours.

I never understood other people before, because I'm missing that trait.

And I finally, finally, understand that this trait even exists, and what it must BE like, to have the trait.

We are genetic, chemical beings. I believe this with every ounce of myself. There isn't a soul that defies physics, there is not a consciousness that defies neurology. The world, even ourselves, can be measured. Anger comes from a part of this mixture, as does happiness and love. They are not lesser for this. They are not!

This is not an interlude. It is woven into the meaning of what I realized. If you have this trait, then part of your values, as fundamental to yourself as eating and breathing and drinking, is the desire for status, to assert a certain form of dominance. Intelligence can almost be measured by status and cunning, and those who try to cheat and use crass physical violence are indeed generally condemned for it.

I don't have this trait. I don't value status in and of itself. It's useful, because it lets me do other things. It opens doors. So I invest in still having status, but status is not a goal; status is to me as a fork is to hunger - merely a means to an end.

So I have never, not once in my life, been able to comprehend the simple truth: 90% of the people I meet, quite possibly more, value status, as an intrinsic thing. Indeed, they are meant to use their intelligence as a tool to obtain this status. It is how we rose to where we are in the world.

I don't know what to make of this. It means everything I'd pieced together about people is utterly, utterly wrong, because it assumed that they all valued truth, and understanding - the pursuits of intelligence when you don't have the political trait.

I am, for a moment, deeply, deeply lost.

But, I notice, I am no longer confused.

Terminology suggestion: Say "degrees utility" instead of "utils" to prompt affine thinking

11 Sniffnoy 19 May 2013 08:03AM

A common mistake people make with utility functions is taking individual utility numbers as meaningful, and performing operations such as adding them or doubling them.  But utility functions are only defined up to positive affine transformation.

Talking about "utils" seems like it would encourage this sort of mistake; it makes it sound like some sort of quantity of stuff, that can be meaningfully added, scaled, etc.  Now the use of a unit -- "utils" -- instead of bare real numbers does remind us that the scale we've picked is arbitrary, but it doesn't remind us that the zero we've picked is also arbitrary, and encourages such illegal operations as addition and scaling.  It suggests linear, not affine.

But there is a common everyday quantity which we ordinarily measure with an affine scale, and that's temperature.  Now, in fact, temperatures really do have an absolute zero (and if you make sufficient use of natural units, they have an absolute scale as well), but generally we measure temperature with scales that were invented before that fact was recognized.  And so while we may have Kelvins, we have "degrees Fahrenheit" or "degrees Celsius".

If you've used these scales long enough you recognize that it is meaningless to e.g. add things measured on these scales, or to multiply them by scalars.  So I think it would be a helpful cognitive reminder to say something like "degrees utility" instead of "utils", to suggest an affine scale like we use for temperature, rather than a linear scale like we use for length or time or mass.
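A small numeric sketch of what a positive affine transformation preserves and what it destroys; the outcomes and the numbers are arbitrary:

    # Utility functions represent the same preferences under u' = a*u + b, a > 0.
    u = {"A": 1.0, "B": 2.0, "C": 4.0}
    v = {k: 3.0 * x + 10.0 for k, x in u.items()}  # same preferences, new scale and zero

    # Ratios of utilities are NOT preserved: "C is twice as good as B" is
    # meaningless, just as 20 degrees Celsius is not "twice as hot" as 10.
    print(u["C"] / u["B"], v["C"] / v["B"])  # 2.0 vs. 1.375

    # Ratios of utility DIFFERENCES are preserved: "going from B to C is twice
    # the improvement of going from A to B" is meaningful.
    print((u["C"] - u["B"]) / (u["B"] - u["A"]))  # 2.0
    print((v["C"] - v["B"]) / (v["B"] - v["A"]))  # 2.0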

The analogy isn't entirely perfect, because as I've mentioned above, temperature actually can be measured on a linear scale (and with sufficient use of natural units, an absolute scale); but the point is just to prompt the right style of thinking, and in everyday life we usually think of temperature as an (ordered) affine thing, like utility.

As such I recommend saying "degrees utility" instead of "utils".  If there is some other familiar quantity we also tend to use an affine scale for, perhaps an analogy with that could be used instead or as well.

It's not like anything to be a bat

15 Yvain 27 March 2010 02:32PM

...at least not if you accept a certain line of anthropic argument.

Thomas Nagel famously challenged the philosophical world to come to terms with qualia in his essay "What Is It Like to Be a Bat?". Bats, with sensory systems so completely different from those of humans, must have exotic bat qualia that we could never imagine. Even if we deduced all the physical principles behind echolocation, even if we could specify the movement of every atom in a bat's senses and nervous system that represents its knowledge of where an echolocated insect is, we would still have no idea what it's like to feel a subjective echolocation quale.

Anthropic reasoning is the idea that you can reason conditioning on your own existence. For example, the Doomsday Argument says that you would be more likely to exist in the present day if the overall number of future humans were medium-sized instead of humongous; therefore, since you exist in the present day, there must be only a medium-sized number of future humans, and the apocalypse must be nigh, for values of nigh equal to "within a few hundred years or so".
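The structure of the argument can be made explicit as a likelihood comparison; the birth rank and the two population totals below are illustrative assumptions, not settled figures:

    # Doomsday Argument as a toy Bayes-factor calculation. Suppose you are
    # roughly the 60 billionth human ever born, and compare two hypotheses
    # about the total number of humans who will ever exist.
    n_medium = 200e9  # "medium-sized" total: doom comes relatively soon
    n_huge = 200e12   # "humongous" total: humanity spreads far into the future

    # Under self-sampling, your birth rank is uniform on 1..N, so the
    # likelihood of observing any particular rank is 1/N.
    bayes_factor = (1 / n_medium) / (1 / n_huge)
    print(bayes_factor)  # 1000.0 -- an early rank favors the smaller total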

The Buddhists have a parable to motivate young seekers after enlightenment. They say - there are zillions upon zillions of insects, trillions upon trillions of lesser animals, and only a relative handful of human beings. For a reincarnating soul to be born as a human being, then, is a rare and precious gift, and an opportunity that should be seized with great enthusiasm, as it will be endless eons before it comes around again.

Whatever one thinks of reincarnation, the parable raises an interesting point. Considering the vast number of non-human animals compared to humans, the probability of being a human is vanishingly low. Therefore, chances are that if I could be an animal, I would be. This makes a strong anthropic argument that it is impossible for me to be an animal.

continue reading »
