Comment author: Brian_Tomasik 26 June 2014 03:03:55AM *  3 points [-]

Thanks, Jonah. I think skepticism about the dominance of the far future is actually quite compelling, such that I'm not certain that focusing on the far future dominates (though I think it likely does on balance, by a much smaller margin than I naively thought).

The strongest argument is just that believing we are in a position to influence astronomical numbers of minds runs contrary to Copernican intuitions that we should be typical observers. Isn't it a massive coincidence that we happen to be among a small group of creatures that can most powerfully affect our future light cone? Robin Hanson's resolution of Pascal's mugging relied on this idea.

The simulation-argument proposal is one specific way to hash out this Copernican intuition. The sim arg is quite robust and doesn't depend on the self-sampling assumption the way the doomsday argument does. We have reasonable a priori reasons for thinking there should be lots of sims -- not quite as strong as the arguments for thinking we should be able to influence the far future, but not vastly weaker.

Let's look at some sample numbers. We'll work in units of "number of humans alive in 2014," so that the current population of Earth is 1. Let's say the far future contains N humans (or human-ish sentient creatures), and a fraction f of those are sims that think they're on Earth around 2014. The sim arg suggests that Nf >> 1, i.e., we're probably in one of those sims. The probability that we're not in such a sim is 1/(Nf+1), which we can approximate as 1/(Nf).

Now, maybe future people have a higher intensity of experience i relative to that of present-day people. Also, it's much easier to affect the near future than the far future, so let e represent the amount of extra "entropy" that our actions face if they target the far future. For example, e = 10^-6 says there's a factor-of-a-million discount for how likely our actions are to actually make the difference we intend for the far future vs. if we had acted to affect the near term. This entropy can come from uncertainty about what the far future will look like, failures of goal preservation, or the intrusion of black swans.

Now let's consider two cases -- one assuming no correlations among actors (CDT) and one assuming full correlations (TDT-ish).

CDT case:

  • If we help in the short run, we can affect something like 1 unit of people (where "1" means "the ~7 billion alive in 2014").
  • If we help in the long run, and we're not in a sim, we can affect N people, with an experience-intensity multiple of i and a factor of e for uncertainty/entropy in our efforts. But the probability that we're not in a sim is 1/(Nf), so the overall expected value is (1/(Nf)) * Nie = ie/f.

It's not obvious that ie/f > 1. For instance, if f = 10^-4, i = 10^2, and e = 10^-6, this would equal 1. Hence it wouldn't be clear that targeting the far future is better than targeting the near term.

TDT-ish case:

  • There are Nf+1 copies of people (who think they're) on Earth in 2014, so if we help in the short run, we help all of those Nf+1 people because our actions are mirrored across our copies. Since Nf >> 1, we can approximate this as Nf.
  • If we help by taking far-future-targeting actions, even if we're in a sim, our actions can timelessly affect what happens in the basement, so we can have an impact regardless of whether we're in a sim or not. The future contains N people with i intensity factor, and there's e entropy on actions that try to do far-future stuff relative to short-term stuff. The expected value is Nie.

The ratio of long-term helping to short-term helping is Nie/(Nf) = ie/f, exactly the same as before. Hence, the uncertainty about whether the near- or far-future dominates persists.
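As a sanity check, here's a minimal sketch of both cases using the sample numbers from above (f = 10^-4, i = 10^2, e = 10^-6). The value of N is an arbitrary large placeholder, since it cancels out of both ratios:

```python
N = 1e12      # far-future population, in units of Earth's 2014 population (arbitrary large value)
f = 1e-4      # fraction of far-future minds that are sims of Earth circa 2014
i = 1e2       # experience-intensity multiple for future minds
e = 1e-6      # "entropy" discount on far-future-targeting actions

p_not_sim = 1 / (N * f)          # approximating 1/(N*f + 1), valid since N*f >> 1

# CDT: short-term helping affects 1 unit; long-term helping affects N*i*e,
# but only matters if we're not in a sim.
cdt_short = 1
cdt_long = p_not_sim * N * i * e      # simplifies to i*e/f

# TDT-ish: short-term helping is mirrored across ~N*f copies;
# long-term helping timelessly affects the basement regardless of sim status.
tdt_short = N * f
tdt_long = N * i * e

# Both long/short ratios reduce to i*e/f, which equals exactly 1
# for these sample numbers -- hence the indeterminacy.
```

Both ratios come out to i*e/f = 1 here, matching the claim that the same quantity pops out in both decision theories.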

I've tried these calculations with a few other tweaks, and something close to ie/f continues to pop out.

Now, this point is again of the "one relatively strong argument" variety, so I'm not claiming this particular elaboration is definitive. But it illustrates the types of ways that far-future-dominance arguments could be neglecting certain factors.

Note also that even if you think ie/f >> 1, it's still less than the 10^30 or whatever factor a naive far-future-dominance perspective might assume. Also, to be clear, I'm ignoring flow-through effects of short-term helping on the far future and just talking about the intrinsic value of the direct targets of our actions.

Comment author: Pablo_Stafforini 04 October 2016 11:26:12AM *  0 points [-]

it's much easier to affect the near future than the far future, so let e represent the amount of extra "entropy" that our actions face if they target the far future. For example, e = 10^-6 says there's a factor-of-a-million discount for how likely our actions are to actually make the difference we intend for the far future vs. if we had acted to affect the near-term.

In the past, when I expressed worries about the difficulties associated with far-future meme-spreading, which you favor as an alternative to extinction-risk reduction, you said you thought there was a significant chance of a singleton-dominated future. Such a singleton, you argued, would provide the causal stability needed for targeted meme-spreading to successfully influence our distant descendants. But now you seem to be implying that, other things equal, far-future meme-spreading is several orders of magnitude less likely to succeed than short-term interventions (including interventions aimed at reducing near-term extinction risk, which plausibly represents a significant fraction of total extinction risk). I find these two views hard to reconcile.

Comment author: Pablo_Stafforini 06 September 2016 02:58:04PM *  0 points [-]

Killian, Lewis M. "Social movements." Handbook of Modern Sociology. Chicago: Rand McNally (1964): 426-455.

[r/Scholar request]

Comment author: Pablo_Stafforini 06 September 2016 02:20:20PM *  0 points [-]

Retracting since article was found on r/Scholar.

Morris, Aldon & Cedric Herring (1987), ‘Theory and research in social movements: A critical review’, Annual Review of Political Science, vol. 2, pp. 137–198

[r/Scholar request]

Comment author: Pablo_Stafforini 23 July 2016 07:14:31AM *  0 points [-]

McGuire, W. J. (1969), The nature of attitudes and attitude change, in Elliot Aronson & Gardner Lindzey (eds.), The Handbook of Social Psychology, 2nd ed., Massachusetts: Addison-Wesley, vol. 3, pp. 136-314

Comment author: gwern 08 March 2016 05:40:22PM 2 points [-]
Comment author: Pablo_Stafforini 08 March 2016 07:26:12PM 0 points [-]

Ah, I thought I had searched Libgen but it seems I didn't. Thanks!

Comment author: Pablo_Stafforini 08 March 2016 04:58:51PM 0 points [-]

Landes, Joan B., ‘The Public and the Private Sphere: A Feminist Reconsideration’, in Joan B. Landes (ed.), Feminism, the Public and the Private, New York: Oxford University Press, 1998, ch. 5.

Comment author: Qiaochu_Yuan 10 June 2013 07:15:21AM *  20 points [-]

Most functions are not linear. This may seem too obvious to be worth mentioning, but it's very easy to assume that various functions that appear in real life are linear, e.g. to assume that if a little of something is good, then more of it is better, or if a little of something is bad, then more of it is even worse (apparently some people use the term "linear fallacy" for something like this assumption), or conversely in either case.

Nonlinearity is responsible for local optima that aren't global optima, which makes optimization a difficult task in general. It's not enough just to look at the direction in which you can improve the most by changing things a little (gradient ascent); sometimes you need to traverse an uncanny valley and change things a lot to get to a better local optimum. For example, if you're at a point in your life where you've made all of the small improvements you can, you may need to do something drastic, like quitting your job and finding a better one, which will temporarily make your life worse, in order to eventually make your life even better.
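A minimal sketch of this failure mode (the two-peak function, learning rate, and starting points are illustrative assumptions, not anything from the comment): plain gradient ascent only ever moves a little uphill, so which peak it finds depends entirely on where it starts.

```python
import math

def f(x):
    # Two peaks: a local max near x = -1 (height ~1) and a global max near x = 2 (height ~2).
    return math.exp(-(x + 1) ** 2) + 2 * math.exp(-(x - 2) ** 2)

def grad(x, h=1e-6):
    # Numerical derivative via central difference.
    return (f(x + h) - f(x - h)) / (2 * h)

def ascend(x, lr=0.1, steps=2000):
    # Gradient ascent: always steps slightly uphill, never crosses a valley.
    for _ in range(steps):
        x += lr * grad(x)
    return x

x_stuck = ascend(-1.5)   # starts in the lower peak's basin, converges near x = -1
x_best = ascend(1.0)     # this start happens to lie in the global peak's basin
```

Starting at -1.5 the ascent gets stuck at the lower peak even though a better optimum exists at x = 2; only a large jump (the "drastic change") would reach it.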

The reason variance in financial investments matters, even if you only care about expected utility, is that utility isn't a linear function of money. Your improvement in the ability to do something is usually not linear in the amount of time you put into practicing it (at some point you'll hit diminishing marginal returns). And so forth.

Comment author: Pablo_Stafforini 31 December 2015 12:09:35PM *  0 points [-]

Most functions are not linear. This may seem too obvious to be worth mentioning, but it's very easy to assume that various functions that appear in real life are linear, e.g. to assume that if a little of something is good, then more of it is better, or if a little of something is bad, then more of it is even worse (apparently some people use the term "linear fallacy" for something like this assumption), or conversely in either case.

Jordan Ellenberg discusses this phenomenon at length in How Not to Be Wrong: The Power of Mathematical Thinking. See here for some relevant quotes (a blog post by one of the targets of Ellenberg's criticism).

Comment author: tog 18 August 2015 05:12:20AM 1 point [-]

I asked for a good general guide to IQ (and in particular its objectivity and importance) on the LW FB group a while back. I got a bunch of answers, including these standouts:

http://www.psych.utoronto.ca/users/reingold/courses/intelligence/cache/1198gottfred.html

http://www.newscientist.com/data/doc/article/dn19554/instant_expert_13_-_intelligence.pdf

But there's still plenty of room for improvement on those so I'd be curious to hear others' suggestions.

Comment author: Pablo_Stafforini 18 August 2015 06:43:43PM *  1 point [-]

On IQ, I strongly recommend Ian Deary's Intelligence: A Very Short Introduction (link to shared file in my Google Drive).

Comment author: Pablo_Stafforini 03 May 2015 04:51:35PM 15 points [-]

A prima facie argument in favour of the efficacy of prayer is […] to be drawn from the very general use of it. The greater part of mankind, during all the historic ages, has been accustomed to pray for temporal advantages. How vain, it may be urged, must be the reasoning that ventures to oppose this mighty consensus of belief! Not so. The argument of universality either proves too much, or else it is suicidal. It either compels us to admit that the prayers of Pagans, of Fetish worshippers and of Buddhists who turn praying wheels, are recompensed in the same way as those of orthodox believers; or else the general consensus proves that it has no better foundation than the universal tendency of man to gross credulity.

Francis Galton, ‘Statistical Inquiries into the Efficacy of Prayer’, Fortnightly Review, vol. 12, no. 68 (August, 1872), pp. 125–135

Comment author: jkaufman 14 March 2014 05:50:04PM 1 point [-]

I started rating happiness on a ten point scale in response to a randomized buzzer four months ago and am expecting a child in the next few weeks. I intend to keep up the sampling.

Comment author: Pablo_Stafforini 23 April 2015 06:08:52AM 2 points [-]

Any updates?
