Killian, Lewis M. "Social movements." Handbook of Modern Sociology. Chicago: Rand McNally (1964): 426-455.
Retracting, since the article was found on r/Scholar.
Morris, Aldon & Cedric Herring (1987), Theory and Research in Social Movements: A Critical Review, Annual Review of Political Science, vol. 2, pp. 137-98
McGuire, W. J. (1969), The nature of attitudes and attitude change, in Elliot Aronson & Gardner Lindzey (eds.), The Handbook of Social Psychology, 2nd ed., Massachusetts: Addison-Wesley, vol. 3, pp. 136-314
Ah, I thought I had searched Libgen but it seems I didn't. Thanks!
Landes, Joan B., The Public and the Private Sphere: A Feminist Reconsideration, in Joan B. Landes (ed.), Feminism, the Public and the Private, New York: Oxford University Press, 1998, ch. 5.
Most functions are not linear. This may seem too obvious to be worth mentioning, but it's very easy to assume that functions appearing in real life are linear: for example, to assume that if a little of something is good, then more of it must be better, or that if a little of something is bad, then more of it must be even worse (apparently some people use the term "linear fallacy" for something like this assumption), or the converse in either case.
Nonlinearity is responsible for local optima that aren't global optima, which makes optimization difficult in general: it's not enough to look at the direction in which you can improve the most by changing things a little (gradient ascent). Sometimes you need to traverse an uncanny valley and change things a lot to get to a better local optimum. For example, if you're at a point in your life where you've made all the small improvements you can, you may need to do something drastic, like quitting your job to find a better one, which temporarily makes your life worse in order to eventually make it even better.
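A toy numerical sketch of that point (my own illustration, not from the original comment): gradient ascent on a function with two peaks converges to whichever peak's basin it starts in, even when a taller peak exists elsewhere. The function and step sizes here are made up purely for demonstration.

```python
from math import exp

def f(x):
    # Two bumps: a lower peak near x = -1 and a taller (global) peak near x = 2.
    return exp(-(x + 1) ** 2) + 2 * exp(-(x - 2) ** 2)

def gradient_ascent(x, step=0.01, iters=5000, h=1e-6):
    # Repeatedly move a little in the direction of steepest increase,
    # estimating the derivative numerically.
    for _ in range(iters):
        grad = (f(x + h) - f(x - h)) / (2 * h)
        x += step * grad
    return x

# Starting on the left, we converge to the lower peak and stay there;
# only a big jump (a different starting point) reaches the global peak.
x_left = gradient_ascent(-1.5)
x_right = gradient_ascent(1.5)
print(x_left, x_right, f(x_left) < f(x_right))
```

Small steps alone never cross the valley between the peaks, which is the "drastic change" point in prose form.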
The reason variance in financial investments matters, even if you only care about expected utility, is that utility isn't a linear function of money. Your improvement in the ability to do something is usually not linear in the amount of time you put into practicing it (at some point you'll hit diminishing marginal returns). And so forth.
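The money/utility point can be checked with a two-line calculation. The numbers below are my own toy example, using log utility as a standard concave choice: a 50/50 gamble between $50k and $150k has the same expected money as a sure $100k, but lower expected utility.

```python
from math import log

# Sure $100k vs. a fair coin flip between $50k and $150k.
# Both have expected money of $100k, but log (a concave function)
# penalizes the spread.
utility_sure = log(100_000)
utility_gamble = 0.5 * log(50_000) + 0.5 * log(150_000)
print(utility_sure > utility_gamble)  # the sure thing wins despite equal expected money
```

This is just Jensen's inequality: for a concave utility function, expected utility of a gamble is below the utility of its expected value, so variance matters even to a pure expected-utility maximizer.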
Jordan Ellenberg discusses this phenomenon at length in How Not to Be Wrong: The Power of Mathematical Thinking. See here for some relevant quotes (a blog post by one of the targets of Ellenberg's criticism).
I asked for a good general guide to IQ (and in particular its objectivity and importance) on the LW FB group a while back. I got a bunch of answers, including these standouts:
http://www.psych.utoronto.ca/users/reingold/courses/intelligence/cache/1198gottfred.html
http://www.newscientist.com/data/doc/article/dn19554/instant_expert_13_-_intelligence.pdf
But there's still plenty of room for improvement on those so I'd be curious to hear others' suggestions.
On IQ, I strongly recommend Ian Deary's Intelligence: A Very Short Introduction (link to shared file in my Google Drive).
A prima facie argument in favour of the efficacy of prayer is […] to be drawn from the very general use of it. The greater part of mankind, during all the historic ages, has been accustomed to pray for temporal advantages. How vain, it may be urged, must be the reasoning that ventures to oppose this mighty consensus of belief! Not so. The argument of universality either proves too much, or else it is suicidal. It either compels us to admit that the prayers of Pagans, of Fetish worshippers and of Buddhists who turn praying wheels, are recompensed in the same way as those of orthodox believers; or else the general consensus proves that it has no better foundation than the universal tendency of man to gross credulity.
Francis Galton, ‘Statistical Inquiries into the Efficacy of Prayer’, Fortnightly Review, vol. 12, no. 68 (August, 1872), pp. 125–135
I started rating happiness on a ten point scale in response to a randomized buzzer four months ago and am expecting a child in the next few weeks. I intend to keep up the sampling.
Any updates?
Thanks, Jonah. I think skepticism about the dominance of the far future is actually quite compelling, to the point that I'm not certain focusing on the far future dominates (I think it probably does on balance, but by much less than I naively thought).
The strongest argument is just that believing we are in a position to influence astronomical numbers of minds runs contrary to Copernican intuitions that we should be typical observers. Isn't it a massive coincidence that we happen to be among a small group of creatures that can most powerfully affect our future light cone? Robin Hanson's resolution of Pascal's mugging relied on this idea.
The simulation-argument proposal is one specific way to hash out this Copernican intuition. The sim arg is quite robust and doesn't depend on the self-sampling assumption the way the doomsday argument does. We have reasonable a priori reasons for thinking there should be lots of sims -- not quite as strong as the arguments for thinking we should be able to influence the far future, but not vastly weaker.
Let's look at some sample numbers. We'll work in units of "number of humans alive in 2014," so that the current population of Earth is 1. Let's say the far future contains N humans (or human-ish sentient creatures), and a fraction f of those are sims that think they're on Earth around 2014. The sim arg suggests that Nf >> 1, i.e., we're probably in one of those sims. The probability we're not in such a sim is 1/(Nf+1), which we can approximate as 1/(Nf). Now, maybe future people have a higher intensity of experience i relative to that of present-day people. Also, it's much easier to affect the near future than the far future, so let e represent the amount of extra "entropy" that our actions face if they target the far future. For example, e = 10^-6 says there's a factor-of-a-million discount for how likely our actions are to actually make the difference we intend for the far future vs. if we had acted to affect the near term. This entropy can come from uncertainty about what the far future will look like, failures of goal preservation, or intrusion of black swans.
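To make the approximation concrete, here is a minimal numerical sketch; the value of N is purely illustrative, since the comment doesn't fix one, while f matches the sample value used below.

```python
N = 10 ** 10   # hypothetical far-future population, in units of Earth's 2014 population
f = 10 ** -4   # fraction of future people in sims that think they're on 2014 Earth

# Probability that we are *not* in such a sim: 1/(Nf + 1),
# well approximated by 1/(Nf) once Nf >> 1.
p_exact = 1 / (N * f + 1)
p_approx = 1 / (N * f)
print(p_exact, p_approx)
```

With Nf on the order of 10^6, the exact and approximate probabilities agree to about one part in a million, which is why the comment drops the "+1".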
Now let's consider two cases -- one assuming no correlations among actors (CDT) and one assuming full correlations (TDT-ish).
CDT case:
If we're not in a sim (probability roughly 1/(Nf)), a far-future-targeted action affects on the order of Ni worth of experience, discounted by e, so the expected ratio of long-term to short-term helping is (1/(Nf)) x Nie = ie/f. It's not obvious that ie/f > 1. For instance, if f = 10^-4, i = 10^2, and e = 10^-6, this would equal 1. Hence it wouldn't be clear that targeting the far future is better than targeting the near term.
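As a sanity check, plugging the comment's sample numbers into the ratio:

```python
f = 1e-4   # fraction of future people in 2014-style sims
i = 1e2    # intensity multiplier for far-future experience
e = 1e-6   # "entropy" discount on far-future-targeted actions

ratio = i * e / f   # long-term vs. short-term helping
print(ratio)  # equals 1 with these numbers (up to float rounding)
```

So with these particular values the two targets come out exactly tied, and modest shifts in any of the three parameters tip the balance either way.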
TDT-ish case:
The ratio of long-term helping to short-term helping is Nie/(Nf) = ie/f, exactly the same as before. Hence, the uncertainty about whether the near- or far-future dominates persists.
I've tried these calculations with a few other tweaks, and something close to ie/f continues to pop out.
Now, this point is again of the "one relatively strong argument" variety, so I'm not claiming this particular elaboration is definitive. But it illustrates the types of ways that far-future-dominance arguments could be neglecting certain factors.
Note also that even if you think ie/f >> 1, it's still less than the 10^30 or whatever factor a naive far-future-dominance perspective might assume. Also, to be clear, I'm ignoring flow-through effects of short-term helping on the far future and just talking about the intrinsic value of the direct targets of our actions.
In the past, when I expressed worries about the difficulties associated with far-future meme-spreading, which you favor as an alternative to extinction-risk reduction, you said you thought there was a significant chance of a singleton-dominated future. Such a singleton, you argued, would provide the causal stability needed for targeted meme-spreading to successfully influence our distant descendants. But now you seem to be implying that, other things being equal, far-future meme-spreading is several orders of magnitude less likely to succeed than short-term interventions (including interventions aimed at reducing near-term extinction risk, which plausibly represents a significant fraction of total extinction risk). I find these two views hard to reconcile.