Comment author: gjm 28 February 2017 11:59:24AM 0 points [-]

OK, I guess. I have to say that the main impression I'm getting from this exchange is that you wanted to say "boo Eliezer"; it seems like if you wanted to make an actual usefully constructive point you'd have been somewhat more explicit in your original comment. ("Eliezer wrote this in 1999: [...]. I know that Eliezer has since repudiated a lot of his opinions and thought processes of that period, but if his opinions were that badly wrong in 1999 then we shouldn't take them too seriously now either." or whatever.)

I will vigorously defend anyone's right to say "boo Eliezer" or "yay Eliezer", but don't have much optimism about getting a useful outcome from a conversation that begins that way, and will accordingly drop it now.

Comment author: Pablo_Stafforini 03 March 2017 06:52:07AM 0 points [-]

Thanks for the feedback. I agree that a comment worded in the manner you suggest would have communicated my point more effectively.

Comment author: gjm 26 February 2017 09:03:31PM 0 points [-]

I hadn't realised anyone was arguing for not treating Eliezer's current predictions with caution. I can't imagine why anyone wouldn't treat anyone's predictions with caution in this field.

Comment author: Pablo_Stafforini 26 February 2017 09:09:25PM *  0 points [-]

My point is that these early pronouncements are (limited) evidence that we should treat Eliezer's predictions with more caution than we otherwise would.

Comment author: gjm 26 February 2017 02:46:02AM 4 points [-]

Hasn't Eliezer said, on every occasion since the beginning of LW when the opportunity has arisen, that Eliezer-in-1999 was disastrously wrong and confused about lots of important things?

(I don't know whether present-day-Eliezer thinks 18-years-ago-Eliezer was wrong about this particular thing, but I would be cautious about taking things he said that long ago as strongly indicative of his present opinions.)

Comment author: Pablo_Stafforini 26 February 2017 08:46:45PM *  0 points [-]

Yes, I am aware that this is what Eliezer has said, and I wasn't implying that those early statements reflect Eliezer's current thinking. There is a clear difference between "Eliezer believed this in the past, so he must believe it at present" and "Eliezer made some wrong predictions in the past, so we must treat his current predictions with caution". Eliezer is entitled to ask his readers not to assume that his past beliefs reflect those of his present self, but he is not entitled to ask them not to hold him responsible for having once said stuff that some may think was ill-judged. (If Eliezer had committed serious crimes at the age of 18, it would be absurd for him to now claim that we should regard that person as a different individual who also happens to be called 'Eliezer Yudkowsky'. Epistemic responsibility seems analogous to moral responsibility in this respect.)

Comment author: CarlShulman 01 August 2013 07:15:05PM *  19 points [-]

People differ in their estimates within MIRI. Eliezer has not published a detailed explanation of his estimates, although he has published many of his arguments for his estimates.

For myself, I think the cause of AI risk reduction, in total and over time, has a worthwhile small-to-medium probability of making an astronomical difference to our civilization's future (and a high probability that the future will be very powerfully shaped by artificial intelligence in a way that can be affected by initial conditions). But the impact of MIRI in particular has to be a far smaller subset of the expected impact of the cause as a whole, in light of its limited scale and capabilities relative to the relevant universes (total AI research, governments, etc.), the probability that AI is not close enough for MIRI to be very relevant, the probability that MIRI's approach turns out irrelevant, uncertainty over the sign of effects due to contributions to AI progress, future AI risk efforts/replaceability, and various other drag factors.

ETA: To be clear, I think that MIRI's existence, relative to the counterfactual in which it never existed, has been a good thing and reduced x-risk in my opinion, despite not averting a "medium probability," e.g. 10%, of x-risk.

ETA2: Probabilities matter because there are alternative uses of donations and human capital.

I have just spent a month in England interacting extensively with the EA movement here. Donors concerned with impact on the long-run future are considering donations to all of the following (all of these are from talks with actual people making concrete short-term choices; in addition to donations, people are also considering career choices post-university):

  • 80,000 Hours, the Center for Effective Altruism, and other organizations that help altruists improve their careers, coordination, and information, and that do movement building; some specifically mention the Center for Applied Rationality; these organizations also improve non-charity options, e.g. 80k helping people going into scientific funding agencies and political careers where they will be in a position to affect research and policy reactions to technologies relevant to x-risk and other trajectory changes
  • AMF/GiveWell's other recommended charities to keep GiveWell and the EA movement growing (GiveWell's growth in particular has been meteoric, with less extreme but still rapid growth in other EA institutions such as Giving What We Can and CEA), while actors like GiveWell Labs, Paul Christiano, and Nick Beckstead and others at FHI investigate the intervention options and cause prioritization, followed by organization-by-organization analysis of the GiveWell variety, laying the groundwork for massive support for the interventions and organizations identified by such processes as most effective in terms of their far-future impact
  • Finding ways to fund such evaluations with RFMF, e.g. by paying for FHI or CEA hires to work on them
  • The FHI's other work
  • A donor-advised fund investing the returns until such evaluations or more promising opportunities present themselves or are elicited by the fund, including both known options for which no organization with RFMF or adequate quality exists, and unknown future options; some possible applications include, e.g. convening panels of independent scientific experts to evaluate key technical claims about future technologies, extensions of the DAGGRE forecasting methods, a Bayesian aggregation algorithm that greatly improves extraction of scientific expert opinion or science courts that could mobilize much more talent and resources to neglected problems with good cases, some key steps in biotech enhancement, AI safety research when AI is better understood, and more

This Paul Christiano post discusses the virtues of the donor-advised fund/"Fund for the Future" approach; Giving What We Can has already set up a charitable trust to act as a donor-advised fund in the UK, with one coming soon in the US, and Fidelity already offers a standardized donor-advised fund in America (DAFs allow one to claim tax benefits of donation immediately and then allow the donation to compound); there was much discussion this month about the details of setting up a DAF dedicated to far future causes (the main logistical difficulties are setting up the decision criteria, credibility, and maximum protection from taxation and disruption)

Comment author: Pablo_Stafforini 25 February 2017 10:17:05PM 0 points [-]

Eliezer has not published a detailed explanation of his estimates, although he has published many of his arguments for his estimates.

Eliezer wrote this in 1999:

My current estimate, as of right now, is that humanity has no more than a 30% chance of making it, probably less. The most realistic estimate for a seed AI transcendence is 2020; nanowar, before 2015.

Comment author: realitygrill 17 January 2011 10:10:06PM 5 points [-]

Subject: Economics

Recommendation: Introduction to Economic Analysis (www.introecon.com)

This is a very readable (and free) microecon book, and I recommend it for clarity and concision, analyzing interesting issues, and generally taking a more sophisticated approach - you know, when someone further ahead of you treats you as an intelligent but uninformed equal. It could easily carry someone through 75% of a typical bachelor's in economics. I've also read Case & Fair and Mankiw, which were fine but stolid, uninspiring texts.

I'd also recommend Wilkinson's An Introduction to Behavioral Economics as being quite lucid. Unfortunately it is the only textbook out on behavioral econ as of last year, so I can't say it's better than others.

Comment author: Pablo_Stafforini 25 February 2017 01:50:10AM *  0 points [-]

Luke's post, based on this recommendation, reads as follows:

On economics, realitygrill recommends McAfee's Introduction to Economic Analysis over Mankiw's Macroeconomics and Case & Fair's Principles of Macroeconomics

I believe the books realitygrill is referring to are instead Mankiw's Principles of Microeconomics and Case & Fair's Principles of Microeconomics, since McAfee's is a microeconomics (not a macroeconomics) textbook.

Comment author: Brian_Tomasik 26 June 2014 03:03:55AM *  3 points [-]

Thanks, Jonah. I think skepticism about the dominance of the far future is actually quite compelling, so I'm not certain that focusing on the far future dominates (I still think it probably does on balance, but by much less than I naively thought).

The strongest argument is just that believing we are in a position to influence astronomical numbers of minds runs contrary to Copernican intuitions that we should be typical observers. Isn't it a massive coincidence that we happen to be among a small group of creatures that can most powerfully affect our future light cone? Robin Hanson's resolution of Pascal's mugging relied on this idea.

The simulation-argument proposal is one specific way to hash out this Copernican intuition. The sim arg is quite robust and doesn't depend on the self-sampling assumption the way the doomsday argument does. We have reasonable a priori reasons for thinking there should be lots of sims -- not quite as strong as the arguments for thinking we should be able to influence the far future, but not vastly weaker.

Let's look at some sample numbers. We'll work in units of "number of humans alive in 2014," so that the current population of Earth is 1. Let's say the far future contains N humans (or human-ish sentient creatures), and a fraction f of those are sims that think they're on Earth around 2014. The sim arg suggests that Nf >> 1, i.e., we're probably in one of those sims. The probability we're not in such a sim is 1/(Nf+1), which we can approximate as 1/(Nf). Now, maybe future people have a higher intensity of experience i relative to that of present-day people. Also, it's much easier to affect the near future than the far future, so let e represent the amount of extra "entropy" that our actions face if they target the far future. For example, e = 10^-6 says there's a factor-of-a-million discount for how likely our actions are to actually make the difference we intend for the far future vs. if we had acted to affect the near term. This entropy can come from uncertainty about what the far future will look like, failures of goal preservation, or intrusion of black swans.

Now let's consider two cases -- one assuming no correlations among actors (CDT) and one assuming full correlations (TDT-ish).

CDT case:

  • If we help in the short run, we can affect something like 1 people (where "1" means "7 billion").
  • If we help in the long run and we're not in a sim, we can affect N people, with an experience-intensity multiple of i and a factor of e for uncertainty/entropy in our efforts. But the probability we're not in a sim is 1/(Nf), so the overall expected value is (1/(Nf)) × N × i × e = ie/f.

It's not obvious that ie/f > 1. For instance, if f = 10^-4, i = 10^2, and e = 10^-6, this would equal 1. Hence it wouldn't be clear that targeting the far future is better than targeting the near term.

TDT-ish case:

  • There are Nf+1 copies of people (who think they're) on Earth in 2014, so if we help in the short run, we help all of those Nf+1 people because our actions are mirrored across our copies. Since Nf >> 1, we can approximate this as Nf.
  • If we help by taking far-future-targeting actions, even if we're in a sim, our actions can timelessly affect what happens in the basement, so we can have an impact regardless of whether we're in a sim or not. The future contains N people with i intensity factor, and there's e entropy on actions that try to do far-future stuff relative to short-term stuff. The expected value is Nie.

The ratio of long-term helping to short-term helping is (N × i × e)/(N × f) = ie/f, exactly the same as before. Hence, the uncertainty about whether the near or far future dominates persists.

I've tried these calculations with a few other tweaks, and something close to ie/f continues to pop out.
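
A minimal Python sketch of the two cases, using the example values of f, i, and e above and an assumed illustrative N (any N with Nf >> 1 behaves the same), shows the ie/f ratio dropping out of both:

    # Sketch of the CDT vs. TDT-ish comparison above. f, i, e are the example
    # values from the comment; N is an assumed illustrative value.
    N = 1e15   # far-future population, in units of "humans alive in 2014"
    f = 1e-4   # fraction of future people who are sims thinking they're on 2014 Earth
    i = 1e2    # experience-intensity multiple for future people
    e = 1e-6   # "entropy" discount on far-future-targeting actions

    # CDT case: actors uncorrelated.
    cdt_short = 1.0                       # help ~1 unit (7 billion people) in the short run
    cdt_long = (1 / (N * f)) * N * i * e  # P(not in a sim) * N * i * e = ie/f

    # TDT-ish case: actions mirrored across all copies.
    tdt_short = N * f + 1                 # all Nf + 1 copies of 2014 Earth are helped
    tdt_long = N * i * e                  # basement future affected regardless of sim status

    print(cdt_long / cdt_short)  # = ie/f = 1.0 with these values
    print(tdt_long / tdt_short)  # ≈ ie/f, since Nf >> 1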

Now, this point is again of the "one relatively strong argument" variety, so I'm not claiming this particular elaboration is definitive. But it illustrates the types of ways that far-future-dominance arguments could be neglecting certain factors.

Note also that even if you think ie/f >> 1, it's still less than the 10^30 or whatever factor a naive far-future-dominance perspective might assume. Also, to be clear, I'm ignoring flow-through effects of short-term helping on the far future and just talking about the intrinsic value of the direct targets of our actions.

Comment author: Pablo_Stafforini 04 October 2016 11:26:12AM *  0 points [-]

it's much easier to affect the near future than the far future, so let e represent the amount of extra "entropy" that our actions face if they target the far future. For example, e = 10^-6 says there's a factor-of-a-million discount for how likely our actions are to actually make the difference we intend for the far future vs. if we had acted to affect the near-term.

In the past, when I expressed worries about the difficulties associated with far-future meme-spreading, which you favor as an alternative to extinction-risk reduction, you said you thought there was a significant chance of a singleton-dominated future. Such a singleton, you argued, would provide the necessary causal stability for targeted meme-spreading to successfully influence our distant descendants. But now you seem to be implying that, other things equal, far-future meme-spreading is several orders of magnitude less likely to succeed than short-term interventions (including interventions aimed at reducing near-term risk of extinction, which plausibly represents a significant fraction of total extinction risk). I find these two views hard to reconcile.

Comment author: gwern 08 March 2016 05:40:22PM 2 points [-]
Comment author: Pablo_Stafforini 08 March 2016 07:26:12PM 0 points [-]

Ah, I thought I had searched Libgen but it seems I didn't. Thanks!

Comment author: Pablo_Stafforini 08 March 2016 04:58:51PM 0 points [-]

Landes, Joan B., The Public and the Private Sphere: A Feminist Reconsideration, in Joan B. Landes (ed.), Feminism, the Public and the Private, New York: Oxford University Press, 1998, ch. 5.

Comment author: tog 18 August 2015 05:12:20AM 1 point [-]

I asked for a good general guide to IQ (and in particular its objectivity and importance) on the LW FB group a while back. I got a bunch of answers, including these standouts:

http://www.psych.utoronto.ca/users/reingold/courses/intelligence/cache/1198gottfred.html

http://www.newscientist.com/data/doc/article/dn19554/instant_expert_13_-_intelligence.pdf

But there's still plenty of room for improvement on those so I'd be curious to hear others' suggestions.

Comment author: Pablo_Stafforini 18 August 2015 06:43:43PM *  1 point [-]

On IQ, I strongly recommend Ian Deary's Intelligence: A Very Short Introduction (link to shared file in my Google Drive).

Comment author: Pablo_Stafforini 03 May 2015 04:51:35PM 15 points [-]

A prima facie argument in favour of the efficacy of prayer is […] to be drawn from the very general use of it. The greater part of mankind, during all the historic ages, has been accustomed to pray for temporal advantages. How vain, it may be urged, must be the reasoning that ventures to oppose this mighty consensus of belief! Not so. The argument of universality either proves too much, or else it is suicidal. It either compels us to admit that the prayers of Pagans, of Fetish worshippers and of Buddhists who turn praying wheels, are recompensed in the same way as those of orthodox believers; or else the general consensus proves that it has no better foundation than the universal tendency of man to gross credulity.

Francis Galton, ‘Statistical Inquiries into the Efficacy of Prayer’, Fortnightly Review, vol. 12, no. 68 (August, 1872), pp. 125–135
