It seems to me that the difference between your estimate of the time scale and other people's estimates is that everyone has different garbage and the computation is GIGO. People aren't disagreeing about facts; they're disagreeing about the weights of fit parameters, which have so little data behind them that the uncertainties dwarf the values.
It seems like the resolution is to let go of the feeling that you can improve your timescale guess, and to get over any paralysis about making decisions with imperfect information. That's a necessary part of life.
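To make the GIGO point concrete, here's a minimal sketch of what I mean. The model and every number in it are invented purely for illustration; the only point is that when the priors on the fit parameters are this wide, the spread of the output dwarfs the point estimate.

```python
import random

# Toy model (entirely made up): years to AGI = number of key insights still
# needed, divided by insights found per year. Both "fit parameters" get very
# wide lognormal priors because there is almost no data to pin them down.
def sample_years_to_agi():
    insights_needed = random.lognormvariate(2.0, 1.5)     # median ~7, heavy-tailed
    insights_per_year = random.lognormvariate(-1.0, 1.5)  # median ~0.4, heavy-tailed
    return insights_needed / insights_per_year

samples = sorted(sample_years_to_agi() for _ in range(100_000))
median = samples[len(samples) // 2]
p10, p90 = samples[len(samples) // 10], samples[9 * len(samples) // 10]
print(f"median ~{median:.0f} years; 10th-90th percentile ~{p10:.0f} to ~{p90:.0f} years")
# The 10th-90th interval spans a couple of orders of magnitude: the uncertainty
# dwarfs the point estimate, and the output is only as good as the priors.
```

Swap in someone else's made-up priors and the median moves by decades, which is exactly the everyone-has-different-garbage problem.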
If AGI is not likely within this century (which also seems unlikely) then working on AGI is probably pointless.
Ideally, Friendly AI should be understood long before AGI is feasible. Like, 200 years ago. It'll be too late when AGI is possible.
This seems fair. But I am unconvinced that the marginal benefit of understanding AGI now is even close to the marginal benefit of spreading and training rationality now. Unless you believe that either FAI research is very non-parallelizable or that AGI will be feasible in the near future, it seems likely that spreading rationality is more important today (even to the instrumental goal of understanding FAI before AGI is feasible). But if you do believe that AGI will be feasible in the near future, and in particular before long-term efforts to produce more rationalists will have a significant effect on the world, then working on FAI directly (or, more likely, funding current research on FAI, or attempting to influence the AI research community, or trying to do research which is likely to lead to understandable AGI, etc.) is urgent.
So I concede that understanding FAI should precede the feasibility of AGI, but without some additional argument about the timescales on which AGI is feasible, the difficulty of parallelizing FAI research, or some unexpected obstruction to spreading rationality, I am not yet convinced that I should work on or fund FAI research or do anything related to AI.
I am very skeptical about causes that engage exclusively in spreading awareness. By directing the efforts of a small proportion of the rationalists we produce towards direct work on FAI, we validate that we are in fact producing people capable of working on the problem, as opposed to merely having an exponentially growing group of people who profess "Yay FAI!".
I am very skeptical about causes that engage exclusively in spreading awareness.
As am I. However, here are some things I believe about the SIAI and FAI:
To the average well-educated person, the efforts of the SIAI are indistinguishable from a particularly emphatic declaration of "Yay FAI!" To the average person who cares strongly about FAI, the performance of the SIAI still does not validate that "we are in fact producing people capable of working on the problem," because there are essentially no standards to judge against, no concrete theoretical results in evidence, and no suggestion that impressive theoretical advances are forthcoming. Saying "the problem is difficult" is a perfectly fine defense, but it does not give the work being done any more value as validation.
The average intelligent (and even abnormally rational) non-singularitarian has little respect for the work of the SIAI, to the extent that the affiliation of the SIAI with outreach significantly reduces its credibility with the most important audience, and the (even quite vague) affiliation of an individual with the SIAI makes it significantly more difficult for that individual to argue credibly about the future of humanity.
It is not at all obvious that FAI is the most urgent technical problem currently in view. For example, pushing better physical understanding of the brain, better algorithmic understanding of cognition, and technology for interfacing with human brains all seem like they could have a much larger effect on the probability of a positive singularity. The real argument for normal humans working on FAI is extremely complicated and uncertain.
I place fairly little value on an exponentially growing group of people interested in FAI, except insofar as they can be converted into an exponentially large group of people who care about the future of humanity and act rationally on that preference. I think there are easier ways to accomplish this goal; and on the flip side I think "merely" having an exponentially large group of rational people who care about humanity is incredibly valuable.
My main concern in the direction you are pointing is the difficulty of effective outreach when the rationality on offer appears to be disconnected from reality (in particular the risk that what you are spreading will almost certainly cease to be "rationality" without some good grounding). I believe working on FAI is a uniquely bad way to overcome this difficulty, because most of the target audience (really smart people whose help is incredibly valuable) considers work on FAI even more disconnected from reality than rationality outreach itself, and because the quality or relevance of work on FAI is essentially impossible for almost anyone not directly involved with that work to assess.
Timescales don't just matter in this way. The more time that passes, the faster an AGI will be when it starts up, since the default hardware will be so much better. That means that if AGI is limited primarily by the right insights rather than by raw processing power, and it takes a while to hit on those insights, then the chance of bad things happening goes up. So even as the timescale extends outward, that is arguably more of a reason to focus on Friendliness-related issues.
That said, I have absolutely no idea how this balances with the timescale issues discussed in your post.
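To put a rough number on the hardware point above (the doubling period is an assumption for illustration, not a measured figure):

```python
# If effective compute keeps doubling every ~1.5 years (assumed), then the same
# insights arriving 20 years later meet vastly faster default hardware.
doubling_period_years = 1.5
delay_years = 20
speedup = 2 ** (delay_years / doubling_period_years)
print(f"~{speedup:,.0f}x more compute available at launch")  # ~10,000x
```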
Time scales matter a lot. As opposed to all the genres the HPatMoR characters think they're in, I think I'm living in a 4-X game like Civilization or Master of Orion. You don't work directly on the Science Victory Condition (in this case, FAI) if you can do better by growing your research and production capacities to build the FAI faster, but you also don't waste your time building up capacity when it's time to race for the finish line. A median of 2030 is, in my very rough and not at all accurate estimate, annoyingly close to the borderline, although mine is more along the lines of 2050.
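Here is a toy version of that 4-X tradeoff, with every number invented; it only illustrates how the assumed deadline sets the switch point from capacity-building to racing.

```python
# Toy model (all numbers invented): capacity grows ~10%/year while you invest
# in outreach/growth; once you switch to racing, direct progress accrues at a
# rate equal to your capacity at the moment you switched.
def total_progress(deadline_years, years_growing, growth_rate=0.10, start_capacity=1.0):
    capacity = start_capacity * (1 + growth_rate) ** years_growing
    return capacity * max(deadline_years - years_growing, 0)

for deadline in (20, 40):  # very roughly "2030" vs "2050"
    best = max(range(deadline + 1), key=lambda g: total_progress(deadline, g))
    print(f"deadline {deadline}y: grow for ~{best}y, then race "
          f"(total progress {total_progress(deadline, best):.1f})")
```

With these made-up numbers the switch point lands about a decade before the deadline, which is why a 2030-ish median already feels annoyingly close to the borderline while a 2050 median leaves room to keep building.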
I think I'm living in a 4-X game like Civilization or Master of Orion.
Beware the Ludic Fallacy.
There's a great section in "The Black Swan" where Taleb is called in to consult for a casino or something, and he discovers that the biggest losses the casino ever suffered weren't due to bad luck in the blackjack pits or anything like that. One loss involved a lawsuit caused by a stage tiger that got loose and hurt a bunch of people; another loss was caused when an employee failed, for inexplicable reasons, to send a special tax form to the IRS, so the casino was hit with a big penalty.
I think that your marginal impact is also important to consider.
It seems conceivable that you could hedge your bets by arranging to work on one thing, while someone else you have confidence in works on something else. Like, you do AGI and Bob does activism.
Does adding Bob to AGI once you're there help? By how much? If (for some odd reason) it's negligible, then it's probably better to split up.
How dedicated does Bob have to be to AGI to be helpful? If an hour a week of work from him gets 80% of the utility of his going all-in, then it's also probably better for him to be doing other things. And vice versa.
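A back-of-the-envelope version of that comparison; the 80% comes from the hypothetical above and the other numbers are made up.

```python
# Hypothetical numbers only: compare Bob splitting his time vs. going all-in on AGI.
bob_fulltime_agi = 1.00       # normalize Bob's full-time AGI contribution to 1
bob_parttime_agi = 0.80       # an hour a week captures 80% of that (as stipulated)
bob_fulltime_activism = 0.50  # invented value of Bob's alternative full-time work

split = bob_parttime_agi + bob_fulltime_activism  # ~1.3
all_in = bob_fulltime_agi                         # 1.0
print("split wins" if split > all_in else "all-in wins")
```

The conclusion flips as soon as the part-time fraction drops or the alternative work is worth little, which is why the shape of Bob's returns curve matters so much.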
If AGI is likely in the next couple of decades (I am rather skeptical) then long-term activism or outreach are probably pointless. If AGI is not likely within this century (which also seems unlikely) then working on AGI is probably pointless.
Paul, it looks like you ended up deciding to go into FAI research instead of long-term activism or outreach. I'm curious how you reached that decision. Have you explained it anywhere?
Paul, it looks like you ended up deciding to go into FAI research instead of long-term activism or outreach.
I haven't made any commitments, but I'm not currently doing much FAI research.
I would be very interested in reading about your timeline opinions after you play with the Uncertain Future Web App which was created to help clarify some of these timing issues. Also, the software behind the site was open sourced a few months ago so if you are frustrated by that tool it might be possible to improve it based on feedback.
I didn't find the app very helpful in refining my estimate. There are too many particular ingredients (especially: how hard is AI? How well do you have to simulate a brain?) with incredible uncertainty. Not coincidentally, this is the same reason that I can't come up with a good estimate on my own.
In an interview with John Baez, Eliezer addressed a similar question; he was in part speaking to the tradeoff between environmental work and work on technology related to AGI or other existential risks. In this context I agree with his position.
But more broadly, as a person setting out into the world and deciding what I should do with each moment, the question about timescales is one of the most important issues bearing on my decision, and my uncertainty about it (coupled with the difficulty of acquiring evidence) is almost physically painful.
If AGI is likely in the next couple of decades (I am rather skeptical) then long-term activism or outreach are probably pointless. If AGI is not likely within this century (which also seems unlikely) then working on AGI is probably pointless.
I believe it is quite possible that I am smart enough to have a significant effect on the course of whatever field I participate in. I also believe I could have a significant impact on the number of altruistic rationalists in the world. It seems likely that one of these options is way better than the other, and spending some time figuring out which one (and answering related, more specific questions) seems important. One of the most important ingredients in that calculation is a question of timescales. I don't trust the opinion of anyone involved with the SIAI. I don't trust the opinion of anyone in the mainstream. (In both cases I am happy to update on evidence they provide.) I don't have any good ideas on how to improve my estimate, but it feels like I should be able to.
I encounter relatively smart people giving estimates completely out of line with mine which would radically alter my behavior if I believed them. What argument have I not thought through? What evidence have I not seen? I like to believe that smart, rational people don't disagree too dramatically about questions of fact that they have huge stakes in. General confusion about AI was fine when I had it walled off in a corner of my brain with other abstruse speculation, but now that the question matters to me my uncertainty seems more dire.