The occasional contrarians who mount fundamental criticism do so with a tacit understanding that they've destroyed their career prospects in academia and closely connected institutions, and they are safely ignored or laughed off as crackpots by the mainstream. (To give a concrete example, large parts of economics clearly fit this description.)
I don't find this example concrete. I know very little about economics ideology. Can you give more specific examples?
It seems almost certain that nuclear winter is not an existential risk in and of itself, but it could precipitate a civilizational collapse from which it's impossible to recover (e.g. because we've already depleted too much of the low-hanging natural resource supply). This seems quite unlikely; maybe the chance conditional on nuclear winter is between 1 and 10 percent. Given that governments already consider nuclear war to be a national security threat and that the probability seems much lower than x-risk due to future technologies, it seems best to focus on other things. Even if nothing direct can be done about x-risk from future technologies, movement building seems better than nuclear risk reduction.
...So part of what I think is going on here is that giving to statistical charity is a slippery slope. There is no one number that it's consistent to give: if I give $10 to fight malaria, one could reasonably ask why I didn't give $100; if I give $100, why not $1000; and if $1000, why not every spare cent I make? Usually when we're on a slippery slope like this, we look for a Schelling point, but there are only two good Schelling points here: zero and every spare cent for the rest of your life. Since most people won't donate every spare cent, they stick to zero.
Cue: Non-contingency of my arguments (such that the same argument could be applied to argue for conclusions which I disagree with).
Bob: "We shouldn't do question three this way; you only think so because you're a bad writer". My mouth/brain: "No, we should definitely do question three this way! [because I totally don't want to think I'm a bad writer]"
It's probably generically the case that the likelihood of rationalization increases with the contextual cue of a slight. But one usually isn't aware of this in real time.
I find this comment vague and abstract; do you have examples in mind?
GiveWell itself (it directs multiple dollars to its top charities per dollar invested, as far as I can see, and powers the growth of an effective philanthropy movement with broader implications).
There's an issue of room for more funding.
Some research in the mold of the Poverty Action Lab.
What information do we have from Poverty Action Lab that we wouldn't have otherwise? (This is not intended as a rhetorical question; I don't know much about what Poverty Action Lab has done).
...A portfolio of somewhat outre endeavours like Paul Romer's Charter Cities
Saying that something is 'obvious' can provide useful information to the listener of the form "If you think about this for a few minutes you'll see why this is true; this stands in contrast with some of the things that I'm talking about today." Or even "though you may not understand why this is true, for experts who are deeply immersed in this theory this part appears to be straightforward."
I personally wish that textbooks more often highlighted the essential points over those theorems that follow from a standard method that the reader is...
Do you know of anyone who tried and quit?
No, I don't. This thread touches on important issues which warrant fuller discussion; I'll mull them over and might post more detailed thoughts on the discussion board later on.
(People rarely exhibit long-term planning to acquire social status, any more than they exhibit long-term planning to acquire health. E.g., most unhappily single folk do not systematically practice their social skills unless this is encouraged by their local social environment.)
Is lack of social skills typically the factor that prevents unhappily single folk from finding relationships? Surely this is true in some cases but I would be surprised to learn that it's generic.
I strongly endorse your second and fourth points; thanks for posting this. They're related to Yvain's post Would Your Real Preferences Please Stand Up?
...The only problem here is charity: I do think it may be morally important to be ambitious in helping others, which might even include taking a lucrative career in order to give money to charity. This is especially true if the Singularity memeplex is right and we're living in a desperate time that calls for a desperate effort. See for example Giving What You Can's powerpoint on ethical careers. At some point you need to balance how much good you want to do, with how likely you are to succeed in a career, with how miserable you want to make yourself - and at
(a) My experience with the sociology of academia has been very much in line with what Lukeprog's friend, Shminux, and RolfAndreassen describe. This is the culture that I was coming from in writing my post titled Existential Risk and Public Relations. In retrospect I realize that the modesty norm is unusually strong in academia, and to that extent I was off-base in my criticism.
The modesty norms have some advantages and disadvantages. I think that it's appropriate for even the best people to take the view "I'm part of a vast undertaking; if I hadn't gotte...
But what's the purported effect size?
I know Bach's music quite well from a listener's perspective though not from a theoretician's perspective. I'd be happy to share recordings of some pieces that I've enjoyed / have found accessible.
Your last paragraph is obscure to me and I share your impression that you started to ramble :-).
I wasn't opening up discussion of the book so much as inquiring why you find the fact that you cite interesting.
Why do you bring this up?
For what it's worth, my impression is that while there exist people who have genuinely benefited from the book, a very large majority of the interest expressed in the book is almost purely signaling.
I agree
Why are you asking?
You didn't address my criticism of the question about provably friendly AI, nor my point about the researchers lacking relevant context for thinking about AI risk. Again, the issues that I point to seem to make the researchers' responses to the questions about friendliness & existential risk due to AI carry little information.
I find some of your issues with the piece legitimate but stand by my characterization of the most serious existential threat from AI as being of the type described therein.
The whole of question 3 seems problematic to me.
Concerning parts (a) and (b), I doubt that researchers will know what you have in mind by "provably friendly." For that matter I myself don't know what you have in mind by "provably friendly," despite having read a number of relevant posts on Less Wrong.
Concerning part (c), I doubt that experts are thinking in terms of money needed to possibly mitigate AI risks at all; presumably in most cases if they saw this as a high-priority and tractable issue they would have written about it already.
To illustrate the fact that the value of goods is determined by their scarcity/abundance relative to demand?
I don't see the relevance of your response to my question; care to elaborate?
I generally agree with paulfchristiano here. Regarding Q2, Q5 and Q6 I'll note that, aside from Nils Nilsson, the researchers in question do not appear to be familiar with the most serious existential risk from AGI: the one discussed in Omohundro's The Basic AI Drives. Researchers without this background context are unlikely to deliver informative answers on Q2, Q5 and Q6.
I was thinking of a dramatically cheap mosquito-zapping laser (putting as much of the complexity as possible into software rather than high-precision hardware).
I don't understand this sentence. Is this something that you were contemplating doing personally? The Gates Foundation has already funded such a project.
I can't say I care a whole ton though - it's not my fault the world is naturally a hell-hole.
I agree with the second clause but don't think that it has a great deal to do with the first clause. Most people would upon being confronted by a sabertooth t...
How does the person singled out react?
I didn't downvote you but I suspect that the reason for the downvotes is the combination of your claim appearing dubious and the absence of a supporting argument.
once people pass a certain intelligence level
This seems crucial to me; you're really talking about a few percent of the population, right?
Also, I'll note that when (even very smart) people are motivated to believe in the existence of a phenomenon, they're apt to attribute causal structure to correlated data.
For example: It's common wisdom among math teachers that precalculus is important preparation for calculus. Surely taking precalculus has some positive impact on calculus performance, but I would guess that this impact is swamped by preexisting variance in mathematical ability/preparation.
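For concreteness, here's a minimal simulation sketch (Python, with made-up numbers; the variable names and effect sizes are purely hypothetical) of how a shared aptitude factor can produce a strong precalc/calc correlation even when taking precalculus has no direct effect at all:

```python
# Hypothetical illustration: correlation between precalc and calc scores
# arising entirely from a shared (unobserved) aptitude factor.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

aptitude = rng.normal(0.0, 1.0, n)            # preexisting mathematical ability
precalc = aptitude + rng.normal(0.0, 0.5, n)  # precalc score: aptitude + noise
calc = aptitude + rng.normal(0.0, 0.5, n)     # calc score: aptitude + noise,
                                              # with NO direct effect of precalc

# The naive correlation makes precalc look important for calculus...
print("corr(precalc, calc):", round(np.corrcoef(precalc, calc)[0, 1], 2))

# ...but after controlling for aptitude the association disappears.
precalc_resid = precalc - aptitude
calc_resid = calc - aptitude
print("corr controlling for aptitude:",
      round(np.corrcoef(precalc_resid, calc_resid)[0, 1], 2))
```

In real data we of course can't subtract out aptitude directly, which is exactly why the naive inference is so tempting.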
Huh? I didn't mean opportunity cost, but simply that successful neuromorphic AI destroys the world. Staging a global catastrophe does have lower expected value than protecting from global catastrophe (with whatever probabilities), but also lower expected value than watching TV.
I was saying that it could be that with more information we would find that
0 < EU(Friendly AI research) < EU(Pushing for relatively safe neuromorphic AI) < EU(Successful construction of a Friendly AI).
even if there's a high chance that relatively safe neuromorphic AI woul...
I believe it won't be "less valuable", but instead would directly cause existential catastrophe, if successful.
I meant in expected value.
As Anna mentioned in one of her Google AGI talks there's the possibility of an AGI being willing to trade with humans to avoid a small probability of being destroyed by humans (though I concede that it's not at all clear how one would create an enforceable agreement). Also a neuromorphic AI could be not so far from a WBE. Do you think that whole brain emulation would directly cause existential catastrophe?
Believing a problem intractable isn't a step towards solving it. It might be correct to downgrade your confidence in a problem being solvable, but that isn't in itself a useful thing if the goal remains motivated.
I agree, but it may be appropriate to be more modest in aim (e.g. by pushing for neuromorphic AI with some built-in safety precautions even if achieving this outcome is much less valuable than creating a Friendly AI would be).
Luke: I appreciate your transparency and clear communication regarding SingInst.
The main reason that I remain reluctant to donate to SingInst is that I find your answer (and the answers of other SingInst affiliates who I've talked with) to the question about Friendly AI subproblems to be unsatisfactory. Based on what I know at present, subproblems of the type that you mention are way too vague for it to be possible for even the best researchers to make progress on them.
My general impression is that the SingInst staff have insufficient exposure to technical...
I find your answer... to the question about Friendly AI subproblems to be unsatisfactory. Based on what I know at present, subproblems of the type that you mention are way too vague for it to be possible for even the best researchers to make progress on them.
No doubt, a one-paragraph list of sub-problems written in English is "unsatisfactory." That's why we would "really like to write up explanations of these problems in all their technical detail."
But it's not true that the problems are too vague to make progress on them. For exampl...
One doesn't need to know that hundreds of people have been influenced to know that Eliezer's writings have had x-risk reduction value; if he's succeeded in getting a handful of people seriously interested in x-risk reduction relative to the counterfactual, his work is of high value. Based on my conversations with those who have been so influenced, this last point seems plausible to me. But I agree that the importance of the sequences for x-risk reduction has been overplayed.
The company could generate profit to help fund SingInst and give evidence that the rationality techniques that Vassar, etc. use work in a context with real-world feedback. This in turn could give evidence of their being useful in the context of x-risk reduction, where empirical feedback is not available.
I misread your earlier comment, sorry for the useless response. I understand where you're coming from now. Holden has written about the possibility of efficient opportunities for donors drying up as the philanthropic sector improves, suggesting that it might be best to help now because the poor people who can be easily helped are around today and will not be in the future. See this mailing list post.
I personally think that even if this is probably true, the expected value of waiting to give later is higher than the expected value of donating to AMF o...
If you know that you can donate to SCI later, the expected utility of waiting would have to be at least that of donating to it now.
Why? Because you can invest the money and use the investment to donate more later? But donating more now increases the recipients' functionality, so that they're able to contribute more to their respective societies in the time between now and later than they would otherwise be able to.
It seems very unlikely to me that the expected value of donating to SCI is precisely between 1/2 and 1 times as high as that of the best alternative (roughly the range in which a match that doubles the donation would change which option is best).
I don't understand your question; are you wondering whether you should give through the donation-matching pledge or about whether you should give to AMF or SCI at all?
Embryo selection for better scientists. At age 8, Terence Tao scored 760 on the math SAT, one of only [2?3?] children ever to do this at such an age; he later went on to [have a lot of impact on math]. Studies of similar kids convince researchers that there is a large “aptitude” component to mathematical achievement, even at the high end. How rapidly would mathematics or AI progress if we could create hundreds of thousands of Terence Taos?
Though I think I agree with the general point that you're trying to make here (that there's a large "aptitude...
Characteristically Burkean.
While I wouldn't say whole brain emulation could never happen, this looks to me like it is a very long way out, probably hundreds of years.
Does this assessment take into account the possibility of intermediate acceleration of human cognition?
You might point him to the High Impact Careers Network. There's not much on the website right now but the principals have been doing in-depth investigation of the prospects for doing good in various careers and might well be inclined to share draft materials with your friend.
Me too!
Thanks for the excellent response. I'm familiar with much of the content but you've phrased it especially eloquently.
Agree with most of what you say here.
If technological progress is halted completely this won't be a problem.
No, if technological progress is halted completely then we'll never be able to become transhumans. From a certain perspective this is almost as bad as going extinct.
The question as phrased also emphasizes climate change rather than other issues. In the case of such a nuclear war, there would be many other negative results. India is a major economy at this point and such a war would result in large-scale economic problems worldwide.
The Robock...
The lack of ICBM capacity on either side makes nuclear weapons in the hands of Pakistan and India effective as MAD deterrence, due to the simple fact that any use of such weapons is likely to be nearly as destructive to their own side as it would be to the enemy.
Can you substantiate this claim?
Thanks to you too!
What's your break-even point for "bring 100 trillion fantastic lives into being with probability p" vs. "improve the quality of life of a single malaria patient", and why?
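One schematic way to set up the break-even computation (with $V_{\text{life}}$ and $V_{\text{patient}}$ as hypothetical placeholders for however one values the two outcomes; not a claim about what the right values are):

$$p^{*} \cdot 10^{14} \cdot V_{\text{life}} \;=\; V_{\text{patient}} \quad\Longrightarrow\quad p^{*} \;=\; \frac{V_{\text{patient}}}{10^{14}\, V_{\text{life}}}$$

The interesting part is how one justifies the ratio on the right-hand side.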