All of multifoliaterose's Comments + Replies

However, when I take a "disinterested altruism" point of view x-risk looms large: I would rather bring 100 trillion fantastic lives into being than improve the quality of life of a single malaria patient.

What's your break-even point for "bring 100 trillion fantastic lives into being with probability p" vs. "improve the quality of life of a single malaria patient", and why?

0CarlShulman
It depends on the context (probability distribution over number and locations and types of lives), with various complications I didn't want to get into in a short comment. Here's a different way of phrasing things: if I could trade off probability p1 of increasing the income of everyone alive today (but not providing lasting benefits into the far future) to at least $1,000 per annum with basic Western medicine for control of infectious disease, against probability p2 of a great long-term posthuman future with colonization, I would prefer p2 even if it was many times smaller than p1. Note that those in absolute poverty are a minority of current people, a tiny minority of the people who have lived on Earth so far, their life expectancy is a large fraction of that of the rich, and so forth.
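One way to make the implied break-even condition explicit (my paraphrase of the trade-off, not Shulman's own formalism): prefer the p2 gamble whenever p2 × EU(long-term posthuman future) > p1 × EU(income boost with no lasting far-future benefits), i.e. whenever EU(posthuman future) / EU(income boost) > p1 / p2. Saying that p2 can be many times smaller than p1 is just the claim that this value ratio is very large.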

The occasional contrarians who mount fundamental criticism do this with a tacit understanding that they've destroyed their career prospects in academia and closely connected institutions, and they are safely ignored or laughed off as crackpots by the mainstream. (To give a concrete example, large parts of economics clearly fit this description.)

I don't find this example concrete. I know very little about economics ideology. Can you give more specific examples?

It seems almost certain that nuclear winter is not an existential risk in and of itself, but it could precipitate a civilizational collapse from which it's impossible to recover (e.g. because we've already depleted too much of the low-hanging natural resource supply). This seems quite unlikely; maybe the chance conditional on nuclear winter is between 1 and 10 percent. Given that governments already consider nuclear war to be a national security threat, and that the probability seems much lower than that of x-risk due to future technologies, it seems best to focus on other things. Even if nothing direct can be done about x-risk from future technologies, movement building seems better than nuclear risk reduction.

0torekp
Also, while civilization is on the ropes, humanity could be taken out by a large asteroid, supervolcano, or other natural disaster.

So part of what I think is going on here is that giving to statistical charity is a slippery slope. There is no one number that it's consistent to give: if I give $10 to fight malaria, one could reasonably ask why I didn't give $100; if I give $100, why not $1000; and if $1000, why not every spare cent I make? Usually when we're on a slippery slope like this, we look for a Schelling point, but there are only two good Schelling points here: zero and every spare cent for the rest of your life. Since most people won't donate every spare cent, they stick to

... (read more)
0handoflixue
"On the flip side I've found that donating gives the same pleasure that buying something does: a sense of empowerment." Hmmm, useful to know. I may have to experiment with this one. I often end up buying stuff simply because the act of purchasing things makes me feel better, and I can't see any reason a small donation to charity wouldn't produce similar results...

Cue: Non-contingency of my arguments (such that the same argument could be applied to argue for conclusions which I disagree with).

Bob: "We shouldn't do question three this way; you only think so because you're a bad writer". My mouth/brain: "No, we should definitely do question three this way! [because I totally don't want to think I'm a bad writer]"

It's probably generically the case that the likelihood of rationalization increases with the contextual cue of a slight. But one usually isn't aware of this in real time.

I find this comment vague and abstract, do you have examples in mind?

GiveWell itself (it directs multiple dollars to its top charities for each dollar invested in it, as far as I can see, and powers the growth of an effective philanthropy movement with broader implications).

There's an issue of room for more funding.

Some research in the model of Poverty Action Lab.

What information do we have from Poverty Action Lab that we wouldn't have otherwise? (This is not intended as a rhetorical question; I don't know much about what Poverty Action Lab has done).

A portfolio of somewhat outré endeavours like Paul Romer's Charter Cities

... (read more)

Saying that something is 'obvious' can provide useful information to the listener of the form "If you think about this for a few minutes you'll see why this is true; this stands in contrast with some of the things that I'm talking about today." Or even "though you may not understand why this is true, for experts who are deeply immersed in this theory this part appears to be straightforward."

I personally wish that textbooks more often highlighted the essential points over those theorems that follow from a standard method that the reader is... (read more)

Do you know of anyone who tried and quit?

No, I don't. This thread touches on important issues which warrant fuller discussion; I'll mull them over and might post more detailed thoughts on the discussion board later on.

(People rarely exhibit long-term planning to acquire social status any more than we/they exhibit long-term planning to acquire health. E.g., most unhappily single folk do not systematically practice their social skills unless this is encouraged by their local social environment.)

Is lack of social skills typically the factor that prevents unhappily single folk from finding relationships? Surely this is true in some cases but I would be surprised to learn that it's generic.

I strongly endorse your second and fourth points; thanks for posting this. They're related to Yvain's post Would Your Real Preferences Please Stand Up?.

The only problem here is charity: I do think it may be morally important to be ambitious in helping others, which might even include taking a lucrative career in order to give money to charity. This is especially true if the Singularity memeplex is right and we're living in a desperate time that calls for a desperate effort. See for example Giving What You Can's powerpoint on ethical careers. At some point you need to balance how much good you want to do, with how likely you are to succeed in a career, with how miserable you want to make yourself - and at

... (read more)
5Scott Alexander
I don't know a single example of somebody who chose a career substantially less enjoyable than what they would otherwise have been doing in order to help people in an efficient utilitarian way, full stop. I know juliawise was considering it, but I don't know what happened. Do you know of anyone who tried and quit?

(a) My experience with the sociology of academia has been very much in line with what Lukeprog's friend, Shminux, and RolfAndreassen describe. This is the culture that I was coming from in writing my post titled Existential Risk and Public Relations. Retrospectively, I realize that the modesty norm is unusually strong in academia and to that extent I was off-base in my criticism.

The modesty norms have some advantages and disadvantages. I think that it's appropriate for even the best people to take the view "I'm part of a vast undertaking; if I hadn't gotte... (read more)

9XiXiDu
I agree with this. I probably would have never voiced any skepticism/criticism if most SI/LW folks were more like Holden Karnofsky, Carl Shulman, Nick Bostrom or cousin_it.

But what's the purported effect size?

I know Bach's music quite well from a listener's perspective though not from a theoretician's perspective. I'd be happy to share recordings of some pieces that I've enjoyed / have found accessible.

Your last paragraph is obscure to me and I share your impression that you started to ramble :-).

I wasn't opening up discussion of the book so much as inquiring why you find the fact that you cite interesting.

-1Craig_Heldreth
Fair question, but not an easy one to answer. I signed up for the reading group along with the 2600 Redditors. It was previously posted about here. The book is an entry point to issues of Artificial Intelligence, consciousness, cognitive biases and other subjects which interest me. I enjoy the book every time I read from it, but I believe I am missing something which could be provided in a group reading or a group study. As I stated in the previous thread, I am challenged by the musical references. The last time I read music notation routinely was when I sang in a choir in middle school; many of the Bach references and other music references to terms such as fugue, canon, fifths & thirds, &c are difficult for me to grasp. If one of those 2600 redditors felt moved to build some youtube tutorials with a bouncing ball along and atop the Bach scores illustrating Hofstadter's arguments, then I presume many others besides myself would enjoy seeing them. Have you seen that Feynman video where he says he usually dislikes answering "why" questions? If not that, perhaps that Louis C. K. standup routine where he talks about his daughter asking "why?" It is a discussion prompt but it often does not point to anywhere. I have that feeling now that I am rambling.

Why do you bring this up?

For what it's worth, my impression is that while there exist people who have genuinely benefited from the book, a very large majority of the interest expressed in the book is almost purely signaling.

1Craig_Heldreth
It would be easier to discuss the merits (or lack) of the book if you specify something about the book you believe lacks merit. The opinion that the book is overly hyped is a common criticism, but is too vague to be refuted. It was a bestseller. Of course many of those people who bought it are silly.

You didn't address my criticism of the question about provably friendly AI, nor my point about the researchers lacking relevant context for thinking about AI risk. Again, the issues that I point to seem to make the researchers' responses to the questions about friendliness & existential risk due to AI carry little information.

6XiXiDu
I rephrased the question now: Q5: How important is it to research risks associated with artificial intelligence that is good enough at general reasoning (including science, mathematics, engineering and programming) to be capable of radical self-modification, before attempting to build one?
1XiXiDu
That doesn't matter too much. Their interpretation of the question is the interesting part, and that they are introduced to the concept is the important part. All of them will be able to make sense of the idea of AI that is non-dangerous. And besides, after they answered the questions I asked them about "friendly AI", and a lot of them are familiar with the idea of making AIs non-dangerous; some have even heard about SI. Here is one reply I got: And regarding your other objection, Many seem to think that AIs won't have any drives and that it is actually part of the problem to give them the incentive to do anything that they haven't been explicitly programmed to do. But maybe I am the wrong person with respect to Omohundro's paper. I find the arguments in it unconvincing. And if I, someone who takes risks from AI much more seriously, think that way, then I doubt that even if they knew about the paper it would change their opinion. The problem is that the paper tries to base its arguments on assumptions about the nature of hypothetical AGI. But just like AI won't automatically share our complex values, it won't value self-protection/improvement or rational economic behavior. In short, the paper is tautological. If your premise is an expected utility-maximizer capable of undergoing explosive recursive self-improvement that tries to take every goal to its logical extreme, whether that is part of the specifications or not, then you already answered your own question, and arguing about drives becomes completely useless. You should talk to wallowinmaya, who is soon going to start his own interview series.

I find some of your issues with the piece legitimate, but stand by my characterization of the most serious existential threat from AI as being of the type described therein.

The whole of question 3 seems problematic to me.

Concerning parts (a) and (b), I doubt that researchers will know what you have in mind by "provably friendly." For that matter, I myself don't know what you have in mind by "provably friendly" despite having read a number of relevant posts on Less Wrong.

Concerning part (c), I doubt that experts are thinking in terms of the money needed to possibly mitigate AI risks at all; presumably in most cases, if they saw this as a high-priority and tractable issue they would have written about it already.

7Antisuji
Not only that, 3b seems to presuppose that the only dangerous AI is a recursively self-improving one.

To illustrate the fact that the value of goods is determined by their scarcity/abundance relative to demand?

0James_Miller
Yes

I don't see the relevance of your response to my question; care to elaborate?

1James_Miller
Sorry. Fine I think. It happens very quickly unlike later in the semester when I insist that a student trade me her jewelry for a glass of water.

I generally agree with paulfchristiano here. Regarding Q2, Q5 and Q6 I'll note that, aside from Nils Nilsson, the researchers in question do not appear to be familiar with the most serious existential risk from AGI: the one discussed in Omohundro's The Basic AI Drives. Researchers without this background context are unlikely to deliver informative answers on Q2, Q5 and Q6.

2red75
What bothers me in The Basic AI Drives is a complete lack of quantitativeness. The temporal discount rate isn't even mentioned. There's no analysis of the self-improvement/getting-things-done tradeoff. The influence of the explicit/implicit utility function dichotomy on self-improvement isn't considered.

I was thinking over a dramatically cheap mosquito-zapping laser (putting as much of the complexity as possible into software rather than high-precision hardware).

I don't understand this sentence. Is this something that you were contemplating doing personally? The Gates Foundation has already funded such a project.

I can't say I care a whole ton though - it's not my fault the world is naturally a hell-hole.

I agree with the second clause but don't think that it has a great deal to do with the first clause. Most people would upon being confronted by a sabertooth t... (read more)

0Dmytry
The Gates foundation project for mosquito zapping AFAIK started in 2007 and still didn't get anywhere practical despite more than sufficient investment. http://en.wikipedia.org/wiki/Mosquito_laser#Implementations It's not so simple to make something that actually works and is cheap to build. In principle it can be as cheap as a DVD writer, but look, you won't have the same effort put into this as the DVD writer had. It's not so much about money as about having someone who'll actually work on the device productively. In programming there is this phenomenon: good programmers are 5..10x more productive than average. Regarding whose fault the world is, I meant that if it was my fault that other people are in a hellhole, I would care more. For an extreme example, consider some alien species that are going extinct because they are extremely nasty to each other; they all are, not a single one is nice. In this case it's basically 'their' fault. With regards to the positive feeling-only motivated beings... how do you impose a soft limit on joint movement, for example, upon damage of that joint? Make that being ecstatically happy except when the joint is off limits? I don't think this scales to all types of 'software' restrictions that need to be imposed for survival. And even if it does, that state must be possible for evolution to arrive at, and must be reasonably stable when the species evolves further. Right now if someone is mutated not to feel pain, that person typically dies young even with all the intervention and help.
1juliawise
The Happiness Hypothesis, a book I recommend, addresses this. A long string of good decisions/rewards will keep us alive, while a single bad decision/punishment will kill us. E.g. if you do eat that mushroom it might kill you, but if you don't you probably won't starve right away and can probably find something else to eat. Thus avoiding pain is a better policy than pursuing rewards. However, we're smart and can decide when to override this instinct. For example, my reptile brain just tells me to save all my resources for me and mine. But my more rational mind tells me how much I have in my bank account and that I can afford to help strangers.

How does the person singled out react?

1James_Miller
I ask if anyone is wearing gold jewelry.

I didn't downvote you but I suspect that the reason for the downvotes is the combination of your claim appearing dubious and the absence of a supporting argument.

0Shmi
Thanks!

once people pass a certain intelligence level

This seems crucial to me; you're really talking about a few percent of the population, right?

Also, I'll note that when (even very smart) people are motivated to believe in the existence of a phenomenon, they're apt to attribute causal structure to correlated data.

For example: It's common wisdom among math teachers that precalculus is important preparation for calculus. Surely taking precalculus has some positive impact on calculus performance, but I would guess that this impact is swamped by preexisting variance in mathematical ability/preparation.
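To illustrate the kind of confound I have in mind, here is a small simulation sketch (purely hypothetical numbers, not data about real students): even if the course itself adds only a little, the students who take precalculus differ in aptitude from those who don't, and the raw gap in calculus scores mostly reflects that.

import random

random.seed(0)
with_precalc, without_precalc = [], []
for _ in range(10000):
    aptitude = random.gauss(0, 1)                      # preexisting ability/preparation
    took_precalc = aptitude + random.gauss(0, 1) > 0   # stronger students enroll more often
    course_effect = 0.1 if took_precalc else 0.0       # small assumed causal effect of the course
    score = aptitude + course_effect + random.gauss(0, 1)
    (with_precalc if took_precalc else without_precalc).append(score)

mean = lambda xs: sum(xs) / len(xs)
print("mean calculus score with precalc:", round(mean(with_precalc), 2))
print("mean calculus score without precalc:", round(mean(without_precalc), 2))
# The observed gap is far larger than the 0.1 causal effect built in above;
# most of it comes from who takes the course, not from the course itself.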

Huh? I didn't mean opportunity cost, but simply that successful neuromorphic AI destroys the world. Staging a global catastrophe does have lower expected value than protecting from global catastrophe (with whatever probabilities), but also lower expected value than watching TV.

I was saying that it could be that with more information we would find that

0 < EU(Friendly AI research) < EU(Pushing for relatively safe neuromorphic AI) < EU(Successful construction of a Friendly AI).

even if there's a high chance that relatively safe neuromorphic AI woul... (read more)

4Vladimir_Nesov
I expect it would; even a human whose brain was meddled with to make it more intelligent is probably a very bad idea, unless this modified human builds a modified-human-Friendly-AI (in which case some value drift would probably be worth protection from existential risk) or, even better, a useful FAI theory elicited Oracle AI-style. The crucial question here is the character of FOOMing, how much of initial value is retained.

I believe it won't be "less valuable", but instead would directly cause existential catastrophe, if successful.

I meant in expected value.

As Anna mentioned in one of her Google AGI talks, there's the possibility of an AGI being willing to trade with humans to avoid a small probability of being destroyed by humans (though I concede that it's not at all clear how one would create an enforceable agreement). Also, a neuromorphic AI could be not so far from a WBE. Do you think that whole brain emulation would directly cause existential catastrophe?

9Vladimir_Nesov
Huh? I didn't mean opportunity cost, but simply that successful neuromorphic AI destroys the world. Staging a global catastrophe does have lower expected value than protecting from global catastrophe (with whatever probabilities), but also lower expected value than watching TV. Indirectly, but with influence that compresses expected time-to-catastrophe after the tech starts working from decades-centuries to years (decades if WBE tech comes early and only slow or few uploads can be supported initially). It's not all lost at that point, since WBEs could do some FAI research, and would be in a better position to actually implement a FAI and think longer about it, but the ease of producing a UFAI would go way up (directly, by physically faster research of AGI, or by experimenting with variations on human brains or optimization processes built out of WBEs). The main thing that distinguishes WBEs is that they are still initially human, still have the same values. All other tech breaks values, and giving it power makes humane values lose the world.

Believing a problem intractable isn't a step towards solving the problem. It might be correct to downgrade your confidence in a problem being solvable, but that isn't in itself a useful thing if the goal remains motivated.

I agree, but it may be appropriate to be more modest in aim (e.g. by pushing for neuromorphic AI with some built-in safety precautions even if achieving this outcome is much less valuable than creating a Friendly AI would be).

4Vladimir_Nesov
I believe it won't be "less valuable", but instead would directly cause existential catastrophe, if successful. Feasibility of solving FAI doesn't enter into this judgment.

Luke: I appreciate your transparency and clear communication regarding SingInst.

The main reason that I remain reluctant to donate to SingInst is that I find your answer (and the answers of other SingInst affiliates who I've talked with) to the question about Friendly AI subproblems to be unsatisfactory. Based on what I know at present, subproblems of the type that you mention are way too vague for it to be possible for even the best researchers to make progress on them.

My general impression is that the SingInst staff have insufficient exposure to technical... (read more)

110lukeprog

I find your answer... to the question about Friendly AI subproblems to be unsatisfactory. Based on what I know at present, subproblems of the type that you mention are way too vague for it to be possible for even the best researchers to make progress on them.

No doubt, a one-paragraph list of sub-problems written in English is "unsatisfactory." That's why we would "really like to write up explanations of these problems in all their technical detail."

But it's not true that the problems are too vague to make progress on them. For exampl... (read more)

3Vladimir_Nesov
Believing a problem intractable isn't a step towards solving the problem. It might be correct to downgrade your confidence in a problem being solvable, but that isn't in itself a useful thing if the goal remains motivated. It mostly serves as an indication of epistemic rationality, if indeed the problem is less tractable than believed, or perhaps it could be a useful strategic consideration. Noticing that the current approach is worse than an alternative (i.e. open problems are harder to communicate than expected, but what's the better alternative that makes it possible to use this piece of better understanding?), or noticing a particular error in present beliefs, is much more useful.

One doesn't need to know that hundreds of people have been influenced to know that Eliezer's writings have had x-risk reduction value; if he's succeeded in getting a handful of people seriously interested in x-risk reduction relative to the counterfactual, his work is of high value. Based on my conversations with those who have been so influenced, this last point seems plausible to me. But I agree that the importance of the sequences for x-risk reduction has been overplayed.

The company could generate profit to help fund SingInst and give evidence that the rationality techniques that Vassar, etc. use work in a context with real-world feedback. This in turn could give evidence of them being useful in the context of x-risk reduction, where empirical feedback is not available.

3curiousepic
Does anyone know if this is the intention?

I misread your earlier comment, sorry for the useless response. I understand where you're coming from now. Holden has written about the possibility of efficient opportunities for donors drying up as the philanthropic sector improves, suggesting that it might be best to help now because the poor people who can be easily helped are around today and will not be in the future. See this mailing list post.

I personally think that even if this is probably true, the expected value of waiting to give later is higher than the expected value of donating to AMF o... (read more)

2DanielLC
The response wasn't useless. If you misread it, you're probably not alone. Now that I replied to your comment, it's easier to understand. You're probably best off giving now once you understand everything, but you obviously can't just give to the first charity you can find. You have to do enough research that the expected decrease in utility from donating later exactly balances the expected increase from better research.

If you know that you can donate to SCI later, the expected utility of waiting would have to be at least that of donating to it now.

Why? Because you can invest the money and use the investment to donate more later? But donating more now increases the recipients' functionality so that they're able to contribute more to their respective societies than they would otherwise be able to in the time between now and later.

2DanielLC
If you can donate it to SCI later, the value of that donation gives a lower bound. You also might be able to donate to something else you find.

It seems very unlikely to me that the expected value of donating to SCI is precisely between 1/2 and 1 times as high as the best alternative.

0DanielLC
If you know that you can donate to SCI later, the expected utility of waiting would have to be at least that of donating to it now. Certain things can cause it to decrease a little. If there's a small probability of finding something better, it would increase it slightly beyond this. If it's slight enough, it's less than twice as good.

I don't understand your question; are you wondering whether you should give through the donation-matching pledge or about whether you should give to AMF or SCI at all?

1DanielLC
No, I'm wondering exactly how much more valuable the donation-matching pledge is than giving directly. If you'd rather do something else with the money than donate it directly (such as hold on to it to see if you can find something better), but you don't think that would be twice as good as giving directly, it would be important to know exactly how useful the donation match is.
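To make the arithmetic behind the "twice as good" threshold concrete (illustrative figures only; the actual match ratio isn't stated here): if the pledge matches donations 1:1, then $100 given through it now moves $200 worth of donations to the charity, whereas holding the $100 and donating it later, unmatched, delivers $100 times whatever cost-effectiveness multiplier the better opportunity turns out to have. Waiting only wins if that multiplier exceeds 2, i.e. if the later option is more than twice as good per dollar.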
6Vladimir_M
In this context, be sure to check out my comment about Max Stirner too. He was a much less prolific writer than Nietzsche, nowhere as good a stylist, and is nowadays far less known than him, but in my opinion, he is nevertheless the more interesting badass German philosopher.

Embryo selection for better scientists. At age 8, Terrence Tao scored 760 on the math SAT, one of only [2?3?] children ever to do this at such an age; he later went on to [have a lot of impact on math]. Studies of similar kids convince researchers that there is a large “aptitude” component to mathematical achievement, even at the high end. How rapidly would mathematics or AI progress if we could create hundreds of thousands of Terrence Tao’s?

Though I think I agree with the general point that you're trying to make here (that there's a large "aptitude... (read more)

While I wouldn't say whole brain emulation could never happen, this looks to me like it is a very long way out, probably hundreds of years.

Does this assessment take into account the possibility of intermediate acceleration of human cognition?

1jefftk
It doesn't.

You might point him to the High Impact Careers Network. There's not much on the website right now, but the principals have been doing in-depth investigation of the prospects for doing good in various careers and might well be inclined to share draft materials with your friend.

Thanks for the excellent response. I'm familiar with much of the content but you've phrased it especially eloquently.

Agree with most of what you say here.

If technological progress is halted completely this won't be a problem.

No, if technological progress is halted completely then we'll never be able to become transhumans. From a certain perspective this is almost as bad as going extinct.

The question as phrased also emphasizes climate change rather than other issues. In the case of such a nuclear war, there would be many other negative results. India is a major economy at this point and such a war would result in large-scale economic problems throughout.

The Robock... (read more)

7JoshuaZ
A halt to technological progress would be temporary. Would you rather have a twenty year halt on new technologies or a massive rush of new technologies that end up destroying everyone? There are a variety of different factors going on here. The immediate deaths are one problem. Subsequent further deaths will also occur among the refugee populations and will spread disease and the like. The resulting panic will also create economic damage. Most of the people who would starve due directly to the climate change are people in areas like sub-Saharan Africa who don't have that major a role in the world economy. Their deaths would have a comparatively small impact on the world-wide economy. Bostrom has mentioned doing this sort of thing in detail before I think, but if he has done it I haven't seen the result. There are a variety of different factors that would go in. One obvious thing is that even as we have a fair bit of oil and other fossil fuels left, they are in much harder to reach locations. They are generally deeper in the ground, or farther out to sea, or simply harder to extract. So looking at the total reserves will underestimate the total problem. One related issue that would really need to be examined in detail would be the issue of metals. We've mined a large part of the world's metal reserves. But for some of those, this is actually a good thing if a collapse occurs. Aluminum for example is very hard to extract from ores (it was at one point in the 19th century more expensive than gold). But although the technology to extract aluminum from ores is difficult, the technology to process pre-existing aluminum is much easier and we've helpfully left large quantities of it just lying around. I agree that nuclear war is one of the more likely scenarios. The asteroid possibility is also much less of a worry now than it was a few years ago since WISE is now tracking most large near Earth asteroids and it looks likely that none are in really bad orbits for a few ye

The lack of ICBM capacity for either side makes nuclear weapons in the hands of Pakistan and India effective as MAD deterrence, due to the simple fact that any use of such weapons is likely to be nearly as destructive to their own side as it would be to the enemy.

Can you substantiate this claim?

3Logos01
... that non-ICBM nuclear weapons would be 'nearly as destructive' to the user as to the enemy, for geographically adjacent nations? India has roughly 80-100 weapons. India has been focusing on low-yield devices (with most apparently falling in the half-kiloton range.) Given how nuclear fallout works and the like... India and Pakistan launching their arsenals at one another would result in a large contaminated area affecting both nations. Now, if you're talking about the effectiveness of MAD deterrence ... if MAD were ineffective, the world would have glowed in the dark after the Cuban Missile Crisis.