All of Ludwig's Comments + Replies

Ludwig10

Interesting, I see what you mean regarding probability, and it makes sense. Perhaps what is missing is that when it comes to questions of people's lives we may have a stronger imperative to be more risk-averse.

I completely agree with you about effect size. What I would say is that, given my point 1 from earlier about the variety of X-risks coordination would contribute to solving, the effect size will always be greater. If we want to maximise utility, it's the best chance we have. The added bonuses are that it is comparativ... (read more)

1Jay Bailey
Being risk-averse around people's lives is only a good strategy when you're trading off against something else that isn't human lives. If you have the choice to save 400 lives with certainty, or a 90% chance to save 500 lives, choosing the former is essentially condemning 50 people to death. At that point, you're just behaving suboptimally.

Being risk-averse works if you're trading off other things. E.g., if you could release a new car now that you're almost certain is safe, you might be risk-averse and call for more tests. As long as people won't die from you delaying this car, being risk-averse is a reasonable strategy here.

Given your Point 1 from earlier, there is no reason to expect the effect size will always be greater. If the effect on reducing X-risks from co-ordination becomes small enough, or the risk of a particular X-risk becomes large enough, this changes the equation. If you believe, like many in this forum do, that AI represents the lion's share of X-risk, focusing on AI directly is probably more effective. If you believe that x-risk is diversified, that there's some chance from AI, some from pandemics, some from nuclear war, some from climate change, etc., then co-ordination makes more sense. Co-ordination has a small effect on all x-risks; direct work has a larger effect on a single x-risk.

The point I'm trying to make here is this. There are perfectly reasonable states of the world where "Improve co-ordination" is the best action to take to reduce x-risk. There are also perfectly reasonable states of the world where "Work directly on <Risk A>" is the best action to take to reduce x-risk. You won't be able to find out which is which if you believe one is "always" the case.

What I would suggest is to ask "What would cause me to change my mind and believe improving co-ordination is NOT the best way to work on x-risk", and then seek out whether those things are true or not. If you don't believe they are, great, that's fine. That said, it wouldn't b
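The expected-value arithmetic behind the rescue example above can be made explicit; here is a minimal sketch, using only the illustrative numbers from the comment:

```python
# Expected lives saved under each option in the rescue example above.
# The figures (400, 500, 90%) are the comment's illustrative numbers.
certain_option = 400          # save 400 lives with certainty
risky_option = 0.90 * 500     # 90% chance of saving 500 lives -> 450 expected

print(f"Certain: {certain_option}, Risky (expected): {risky_option}")
# Choosing the certain option gives up 50 expected lives, which is the
# sense in which pure risk-aversion is "condemning 50 people to death".
```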
Ludwig10

Interesting, yes, I am interested in coordination problems. Let me follow this framework to make a better case. There are three considerations I would like to point out.

  1. The utility in addressing coordination problems is that they affect almost all X-risk scenarios (nuclear war, bioterror, pandemics, climate change and AGI). Working on coordination problems reduces not only current suffering but also X-risk of both AGI and non-AGI kinds.
  2. The difference between a 10% chance of something happening that may be an X-risk in 100 years is not 10 times less than
... (read more)
1Jay Bailey
I agree with you on the first point completely. As for Point 2, you can absolutely compare a certainty and a probability. If I offered you a certainty of $10, or a 10% chance of $1,000,000, would you take the $10 because you can't compare certainties and probabilities, and I'm only ever going to offer you the deal once?

That then brings me to question 3. The button I would press would be the one that reduces total X-risk the most. If both buttons reduced X-risk by 1%, I would press the 100% one. If the 100% button reduced X-risk by 0.1%, and the 10% one reduced X-risk by 10%, I would pick the second one, for an expected value of 1% X-risk reduction.

You have to take the effect size into account. We can disagree on what the effect sizes are, but you still need to consider them.
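The button comparison works the same way; here is a minimal sketch of the expected-value calculation, again using only the comment's illustrative percentages:

```python
# Expected x-risk reduction for the two hypothetical buttons above.
# Percentages are the comment's illustrative numbers, not real estimates.
certain_button = 1.00 * 0.001   # 100% chance of a 0.1% reduction -> 0.1%
risky_button = 0.10 * 0.10      # 10% chance of a 10% reduction   -> 1.0%

print(f"Certain button: {certain_button:.3f}, Risky button: {risky_button:.3f}")
# The uncertain button is worth ten times as much in expectation, which is
# why effect size has to enter the comparison alongside probability.
```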
Ludwig10

I understand and appreciate your discussion. I wonder if perhaps what we could consider is that it may be more morally imperative to work on AI safety because of the hugely impactful problems AI is contributing to right now, if we assume that in finding solutions to these current and near-term AI problems we would also be lowering AGI X-risk (albeit indirectly).

Given that the likelihood of narrow AI risk is 1 and the likelihood of AGI in the next 10 years is (as in your example) <0.1, it seems obvious we should focus on addressing... (read more)

2Jay Bailey
If you consider these coordination problems to be equivalent to x-risk level consequences, then it makes sense to work on aligning narrow AI - for instance, if you think there's a 10% chance of AGI x-risk this century and current problems are 10% as bad as human extinction. After all, working on aligning narrow AI is probably more tractable than working on aligning the hypothetical AGI systems of the future. You are also right that aligning narrow AI may help align AGI in the future - it is, at the very least, unlikely to hurt.

Personally, I don't think the current problems are anything close to "10% as bad as human extinction", but you may disagree with me on this. I'm not very knowledgeable about politics, which is the field I would look into to try and identify the harms caused by our current degradation of coordination, so I'm not going to try to convince you of anything in that field - more trying to provide a framework with which to look at potential problems.

So, basically, I would look at it as: which is higher? The chance of human extinction from AGI times the consequences? Or the x-risk reduction from aligning narrow AI, plus the positive utility of solving our current problems today? I believe the former, so I think AGI is more important. If you believe the latter, aligning narrow AI is more important.
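The framework in that last paragraph can be written out as a simple comparison; the sketch below uses placeholder numbers (none of these values come from the thread, they only show the structure of the calculation):

```python
# Comparison framework from the comment above. All inputs are placeholder
# values for illustration; the thread's disagreement is over these inputs.
p_agi_extinction = 0.10        # hypothetical chance of AGI-caused extinction
extinction_badness = 1.0       # normalise "all humans dying" to 1 unit of loss

narrow_xrisk_cut = 0.01        # hypothetical x-risk reduction from aligning narrow AI
current_harm_value = 0.02      # hypothetical value of fixing today's AI-driven harms

agi_priority = p_agi_extinction * extinction_badness
narrow_priority = narrow_xrisk_cut * extinction_badness + current_harm_value

print(f"Work on AGI: {agi_priority:.2f}  Work on narrow AI: {narrow_priority:.2f}")
# Whichever side is larger is the priority under this framing.
```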
Ludwig70

Why should we throw immense resources at AGI x-risk when the world faces enormous issues with narrow AI right now (e.g. destabilised democracy, a mental health crisis, worsening inequality)?

Is it simply a matter of how imminent you think AGI is? Surely the opportunity cost is enormous, given the money and brainpower we are spending on AGI, something many don't even think is possible, versus something that is happening right now.

3Perhaps
In addition to what Jay Bailey said, the benefits of an aligned AGI are incredibly high, and if we successfully solved the alignment problem we could easily solve pretty much any other problem in the world (assuming you believe the "intelligence and nanotech can solve anything" argument). The danger of AGI is high, but the payoff is also very large.
7Jay Bailey
The standard answer here is that all humans dying is much, much worse than anything happening with narrow AI. Not to say those problems are trivial, but humanity's extinction is an entirely different level of bad, so that's what we should be focusing on. This is even more true if you care about future generations, since human extinction is not just 7 billion dead, but the loss of all generations who could have come after.

I personally believe this argument holds even if we ascribe a relatively low probability to AGI in the relatively near future. E.g., if you think there's a 10% chance of AGI in the next 10-20 years, it still seems reasonable to prioritise AGI safety now.

If you think AGI isn't possible at all, naturally we don't need to worry about AI safety. But I find that pretty unconvincing - humanity has made a lot of progress very quickly in the field of AI capabilities, it shows no signs of slowing down, and there's no reason why such a machine could not exist in principle.