I haven't read your book, so I'm not sure if you have already answered this.
What is your assessment of MIRI's current opinion that increasing the global economic growth rate is a source of existential risk?
How much risk is increased for what increase in growth?
Are there safe paths? (Maybe catch-up growth in India and China is safe?)
Greater economic growth means more money for AI research from companies and governments, so if you think that AI will probably go wrong, this is a source of trouble. But there are benefits as well, including increased charitable contributions to organizations that reduce existential risk, and better educational systems in India and China, which might produce people who end up helping MIRI. Overall, I'm not sure how this nets out.
Catch-up growth is not necessarily safe, because it will increase the demand for products that use AI and so increase the amou...
If you want people to ask you stuff reply to this post with a comment to that effect.
More accurately, ask any participating LessWronger anything that falls within the category of questions they indicate they would answer.
If you want to talk about this post itself, you can reply to my comment below that says "Discussion of this post goes here." Or not.