
Eliezer_Yudkowsky comments on Do Earths with slower economic growth have a better chance at FAI? - Less Wrong Discussion

30 points · Post author: Eliezer_Yudkowsky · 12 June 2013 07:54PM



Comment author: Eliezer_Yudkowsky 13 June 2013 07:18:22PM 4 points

Some Facebook discussion here, including Carl's opinion:

https://www.facebook.com/yudkowsky/posts/10151665252179228

Comment author: lukeprog 27 September 2013 02:15:27AM 1 point

I'm reposting Carl's Facebook comments to LW, for convenience. Carl's comments were:

Economic growth makes the world more peaceful and cooperative in multiple ways: it reduces the temptation to take big risks with dangerous technologies to get ahead, the risk of arms races, the mistrust that blocks international coordination, the chance of a nuclear war brutalizing society, and more. Link 1

Economic growth also makes people care more about long-term problems like global warming, and be more inclusive of and friendly towards foreigners and other neglected groups. Link 2

Then there's the fact that Moore's law is much faster than economic growth, and software spending is also growing as a share of the economy. So an overall stagnant economy does not mean stagnant technology.

Plus, the model of serial-intensive FAI action you are using to drive the benefits of moving early relies on a lot of extreme predictions relative to the distribution of expert opinion, without a good predictive track record to back them up, and with a plausible bias explanation. So, in the likely event that the model is not dispositive, other factors predominate: its effect is too small relative to all the other things that get affected.

[So] generally I think a uniform worldwide increase in per capita incomes improves global odds of good long-run futures.

Eliezer replied to Carl:

The most powerful mechanisms for this, in your model, are that (a) wealth transmits to international cooperation, which improves FAI vs. UFAI somehow, and (b) wealth transmits to concern about global tidiness, which you think successfully transmits more to FAI vs. UFAI? Neither of these forces has very much effectual power at all in my visualization - I wouldn't mention them in the same breath as Moore's Law or total funding for AI. They're both double-fragile mechanisms.

Comment author: John_Maxwell_IV 29 September 2013 02:12:52AM 0 points

It's worth noting that the relationship between economic growth and the expected quality of global outcomes is not necessarily linear. The optimal speed of economic growth may be neither super-slow nor super-fast, but some "just right" value in between: fast enough to make peace, cooperation, and long-term thinking commonplace, while avoiding technological advancement substantially faster than what we see today.

Comment author: NancyLebovitz 14 June 2013 01:24:32AM 0 points

The possibility of AI being developed to deal with climate change hadn't occurred to me, but now that it's been mentioned, it doesn't seem impossible, especially if climate engineering is on the agenda.

Any thoughts about whether climate is a sufficiently hard problem to inspire work on AIs?

Comment author: shminux 14 June 2013 01:43:13AM 0 points

Any thoughts about whether climate is a sufficiently hard problem to inspire work on AIs?

Climate seems far easier. At least it's known, more or less, what causes climate change. No one knows what it would take to build an AGI.

Comment author: NancyLebovitz 14 June 2013 02:06:41AM 0 points

I didn't mean that work on climate change might be useful for developing an AI; I meant that people might develop AI in order to work on weather/climate prediction.

Comment author: shminux 14 June 2013 02:32:04AM 1 point

Right, and my reply was that AGI is much harder, so that's unlikely. Sorry for not being clear.