pleeppleep comments on So You Want to Save the World - Less Wrong

41 Post author: lukeprog 01 January 2012 07:39AM




Comment author: Bugmaster 05 January 2012 10:01:55PM 4 points

No, the world must be saved by mathematicians, computer scientists, and philosophers. This is because the creation of machine superintelligence this century will determine the future of our planet...

You sound awfully certain of that, especially considering that, as you say later, the problems are poorly defined, the nature of the problem space is unclear, and the solutions are unknown.

If I were a brilliant scientist, engineer, or mathematician (which I'm not, sadly), why should I invest my efforts in AI research when I could be working on more immediate and well-defined goals? There are quite a few of them, including but not limited to:

  • Prevention of, or compensation for, anthropogenic global climate change
  • Avoiding economic collapse
  • Developing a way to generate energy cheaply and sustainably
  • Reducing and eliminating famine and poverty in all nations

True, developing a quasi-godlike friendly AI would probably solve all of these problems in one stroke, but that might be a bit of a long shot, whereas these problems and many others need to be solved today.

Comment author: pleeppleep 18 February 2012 03:48:13AM 0 points

Sorry, it's been a while since everyone stopped responding to this comment, but these goals wouldn't even begin to cover the number of problems that would be solved if our rough estimates of the capabilities of FAI are correct. You could easily add another ten issues to this list and still be nowhere near a truly just world. Not to mention that each goal you add makes solving these problems less likely, given the social resistance you would encounter.

And suppose humans truly are incapable of solving some of these issues under present conditions; this is not at all unlikely, and an AI would have a much better shot at finding solutions. The added delay and greater risk may make pursuing FAI less rewarding than any one, or perhaps even three, of these problems, but the sheer number of problems human beings face that could be solved through the Singularity, if all goes well, leads me to believe it is far more worthwhile than any of these issues.