Bugmaster comments on So You Want to Save the World - Less Wrong

41 Post author: lukeprog 01 January 2012 07:39AM


Comment author: Bugmaster 05 January 2012 10:01:55PM 4 points

No, the world must be saved by mathematicians, computer scientists, and philosophers. This is because the creation of machine superintelligence this century will determine the future of our planet...

You sound awfully certain of that, especially considering that, as you say later, the problems are poorly defined, the nature of the problem space is unclear, and the solutions are unknown.

If I were a brilliant scientist, engineer, or mathematician (which I'm not, sadly), why should I invest my efforts into AI research, when I could be working on more immediate and well-defined goals? There are quite a few of them, including but not limited to:

  • Prevention of, or compensation for, anthropogenic global climate change
  • Avoiding economic collapse
  • Developing a way to generate energy cheaply and sustainably
  • Reducing and eliminating famine and poverty in all nations

True, developing a quasi-godlike friendly AI would probably solve all of these problems in one hit, but that might be a bit of a long shot, whereas these problems and many others need to be solved today.

Comment author: pleeppleep 18 February 2012 03:48:13AM 0 points

Sorry, it's been a while since everyone stopped responding to this comment, but these goals wouldn't even begin to cover the number of problems that would be solved if our rough estimates of the capabilities of FAI are correct. You could easily add another 10 issues to this list and still be nowhere near a truly just world, not to mention that each goal you add makes solving such problems less likely because of the social resistance you would encounter. And suppose humans truly are incapable of solving some of these issues under present conditions; this is not at all unlikely, and an AI would have a much better shot at finding solutions. The added delay and greater risk may make pursuing FAI less rewarding than any one, or possibly even three, of these problems, but the sheer number of problems human beings face that could be solved through the Singularity, if all goes well, leads me to believe it is far more worthwhile than any of these issues.

Comment author: TheOtherDave 05 January 2012 10:18:38PM 3 points

Well, I'm unlikely to solve those problems today regardless. Either way, we're talking about estimated value calculations about the future made under uncertainty.

Comment author: Bugmaster 07 January 2012 02:47:00AM 1 point

Fair enough, but all of the examples I'd listed are reasonably well-defined problems, with reasonably well-outlined problem spaces, whose solutions appear to be, if not within reach, then at least feasible given our current level of technology. If you contrast this with the nebulous problem of FAI as lukeprog outlined it, would you not conclude that the probability of solving these less ambitious problems is much higher? If so, then the increased probability could compensate for the relatively lower utility (even though, in absolute terms, nothing beats having your own Friendly pocket genie).

Comment author: TheOtherDave 07 January 2012 03:11:33AM 0 points

would you not conclude that the probability of solving these less ambitious problems is much higher?

Honestly, the error bars on all of these expected-value calculations are so wide for me that they pretty much overlap. Especially when I consider that building a run-of-the-mill marginally-superhuman non-quasi-godlike AI significantly changes my expected value of all kinds of research projects, that cheap plentiful energy changes my expected value of AI projects, and so on; half of them include one another as factors anyway.

So, really? I haven't a clue.
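As an aside, the overlapping-error-bars point can be made concrete with a crude interval calculation. All of the numbers below are purely hypothetical placeholders (none appear anywhere in this thread); the sketch only illustrates how interval arithmetic on EV = probability × utility can yield ranges that overlap so much that ranking the causes is meaningless:

```python
# Hypothetical expected-value comparison under uncertainty.
# Each cause gets a (low, high) range for its probability of success
# and for its payoff in arbitrary utility units. The EV interval is
# then [p_low * u_low, p_high * u_high].

causes = {
    "cheap sustainable energy": {"p": (0.3, 0.8), "utility": (10, 50)},
    "friendly AI":              {"p": (0.001, 0.1), "utility": (1_000, 100_000)},
}

def ev_interval(p_range, u_range):
    """Worst- and best-case expected value for EV = p * utility."""
    return p_range[0] * u_range[0], p_range[1] * u_range[1]

for name, c in causes.items():
    lo, hi = ev_interval(c["p"], c["utility"])
    print(f"{name}: EV somewhere between {lo:g} and {hi:g}")
```

With these made-up figures, the "safe" cause's interval sits entirely inside the speculative cause's interval, so neither dominates; that is the situation TheOtherDave is describing, before even accounting for the causes feeding into each other.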

Comment author: Bugmaster 07 January 2012 03:22:12AM 1 point

Fair enough; I guess my error bars are just a lot narrower than yours. It's possible I'm being too optimistic about them.