KatjaGrace comments on Superintelligence Reading Group 2: Forecasting AI - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Good question. The basic argument is that whatever an AI (or any creature) values, more resources are very likely to be useful for that goal. For instance, if it just wants to calculate whether large numbers are prime or not, it will do this much better if it has more resources to devote to calculation. This is elaborated somewhat in papers by Omohundro and Bostrom.
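The prime-calculation example can be made concrete with a toy sketch (my own illustration, not from the original comment): assuming a simple trial-division primality checker and a "compute budget" measured in numbers examined, a larger budget never yields less progress toward the goal, and usually yields strictly more.

```python
def is_prime(n: int) -> bool:
    """Deterministic trial division; fine for a toy demo."""
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

def primes_found(start: int, budget: int) -> int:
    """Count primes discovered when `budget` numbers can be examined."""
    return sum(1 for n in range(start, start + budget) if is_prime(n))

# More resources (a bigger budget) mean at least as much, and typically
# strictly more, progress on the goal -- whatever that goal happens to be.
small = primes_found(10**6, 100)
large = primes_found(10**6, 1000)
assert large >= small
```

The point carries over to any goal with this structure: since extra resources weakly dominate, a capable goal-seeking agent has an instrumental reason to acquire them.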
That is, while exploration and resource acquisition are indeed in our DNA, there is a very strong reason for them to be there, so analogous drives are likely to appear in the DNA-analog of any successful general goal-seeking creature.