My earlier comment is not to imply that I think "maximization of human happiness" is the most preferred goal.
An obvious one, yes. But a faulty one; "human" is a severely underspecified term.
In fact, I think that putting in place a One True Global Goal would require ultimate knowledge about the nature of being, to which we do not have access currently.
Possibly, the best we can do is come up with a plausible global goal that suits us for the medium run, while we try to find out more.
That is, after all, what we have always done as human beings.
Why build an AI at all?
That is, why build a self-optimizing process?
Why not build a process that accumulates data and helps us find relationships and answers that we would not have found ourselves? And if we want to use that same process to improve it, why not let us do that ourselves?
Why be locked out of the optimization loop, and then inevitably become subjects of a God, when we can make ourselves a critical component in that loop, and thus 'be' gods?
I find it perplexing that anyone would ever want to build an automatic self-optimizing AI and switch it to "on". No matter how well you planned things out, no matter how sure you are of yourself, by turning the thing on you are basically relinquishing control over your future to... whatever genie it is that pops out.
Why would anyone want to do that?