The strongest counterargument offered was that a scope-limited AI doesn't stop rogue unfriendly AIs from arising and destroying the world.
I don't quite understand that argument; maybe someone could elaborate.
If the rule is 'optimize X for T seconds', why would an AGI treat 'optimize X' any differently from 'for T seconds'? In other words, why is it assumed that we can succeed in creating a paperclip maximizer that cares strongly enough about the design parameters of paperclips to consume the universe (why would it do that unless it were told to?), yet somehow ignores every design parameter that concerns spatio-temporal scope boundaries or resource limitations?
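To make the point concrete, here is a minimal sketch (purely illustrative; every name in it, `bounded_optimize`, `mutate`, `quality`, is hypothetical and not from the original discussion) of an optimizer whose time limit is encoded exactly like any other design parameter, so there is nothing privileged for the agent to "ignore":

```python
import random
import time

def bounded_optimize(objective, seed, mutate, time_limit_s):
    """Hill-climb on `objective`, treating the time budget as just
    another design parameter of the goal: when the budget is spent,
    the loop halts, the same way it respects any other constraint."""
    deadline = time.monotonic() + time_limit_s
    best = seed
    best_score = objective(best)
    while time.monotonic() < deadline:
        trial = mutate(best)
        score = objective(trial)
        if score > best_score:
            best, best_score = trial, score
    return best

# Hypothetical usage: maximize a toy "paperclip quality" score for 2 seconds.
def quality(x):
    return -(x - 3.7) ** 2  # peak at x = 3.7

result = bounded_optimize(
    quality,
    seed=0.0,
    mutate=lambda x: x + random.uniform(-0.1, 0.1),
    time_limit_s=2.0,
)
print(result)
```

In this toy setup the time limit and the objective live in the same goal specification, which is the intuition behind asking why an AGI would honor one part of that specification while discarding the other.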
I see that there is a subset of unfriendly AGI designs that would never halt, or that would destroy humanity while pursuing their goals. But how large is that subset, and how many designs actually halt or proceed very slowly?
I don't think that's a good analogy. I've never heard of a carnivore who thought meat eating was morally better. Their argument is that meat eating is not so much worse that it becomes an ethical no-no rather than an ethically neutral lifestyle choice (morally level ground).
People can even carry on doing something they think is morally wrong under the excuse of akrasia.
And gay marriage is slowly becoming accepted.
I suspect that you haven't looked very hard or for very long.