lessdazed comments on Wanted: backup plans for "seed AI turns out to be easy" - Less Wrong
What do you mean by "competition"? The millions are each trying to maximize their own goals, but usually don't care to suppress others' goals. Cooperating in situations of limited resources, rather than expending resources on fighting, is, I think, universal - in general, game theory applies to smarter and stronger beings just as it does to us, with differences of the type "AIs can merge as a way of cooperating, though humans can't," but not differences of the type "with beings of silicon substrate, cooperation is always inferior to conflict."
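To make the game-theory point concrete, here is a minimal sketch (my illustration, not from the comment) using the standard iterated prisoner's dilemma with conventional payoff values; the strategies and numbers are assumptions chosen for illustration, but the conclusion - that mutual cooperation can outscore mutual conflict for any agents, whatever their substrate - is exactly the structure the comment appeals to:

```python
# Payoff to (me, them) for each (my_move, their_move).
# 'C' = cooperate, 'D' = defect/fight. Standard PD values, assumed for illustration.
PAYOFFS = {
    ('C', 'C'): (3, 3),  # both cooperate: share the limited resource
    ('C', 'D'): (0, 5),  # I'm exploited
    ('D', 'C'): (5, 0),  # I exploit
    ('D', 'D'): (1, 1),  # both fight: resources burned on conflict
}

def play(strategy_a, strategy_b, rounds=100):
    """Run an iterated game; return total scores (score_a, score_b)."""
    score_a = score_b = 0
    hist_a, hist_b = [], []
    for _ in range(rounds):
        move_a = strategy_a(hist_b)  # each side sees the other's history
        move_b = strategy_b(hist_a)
        pa, pb = PAYOFFS[(move_a, move_b)]
        score_a += pa
        score_b += pb
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b

# Two simple strategies: reciprocate, or always fight.
tit_for_tat = lambda opp_hist: opp_hist[-1] if opp_hist else 'C'
always_fight = lambda opp_hist: 'D'

coop_scores = play(tit_for_tat, tit_for_tat)    # → (300, 300)
fight_scores = play(always_fight, always_fight)  # → (100, 100)
```

Over repeated interactions, mutual cooperators end with 300 points each while mutual fighters end with 100 each - conflict is not automatically the superior policy, which is the comment's claim about smarter-than-human agents.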
I don't think his extrapolated volition would endorse that. I don't think theism could survive extrapolated cognition.
There is an illusion of transparency here because I do not know what that means. Is that a purely destructive thing, is it supposed to combine destruction with "planting" baby AIs like the one that produced it, or what?
I think it would motivate merging. That's what happened with biological cells and tribes of humans.
I don't see why they only trade in your scenario (or would only fight in mine). I don't see how you would program the individual AI to divide the universe into slices and enforce some rules among individuals. This seems like the standard case of giving a singleton a totally alien value set after which it tiles the universe with smiley faces or equivalent.
I don't see how it's directly comparable to creating millions of AIs.
You cannot assume that the volitions of millions of agents will not include something catastrophically bad for you. "Extrapolated Volition" doesn't make people nice.
It only takes one.
We were talking, among other things, about burning the cosmic commons. It's an allusion to Robin Hanson's "Burning the Cosmic Commons."