Even with the discussion section, there are ideas or questions too short or inchoate to be worth a post.
This thread is for the discussion of Less Wrong topics that have not appeared in recent posts. If a discussion gets unwieldy, celebrate by turning it into a top-level post.
When you build automated systems capable of acting faster and more forcefully than humans can keep up with, I think you just have to bite the bullet and accept that you have to get it right. The idea of building such a system and then having it wait for human feedback, while emotionally tempting, just doesn't work.
If you build an automatic steering system for a car that travels 250 mph, you either trust it or you don't, but you certainly don't let humans anywhere near the steering wheel at that speed.
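To make the intuition concrete, here's a back-of-envelope sketch of how far that car travels before a human could even begin to react. The 1.5-second reaction time is an assumed ballpark figure, not from the original comment:

```python
# How far does a 250 mph car travel during a typical human reaction time?
MPH_TO_MPS = 0.44704          # exact miles-per-hour to meters-per-second factor
speed_mps = 250 * MPH_TO_MPS  # ~111.8 m/s
reaction_time_s = 1.5         # assumed human braking-reaction time
distance_m = speed_mps * reaction_time_s
print(round(distance_m, 1))   # roughly 167.6 m covered before any human input
```

In other words, by the time the passenger's hands reach the wheel, the situation they were reacting to is well over 150 meters behind them.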
Which is to say that while I sympathize with you here, I'm not at all convinced that the distinction you're highlighting actually makes much difference, unless we impose the artificial constraint that the environment changes no faster than a typical human can assimilate it well enough to provide meaningful feedback.
I mean, without that constraint, a powerful enough environment-changer simply won't receive meaningful feedback, no matter how willing it might be to take it if offered, any more than the 250-mph artificial car driver can get meaningful feedback from its human passenger.
And while one could add such a constraint, I'm not sure I want to die of old age while an agent capable of making me immortal waits for humanity to collectively and informedly say "yeah, OK, we're cool with that."
(ETA: Hm. On further consideration, my last paragraph is bogus. Pretty much everyone would be OK with letting everybody live until the decision gets made; it's not a make-immortal vs. let-die choice. That said, there probably are things that have this sort of all-or-nothing aspect to them; I picked a poor example, but I think my point still holds.)