Even with the discussion section, there are ideas or questions too short or inchoate to be worth a post.
This thread is for discussion of Less Wrong topics that have not appeared in recent posts. If a discussion gets unwieldy, celebrate by turning it into a top-level post.
Sure, and people feel safer driving than riding in an airplane, because driving makes them feel more in control, even though it's actually far more dangerous per mile.
Probably a lot of people would feel more comfortable with a genie that took orders than an AI that was trying to do any of that extrapolating stuff. Until they died, I mean. They'd feel more comfortable up until that point.
Feedback just supplies a form of information. If you disentangle the I-want-to-drive bias and say exactly what you want done with that information, it comes out to the AI observing humans and updating some beliefs based on their behavior, and then it turns out that most of that information is obtainable and predictable in advance. There's also a moral component, where making a decision is different from predictably making that decision, but that's on an object level rather than a metaethical level, and it just says: "There are some things we wouldn't want the AI to do until we actually decide them, even if the decision is predictable in advance, because the decision itself is significant and not just the strategy and consequences following from it."
I don't think it's the sense of control that makes people feel safer in a car so much as the fact that they're not miles up in the air.
I'm pretty confident that people would feel more secure with a magical granter of wishes than with a command-taking AI (provided that the granter was not an actual genie, which are known to be jerks), because intelligent beings fall into a class that we are use...