This is false. That said, there are currently situations that will prompt the car to hand control back to the human driver, and some conditions (such as high reflectivity / packed snow) that it can't yet handle.
4hairyfigment
To focus on one problem with this, you write:
Eliezer has read GEB and praised it above the mountains (literally). So a charitable reader of him and his colleagues might suppose that they know the point about pattern recognition, but do not see the connection that you find obvious. And in fact I don't know what you're responding to, or what you think your second quoted sentence has to do with the first, or what practical conclusion you draw from it through what argument. Perhaps you could spell it out in detail for us mortals?
6wedrifid
You don't understand what that term means.
5A1987dM
It's not nonsensical; it means “would you rather have a bowl of ice cream or a chair?” Of course the answer is “it depends”, but no-one ever claimed that U(x + a bowl of ice cream) − U(x) doesn't depend on x.
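The point that the marginal utility of the ice cream depends on the baseline x can be made concrete with a toy sketch (the scoring function and numbers here are entirely made up for illustration):

```python
# Hypothetical utility function, purely for illustration: the marginal
# value of one more bowl of ice cream depends on the baseline state x.
def utility(state):
    # Toy scoring: diminishing returns on ice cream (capped at 3 bowls),
    # and a chair is worth a flat 5.
    ice_cream = state.count("ice cream")
    chair = "chair" in state
    return min(ice_cream, 3) * 4 + (5 if chair else 0)

empty = []
stuffed = ["ice cream"] * 3

# U(x + a bowl of ice cream) - U(x), at two different baselines x:
print(utility(empty + ["ice cream"]) - utility(empty))      # 4
print(utility(stuffed + ["ice cream"]) - utility(stuffed))  # 0
```

Same question, different answers: "it depends" on x, exactly as claimed, without the question being nonsensical.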
None of that fog obscures the basic fact that the number of feminist female bank tellers cannot possibly be greater than the number of female bank tellers. The world is complex, but that does not mean that there are no simple truths about it. This is one of them.
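The inequality behind this simple truth is just the subset relation, which a quick sketch makes vivid (the population here is hypothetical, invented only to illustrate):

```python
# Hypothetical population, for illustration only: every feminist bank
# teller is, by definition, also a bank teller, so the subset can never
# outnumber the set that contains it.
tellers = [
    {"name": "A", "feminist": True},
    {"name": "B", "feminist": False},
    {"name": "C", "feminist": True},
]

feminist_tellers = [t for t in tellers if t["feminist"]]

# Equivalently, P(teller and feminist) <= P(teller), whatever the
# complexities of the rest of the world.
assert len(feminist_tellers) <= len(tellers)
print(len(feminist_tellers), len(tellers))  # 2 3
```

No amount of detail about Linda's politics can make the first count exceed the second.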
People have thought up all manner of ways of exonerating people from the conjunction fallacy, but if you go back to Eliezer's two posts about it, you will find some details of the experiments that have been conducted. His conclusion:
The conjunction error is an error, and people do make it.
1DaFranker
Every "context" can be described as a set of facts and parameters, AKA more data. Perfect data on the context means perfect information. Perfect information means perfect choice and perfect predictions. Sure, it might seem to you like the logical arguments expressed are "too basic to apply to the real world", but a utility function is really only ever "wrong" when it fails to apply the correct utility to the correct element ("sorting out your priorities"), whether that's by improper design, lack of self-awareness, missing information or some other hypothetical reason.
For every "no but theory doesn't apply to the real world" or "theory and practice are different" argument, there is always an explanation for the proposed difference between theory and reality, and this explanation can be included in the theory. The point isn't to throw out reality and use our own virtual-theoretical world. It's to update our model (the theory) in the most sane and rational way, over and over again (constantly and continuously) so that we get better.
Likewise, maximizing one's own utility function is not the reduce-oneself-to-machine-worshipper-of-the-machine-god scenario that you seem to believe. I have emotions, I get angry, I get irritated (e.g. at your response*), I am happy, etc. Yet in hindsight it appears that for several years I've been maximizing my utility function without knowing that that's what it's called (I learned the terminology and the more correct/formal ways of talking about it once I started reading LessWrong).
Your "utility function" is not one simple formula into which you plug values, compute, and call it a decision. The utility function of a person is the entirety of what that person wants and desires and values. If I tried to write down my own utility function for you, it would be both utterly incomprehensible and probably ridiculously ugly. That's assuming I'd even be capable of writing it all down, given my limited self-awareness.
Where'd that come from? Are you an artist / anthropologist?
2DaFranker
Nice try. You've almost succeeded at summarizing practically all the relevant arguments against the SI initiative that have already been refuted. Notice the last part there that says "have already been refuted".
Each of the assertions you make is one that members of the SI have already addressed and refuted. I'd take the time to decompose your post into a list of assertions and give you links to the particular articles and posts where those arguments were taken down, but I believe this would be an unwise use of my time.
It would, at any rate, be much simpler to tell you to at least read the articles on the Facing the Singularity site, which are a good popularized introduction to the topic. In particular, the point about timescale overestimates is clearly addressed there, as is that of the "complexity" of human intelligence.
I'd also like to point out that you are overcomplicating the activity of the human brain. There are no such things as "numerous small regions" that "run programs" or "communicate". These are interpretations of patterns in the underlying events, which are, first and foremost, a huge collection of neurons sending signals to other neurons, each with its own unique set of links to particular other neurons and a domain of nearby neurons to which it could potentially link. This is no different from the old core-sequence article here on LessWrong where Eliezer talks about how reality doesn't actually follow the rules of aerodynamics to move air around a plane - it's merely interactions of countless tiny [bits of something] on a grand scale, each doing its own thing, and nowhere along the entire process do the formulae we use for aerodynamics get "solved" to decide where one of the [bits of something] must go.
Anyway, I'll cut myself short here - I doubt any more deserves to be said on this. If you are willing to learn and question yourself, and actually want to become a better rationalist and obtain mo