Mitchell_Porter comments on A question about Eliezer - Less Wrong Discussion
Threads like that make me want to apply Bayes' theorem to something.
You start with probability 0.03 that Eliezer is a sociopath - the baseline. Then you do Bayesian updates on answers to questions like: Does he imagine himself to be of grandiose importance, or is he generally modest and in line with his actual accomplishments? Are his plans out of line with his qualifications and prior accomplishments, or are they grandiose? Is he talking people into giving him money as a source of income? Is he known to do very expensive altruistic things whose cost exceeds any self-interested payoff, or not? Did he claim to be an ideally moral being? And so on. You do updates based on the likelihood of such traits for sociopaths versus normal people. Now, I'm not saying he is anything in particular; all I am saying is that I can't help but do such updates - first via fast pattern matching by the neural network, then, if I find the issue significant enough, explicitly with a calculator if I want to double-check.
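The "explicitly with a calculator" step above can be sketched in a few lines. This is only an illustration of the mechanics: the 0.03 prior comes from the comment, but the likelihood ratios below are made-up placeholder numbers, not actual estimates about anyone.

```python
# Sequential Bayesian updating in odds form: convert the prior to odds,
# multiply by one likelihood ratio per piece of evidence, convert back.

def update_prob(prior_prob, likelihood_ratios):
    """Apply a series of likelihood ratios to a prior probability.

    Each ratio is P(evidence | hypothesis) / P(evidence | not hypothesis):
    values above 1 push toward the hypothesis, below 1 push away.
    """
    odds = prior_prob / (1.0 - prior_prob)
    for lr in likelihood_ratios:
        odds *= lr
    return odds / (1.0 + odds)

# Hypothetical numbers: the 0.03 base rate, then one observation that
# favors the hypothesis (LR = 4) and one that counts against it (LR = 0.5).
posterior = update_prob(0.03, [4.0, 0.5])
print(round(posterior, 4))  # 0.0583
```

The odds form makes the "chain of questions" structure explicit: each answer contributes one multiplicative factor, and the order of the updates doesn't matter.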
edit: I think it will be better to change the wording here, as different people understand that word differently. Let's say we are evaluating whether the utility function includes other people to any significant extent, in the presence of communication noise and misunderstandings. Consider that some people are prone to being Pascal-wagered, so a utility function that doesn't include other people leads to attempts to Pascal-wager others, i.e. grandiose plans. On the AI work being charitable, I don't believe it, to be honest. One has to study and get into Google (or the like) if one wants the best shot at influencing the morality of future AI. I think that's the direction toward which everyone genuinely interested in saving mankind and genuinely worried about AI has gravitated. If one wants to make an impact by talking, one needs to first gain some status among the cool guys, and that means making some really impressive working accomplishments.
Can you name one person working in AI, commercial or academic, whose career is centered on the issue of AI safety? Whose actual research agenda (and not just what they say in interviews) even acknowledges the fact that artificial intelligence is potentially the end of the human race, just as human intelligence was the end of many other species?