
MixedNuts comments on A question about Eliezer - Less Wrong Discussion

33 Post author: perpetualpeace1 19 April 2012 05:27PM


Comment author: semianonymous 20 April 2012 04:58:14AM 4 points

Threads like that make me want to apply Bayes' theorem to something.

You start with probability 0.03 that Eliezer is a sociopath - the baseline rate. Then you do Bayesian updates on the answers to questions like: Does he claim grandiose importance for himself, or is he generally modest and in line with his actual accomplishments? Are his plans in line with his qualifications and prior accomplishments, or are they grandiose? Does he talk people into giving him money as a source of income? Is he known to do very expensive altruistic things whose cost exceeds any self-interested payoff? Did he claim to be an ideally moral being? And so on. You update based on the likelihood of each answer for sociopaths versus normal people. Now, I'm not saying he is anything; all I am saying is that I can't help doing such updates - first via fast pattern matching by the neural network, then, if I find the issue significant enough, explicitly with a calculator if I want to double-check.
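The kind of update described above can be sketched with the odds form of Bayes' theorem. A minimal sketch: the 0.03 baseline is from the comment, but the likelihood ratios below are invented placeholders for illustration, not estimates about any real person.

```python
def bayes_update(prior, likelihood_ratios):
    """Update a prior probability through a series of likelihood ratios
    (P(evidence | hypothesis) / P(evidence | not hypothesis)),
    using the odds form of Bayes' theorem."""
    odds = prior / (1 - prior)          # convert probability to odds
    for lr in likelihood_ratios:
        odds *= lr                      # each piece of evidence multiplies the odds
    return odds / (1 + odds)            # convert back to probability

# 0.03 baseline from the comment; the ratios are placeholder values
# standing in for answers to the questions above.
posterior = bayes_update(0.03, [2.0, 0.5, 3.0])
print(round(posterior, 3))  # 0.085
```

Note that in odds form the order of the updates doesn't matter, and evidence with a likelihood ratio near 1 (equally probable under both hypotheses) leaves the estimate essentially unchanged.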

edit: I think it would be better to change the wording here, as different people understand that word differently. Let's say we are evaluating whether the utility function includes other people to any significant extent, in the presence of communication noise and misunderstandings. Consider that some people are prone to being Pascal-wagered, so a utility function that doesn't include other people leads to attempts to Pascal-wager others, i.e. grandiose plans. As for the AI work being charitable, I don't believe it, to be honest. One has to study and get into Google (or the like) if one wants the best shot at influencing the morality of future AI. I think that's the direction everyone genuinely interested in saving mankind and genuinely worried about AI has gravitated toward. If one wants to make an impact by talking, one first needs to gain some status among the cool guys, and that means making some really impressive working accomplishments.

Comment author: MixedNuts 20 April 2012 12:16:10PM 1 point

Note that Melinda Gates fits the same criteria about as well.

Comment author: semianonymous 20 April 2012 01:34:53PM 2 points

She did altruistic things that cost more than any expected self-interested payoff, though; actions that are more expensive to fake than the gain from faking them are a very strong predictor of non-psychopathy. The distinction between a psychopath who is genuinely altruistic and a non-psychopath is that of a philosophical zombie versus a human.

Comment author: MixedNuts 20 April 2012 01:38:20PM 4 points

Either Eliezer picked a much less lucrative career than he could have had, with the same hours and enjoyment, because he wanted to be altruistic; or I'm mistaken about career prospects for good programmers; or he's a dirty rotten conscious liar about his ability to program.

Comment author: TheOtherDave 20 April 2012 03:24:41PM 4 points

Is it clear that he would have gotten the same enjoyment out of a career as a programmer?

Comment author: semianonymous 20 April 2012 02:45:09PM 6 points

People don't gain the ability to program out of thin air... everyone able to program has a long list of working projects they trained on. In any case, programming is real work: it is annoying, it takes training, it takes education, and it slaps your ego on the nose just about every time you hit compile after writing any interesting code. And newbies are grossly mistaken about their abilities. You can't trust anyone to measure their own skills accurately, let alone report them.

Comment author: MixedNuts 20 April 2012 02:54:31PM 1 point

Are you claiming (a non-negligible probability) that Eliezer would be a worse programmer if he'd decided to take up programming instead of AI research (perhaps because he would have worked on boring projects and given up?), or that he isn't competent enough to get hired as a programmer now?