
Jesper_Ostman comments on A question about Eliezer - Less Wrong Discussion

33 Post author: perpetualpeace1 19 April 2012 05:27PM




Comment author: semianonymous 20 April 2012 04:58:14AM *  4 points [-]

Threads like that make me want to apply Bayes' theorem to something.

You start with probability 0.03 that Eliezer is a sociopath (the base rate). Then you do Bayesian updates on the answers to questions like: Does he ascribe grandiose importance to himself, or is he generally modest and in line with his actual accomplishments? Are his plans grandiose, out of line with his qualifications and prior accomplishments? Does he talk people into giving him money as a source of income? Is he known to do very expensive altruistic things whose cost exceeds any self-interested payoff, or not? Did he claim to be an ideally moral being? And so on. You update based on the likelihood of each answer for sociopaths versus normal people. Now, I'm not saying he is anything; all I am saying is that I can't help doing such updates: first via fast pattern matching by the neural network, then, if I find the issue significant enough, explicitly with a calculator if I want to double-check.
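The explicit version of this update is easiest in odds form: posterior odds equal prior odds times the product of the likelihood ratios. A minimal sketch, where the specific likelihood ratios are purely illustrative placeholders and not estimates about any actual person:

```python
def bayes_update(prior: float, likelihood_ratios: list[float]) -> float:
    """Posterior probability via odds form of Bayes' theorem.

    posterior odds = prior odds * product of likelihood ratios,
    where each ratio is P(answer | hypothesis) / P(answer | not hypothesis).
    """
    odds = prior / (1.0 - prior)
    for lr in likelihood_ratios:
        odds *= lr
    return odds / (1.0 + odds)

# Start from the 0.03 base rate; each hypothetical answer multiplies
# the odds by its likelihood ratio (values here are made up).
posterior = bayes_update(0.03, [2.0, 0.5, 3.0])
print(round(posterior, 4))  # prints 0.0849
```

Note that an update with a likelihood ratio below 1 (an answer more typical of normal people) pushes the probability back down, which is why a handful of weak cues rarely moves a 3% prior very far.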

edit: I think it would be better to change the wording here, since different people understand that word differently. Let's say we are evaluating whether the utility function includes other people to any significant extent, allowing for communication noise and misunderstandings. Consider that some people are prone to being Pascal-wagered, so a utility function that doesn't include other people leads to attempts to Pascal-wager others, i.e. grandiose plans. As for the AI work being charitable, I don't believe it, to be honest. One has to study and get into Google (or the like) if one wants the best shot at influencing the morality of future AI. I think that's the direction in which everyone genuinely interested in saving mankind and genuinely worried about AI has gravitated. If one wants to make an impact by talking, one first needs to gain some status among the cool guys, and that means making some really impressive working accomplishments.

Comment author: Jesper_Ostman 20 April 2012 06:05:46PM 7 points [-]

It seems you are talking about high-functioning psychopaths, rather than psychopaths according to the diagnostic DSM-IV criteria. Thus the prior should differ from 0.03. Assuming a high-functioning psychopath is necessarily a psychopath, it seems it should be far lower than 0.03, at least judging from the criteria:

A) There is a pervasive pattern of disregard for and violation of the rights of others occurring since age 15 years, as indicated by three or more of the following:

1. failure to conform to social norms with respect to lawful behaviors, as indicated by repeatedly performing acts that are grounds for arrest;
2. deception, as indicated by repeatedly lying, use of aliases, or conning others for personal profit or pleasure;
3. impulsiveness or failure to plan ahead;
4. irritability and aggressiveness, as indicated by repeated physical fights or assaults;
5. reckless disregard for safety of self or others;
6. consistent irresponsibility, as indicated by repeated failure to sustain consistent work behavior or honor financial obligations;
7. lack of remorse, as indicated by being indifferent to or rationalizing having hurt, mistreated, or stolen from another.

B) The individual is at least age 18 years.

C) There is evidence of conduct disorder with onset before age 15 years.

D) The occurrence of antisocial behavior is not exclusively during the course of schizophrenia or a manic episode.

Comment author: semianonymous 20 April 2012 09:07:27PM *  0 points [-]

He is a high-IQ individual, though. That is rare on its own. And there are smart people who pretty much maximize their personal utility only.