Comment author: ChristianKl 19 October 2013 05:39:10PM *  10 points [-]

The IPCC has a nice mapping from words to probabilities that they use when talking about global warming claims:

In this Summary for Policymakers, the following terms have been used to indicate the assessed likelihood, using expert judgement, of an outcome or a result: Virtually certain > 99% probability of occurrence, Extremely likely > 95%, Very likely > 90%, Likely > 66%, More likely than not > 50%, Unlikely < 33%, Very unlikely < 10%, Extremely unlikely < 5%.
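For anyone who wants to play with the scale, the mapping can be written as a simple lookup table. This is just a sketch built from the thresholds quoted above; the function name and the "strongest applicable term" rule are my own illustration, not anything the IPCC specifies:

```python
# Lower-bound probability of occurrence for each "likely"-side IPCC term,
# as listed in the Summary for Policymakers quote above.
IPCC_LIKELIHOOD = {
    "virtually certain": 0.99,    # > 99%
    "extremely likely": 0.95,     # > 95%
    "very likely": 0.90,          # > 90%
    "likely": 0.66,               # > 66%
    "more likely than not": 0.50, # > 50%
}

# Upper-bound terms on the "unlikely" side of the scale.
IPCC_UNLIKELIHOOD = {
    "unlikely": 0.33,           # < 33%
    "very unlikely": 0.10,      # < 10%
    "extremely unlikely": 0.05, # < 5%
}

def strongest_term(p):
    """Return the strongest 'likely'-side term that applies to probability p,
    or None if p is not above any threshold."""
    for term, bound in sorted(IPCC_LIKELIHOOD.items(), key=lambda kv: -kv[1]):
        if p > bound:
            return term
    return None
```

So a 97% probability maps to "extremely likely", 70% to merely "likely", and anything at or below 50% gets no "likely"-side term at all.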

Comment author: jmmcd 19 October 2013 10:40:34PM 0 points [-]

I like the principle, but 5% is "extremely unlikely"? Something that happens on the way to work once every four weeks?

Comment author: lukeprog 18 September 2013 04:07:30AM 1 point [-]

Artificial Intelligence as a Danger to Mankind seems pretty good, if we think it's good to emphasize the risk angle in the title. Though unlike many publishers, I'll also be getting the author's approval before choosing a title.

Comment author: jmmcd 18 September 2013 01:10:57PM 1 point [-]

"X as a Y" is an academic idiom. Sounds wrong for the target audience.

In response to comment by [deleted] on Mistakes repository
Comment author: khafra 09 September 2013 12:27:41PM 0 points [-]

On the other hand, waiting until you have your Ph. D. to begin breeding carries other risks.

Are there risks other than age-related rise in mutational load? My cousin waited to have kids until she finished her microbiology PhD, and they seem to be doing fine.

In response to comment by khafra on Mistakes repository
Comment author: jmmcd 09 September 2013 07:36:19PM 1 point [-]

Not being able to have any children, or as many as you (later realised you) wanted.

Comment author: wedrifid 07 September 2013 09:39:20AM 1 point [-]

I don't see that it was obvious, given that none of the AI players are actually superintelligent.

If the finding was that humans pretending to be AIs failed then this would weaken the point. As it happens the reverse is true.

Comment author: jmmcd 08 September 2013 10:25:07PM 0 points [-]

The claim is that it was obvious in advance. The whole reason AI-boxing is interesting is that the AI successes were unexpected, in advance.

Comment author: passive_fist 06 September 2013 12:55:03AM 0 points [-]

Is it even necessary to run this experiment anymore? Eliezer and multiple other people have tried it, and the thesis has been proven.

Further, the thesis was always glaringly obvious to anyone who was even paying attention to what superintelligence meant. However, like all glaringly obvious things, there are inevitably going to be some naysayers. Eliezer conceived of the experiment as a way to shut them up. Well, it didn't work, because they're never going to be convinced until an AI is free and rapidly converting the Universe to computronium.

I can understand doing the experiment for fun, but to prove a point? Not necessary.

Comment author: jmmcd 07 September 2013 09:05:20AM 0 points [-]

the thesis was always glaringly obvious to anyone who was even paying attention to what superintelligence meant

I don't see that it was obvious, given that none of the AI players are actually superintelligent.

Comment author: Carinthium 02 September 2013 02:35:51PM 1 point [-]

A- Not so. If the human neither consciously nor subconsciously cares about deterrence, evolutionary reasons are irrelevant.

B- Only if, and this is a big if, you agree with the Eliezer-Harris school of thought, which says some things are morally true by definition. Because Harris agrees with him, I was granting him that as his own unique idea of what being moral is. However, at that point I was concerned with demonstrating that morality cannot fit as a subcategory of science.

C- Harris appears to claim that there is a scientific basis for valuing well-being: he explicitly repudiates the hypothesis that there is none, calling it comparable to the claim that there is no scientific basis for valuing health.

Comment author: jmmcd 02 September 2013 08:53:49PM *  0 points [-]

This discussion isn't getting anywhere, so, all the best :)

Comment author: Carinthium 02 September 2013 01:06:23PM *  0 points [-]

A- O.K, demonstrate that the idea of deterrent exists somewhere within their brains.

B- Although it would be as alien as being a paperclip maximiser, say I deliberately want to know as little as possible. That would be a hypothetical goal for which science would not be useful.

As for how this counters Harris: Harris claims that some things are moral by definition, and that proper morality is a subcategory of science. I counter-argue that the fundamental differences between the nature of morality and the nature of science are problems with this categorisation.

I'm not sure if Harris's health analogy is relevant enough to this part of the argument to put here, but it falls flat because health is relevant to far more potential human goals than morality is. Moral dilemmas in which a person has to choose between two possible moral values are plausibly enough addressed (though I have reservations), so I'll give him a pass on that one. But what about a situation where a person has to choose between acting selfishly and acting selflessly? You can say one is the moral choice by definition, depending on the definition of moral, but saying "It's moral, so do it" leads to the question "Why should I do what is moral?" With health, people don't actually question it because it tends to support their goals, although there is a similarity Harris and his critics do not appear to realise: a person can and might ask "Why should I do what is healthy?" in some circumstances.

C- What I am trying to argue with my psychopath analogy is that something can be good science without in any way being moral in a sense that Sam Harris would recognise as 'moral'. The psychopath in my scenario uses the scientific method in every way except those which he can't by definition, given his goals; he even has a peer review committee! His behaviour is therefore just as scientific as that of a scientist trying to, say, cure cancer.

D- I was only acting from what I read in his responses to the critics, which was my disclaimer from the start. I made a mistake, but I left open the possibility of such for lack of time.

Comment author: jmmcd 02 September 2013 02:12:32PM *  1 point [-]

O.K, demonstrate that the idea of deterrent exists somewhere within their brains.

Evolutionary game theory and punishment of defectors is all the answer you need. You want me to point at a deterrent region, somewhere to the left of Broca's?

You say that science is useful for truths about the universe, whereas morality is useful for truths useful only to those interested in acting morally. It sounds like you agree with Harris that morality is a subcategory of science.

something can be good science without in any way being moral that Sam Harris would recognise as 'moral'.

Still, so what? He's not saying that all science is moral (in the sense of "benevolent" and "good for the world"). That would be ridiculous, and would be orthogonal to the argument of whether science can address questions of morality.

Comment author: Carinthium 02 September 2013 12:02:47PM *  1 point [-]

Evolutionarily it is a REASON why the desire evolved that way, but it is not the same thing as what the person FEELS, on a conscious or subconscious level. If you claim that evolutionary reasons are a person's 'true preferences', then it follows that a proper morality should focus on maximising everyone's relative share of the gene pool, at the expense of, say, animals, rather than on anything else.

EDIT: I'm also curious about your response to all of my arguments.

Comment author: jmmcd 02 September 2013 12:48:55PM 1 point [-]

If you claim that evolutionary reasons are a person's 'true preferences'

No, of course not. It's still wrong to say that deterrent is nowhere in their brains.

Concerning the others:

Scientific inquiry perceives facts which are true and useful except for goals which run directly counter to science. Morality perceives 'facts' which are only useful to those who wish to follow a moral route.

I don't see what "goals which run directly counter to science" could mean. Even if you want to destroy all scientists, are you better off knowing some science or not? Anyway, how does this counter anything Harris says?

Although most people would be outraged, they probably wouldn't call it unscientific.

Again, so what? How does anything here prevent science from talking about morality?

As far as I can tell, Harris does not account for the well-being of animals.

He talks about well-being of conscious beings. It's not great terminology, but your inference is your own.

Comment author: Carinthium 02 September 2013 12:50:22AM *  -1 points [-]

I don't have the book, so I don't think I'm eligible for the prize. Suffice it to say that I've read his summary in "Response to Critics", and anybody who can't refute the tripe philosophy shown there (maybe he's got better in the book, I can't be sure) doesn't deserve to be considered anything more than a crap philosopher.

EDIT: Making criticisms as I go.

1- There is a fundamental difference between the question of science and the question of morality. Scientific inquiry perceives facts which are true and useful except for goals which run directly counter to science. Morality perceives 'facts' which are only useful to those who wish to follow a moral route.

2- Say I were a psychopath obsessed with exterminating all humanity for some reason. I do research on a weapon to do so using every known principle of science, testing hypotheses scientifically, making dry runs, having peer review through similarly psychopathic colleagues, etc. Although most people would be outraged, they probably wouldn't call it unscientific.

Harris could object that this is far from what most people associate with science. However, scientific research is associated with a lot of things, white coats for example. Why should morality be any better?

I might also point out that many projects seen in reality as scientific would be unscientific under Harris's definition.

3- As far as I can tell, Harris does not account for the well-being of animals. This is an ethical question his pseudo-philosophy cannot answer; it merely assumes humans are all that matter. He also cannot account for why all humans should be considered equal, despite years of history showing humans usually haven't considered each other such.

4- Much human morality has little or no relationship to well-being. Say A murders B's entire family in cold blood. Not only B but many others who witnessed the deed will have a moral desire for A to be punished, independent of, and contrary to, human well-being. Deterrent is nowhere in their brains.

Comment author: jmmcd 02 September 2013 10:17:25AM 1 point [-]

I disagree with all your points, but will stick to 4: "Deterrent is nowhere in their brains" is wrong -- read about altruism, game theory, and punishment of defectors, to understand where the desire comes from.
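The game-theoretic point can be illustrated with a toy iterated prisoner's dilemma. This is a sketch with standard textbook payoffs, not anything from the thread: against a partner who punishes defection by withdrawing cooperation (a grim-trigger strategy), defecting is a losing move, so a disposition to punish functions as a deterrent even if nobody consciously frames it that way.

```python
# Standard prisoner's dilemma payoffs for the row player:
# T (temptation) > R (reward) > P (punishment) > S (sucker).
T, R, P, S = 5, 3, 1, 0

def payoff(me, other):
    """Row player's payoff; 'C' = cooperate, 'D' = defect."""
    return {("C", "C"): R, ("C", "D"): S, ("D", "C"): T, ("D", "D"): P}[(me, other)]

def play_vs_punisher(strategy, rounds=10):
    """Total payoff for a fixed strategy ('C' or 'D') against a grim-trigger
    punisher: the punisher cooperates until it sees a defection, then
    defects (punishes) in every remaining round."""
    total, punishing = 0, False
    for _ in range(rounds):
        their_move = "D" if punishing else "C"
        total += payoff(strategy, their_move)
        if strategy == "D":
            punishing = True
    return total
```

Over ten rounds, steady cooperation earns 10 x R = 30, while defection earns T once and then P forever, 5 + 9 = 14: the mere presence of a punisher makes cooperation the profitable choice, which is where the evolved "desire to punish" gets its teeth.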

Comment author: scientism 01 September 2013 10:23:39PM *  1 point [-]

I'd be willing to give this a shot, but his thesis, as stated, seems very slippery (I haven't read the book):

"Morality and values depend on the existence of conscious minds—and specifically on the fact that such minds can experience various forms of well-being and suffering in this universe."

This needs to be reworded but appears to be straightforwardly true and uncontroversial: morality is connected to well-being and suffering.

"Conscious minds and their states are natural phenomena, fully constrained by the laws of Nature (whatever these turn out to be in the end)."

True and uncontroversial on a loose enough interpretation of "constrained".

"Therefore, there must be right and wrong answers to questions of morality and values that potentially fall within the purview of science."

This is the central claim in the thesis - and the most (only?) controversial one - but he's already qualifying it with "potentially." I'm guessing any response of his will turn on (a) the fact that he's only saying it might be the case and (b) arbitrarily broadening the definition of science. Nevertheless, moral questions aren't (even potentially) empirical, since they're obviously seeking normative and not factual answers. But given that this is obvious, it's hard to imagine that one could change his mind. It's rather like being invited to challenge the thesis of someone who claims scientific theories are works of fiction. You've got your work cut out when somebody has found themselves that far off the beaten path. I suspect the argument of the book runs: this philosophical thesis is misguided, this philosophical thesis is misguided, etc, science is good, we can get something that sort of looks like morality from science, so science - i.e., he takes himself to be explaining morality when he's actually offering a replacement. That's very hard to argue against. I think, at best, you're looking at $2000 for saying something he finds interesting and new, but that's very subjective.

"On this view, some people and cultures will be right (to a greater or lesser degree), and some will be wrong, with respect to what they deem important in life."

Assuming "what they deem important in life" is supposed to be parsed as "morality" then this appears to follow from his thesis.

Comment author: jmmcd 02 September 2013 10:07:33AM -2 points [-]

Nevertheless, moral questions aren't (even potentially) empirical, since they're obviously seeking normative and not factual answers.

You can't go from an is to an ought. Nevertheless, some people go from the "well-being and suffering" idea to ideas like consequentialism and utilitarianism, and from there the only remaining questions are factual. Other people are prepared to see a factual basis for morality in neuroscience and game theory. These are regular topics of discussion on LW. So calling it "obvious" begs the whole question.
