Comment author: gjm 12 January 2016 06:22:48PM 0 points

Could well be. Does that have anything to do with pattern-matching AI risk to SF, though?

Comment author: RaelwayScot 12 January 2016 07:27:22PM 1 point

I was just speaking of weaknesses of the paperclip maximizer thought experiment. I've seen this misunderstanding in at least 4 out of 10 cases where the thought experiment was brought up.

Comment author: gjm 12 January 2016 12:42:08PM 5 points

Major AI risk is science fiction -- that is, it's the kind of thing science-fiction stories get written about, and it isn't something we have experience of yet outside fiction. I don't see how any thought experiment that seriously engages with the issue could not pattern-match to science fiction.

Comment author: RaelwayScot 12 January 2016 05:46:30PM *  0 points

I think many people intuitively distrust the idea that an AI could be intelligent enough to transform matter into paperclips in creative ways, yet 'not intelligent enough' to understand its goals in a human and cultural context (i.e. to satisfy the needs of the business owners of the paperclip factory). This is often due to the confusion that the paperclip maximizer would get its goal function from parsing the sentence "make paperclips", rather than from a preprogrammed reward function, for example a CNN that is trained to map the number of paperclips in images to a scalar reward.
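To make that last sentence concrete, here is a minimal sketch of what such a preprogrammed reward function could look like, assuming a PyTorch-style setup (the class name, layers, and shapes are illustrative assumptions, not anyone's actual proposal):

```python
# Illustrative sketch only: a CNN reward model of the kind described above,
# mapping an image to a scalar reward (an estimate of the paperclip count).
import torch
import torch.nn as nn

class PaperclipRewardNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),  # pool to one value per channel
        )
        self.head = nn.Linear(32, 1)  # scalar output: estimated paperclips

    def forward(self, image):
        x = self.features(image).flatten(1)
        return self.head(x)  # shape (batch, 1)

reward_model = PaperclipRewardNet()
reward = reward_model(torch.randn(1, 3, 64, 64))  # dummy camera frame
```

The agent then optimizes whatever this network outputs; nothing in the loop ever parses, or cares about, the sentence "make paperclips".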

Comment author: RaelwayScot 06 January 2016 12:49:18PM 0 points

I think the problem here is the way the utility function is chosen. Utilitarianism is essentially a formalization of reward signals in our heads. It is a heuristic way of quantifying what we expect a healthy human (one that can grow up and survive in a typical human environment and has an accurate model of reality) to want. All of this converges only roughly to a common utility, because we have evolved to have the same needs, which are necessarily pro-life and pro-social (since otherwise our species wouldn't be present today).

Utilitarianism crudely abstracts from the meanings in our heads that we recognize as common goals and assigns numbers to them. We have to be careful what we assign numbers to in order to get results we want in all corner cases. I think hooking up the utility meter to neurons that detect minor inconveniences is not a smart way of achieving what we collectively want, because it might contradict our pro-life and pro-social needs. Only when the inconveniences accumulate individually, so that they condense into states of fear/anxiety or noticeably shorten human life, do they affect human goals and make sense to include in utility considerations (which, again, are only a crude approximation of what we have evolved to want).
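As a toy illustration of that thresholding idea (a sketch only; the threshold, weight, and units are made up for this comment):

```python
# Toy sketch: minor inconveniences contribute no disutility on their own;
# only once their accumulated magnitude crosses a threshold (read: condenses
# into fear/anxiety or shortens life) do they enter the utility calculation.
def toy_disutility(inconveniences, threshold=10.0, anxiety_weight=5.0):
    total = sum(inconveniences)
    if total < threshold:
        return 0.0  # below threshold: no effect on human goals
    return anxiety_weight * (total - threshold)

print(toy_disutility([0.1] * 20))   # 0.0 -- many tiny annoyances, no harm yet
print(toy_disutility([0.1] * 200))  # ~50 -- accumulated into real disutility
```

Hooking the utility meter directly to each inconvenience would instead sum every single 0.1, which is exactly the failure mode described above.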

Comment author: RaelwayScot 06 January 2016 11:17:19AM 5 points

Why does E. Yudkowsky voice such strong priors, e.g. with respect to the laws of physics (the many-worlds interpretation), when much weaker priors seem sufficient for most of his beliefs (e.g. weak computationalism/computational monism) and wouldn't make him so vulnerable? (By 'vulnerable' I mean that his work often gets ripped apart as cultish pseudoscience.)

Comment author: RaelwayScot 03 January 2016 06:46:40PM 1 point

I would love to see some hard data about the correlation between public interest in science and its degree of 'cult status' vs. 'open science'.

Comment author: ChristianKl 27 December 2015 04:00:50PM *  1 point

"Only a meme" doesn't negate that it's about something real and that there can be resonable arguments why some memes are better than others.

Comment author: RaelwayScot 27 December 2015 04:17:46PM 0 points

I mean "only a meme" in the sense, that morality is not absolute, but an individual choice. Of course, there can be arguments why some memes are better than others, that happens during the act of individuals convincing each other of their preferences.

Comment author: ChristianKl 27 December 2015 11:25:47AM *  2 points

Basically your argument is: "I can't think of a way to justify morality besides saying that it's my own preferred state, therefore nobody can come up with an argument to justify morality."

Comment author: RaelwayScot 27 December 2015 03:58:06PM 0 points

Is it? I think the act of convincing other people of your preferred state of the world is exactly what justifying morality is. But that action policy is only a meme, as you said, which is individually chosen based on many criteria (including aesthetics, peer pressure, and consistency).

Comment author: ChristianKl 25 December 2015 10:23:36PM 0 points

What do you mean by 'should'?

Comment author: RaelwayScot 27 December 2015 10:52:51AM *  0 points

Moral philosophy is a huge topic, and its discourse is not dominated by looking at DNA.

Everyone can choose their preferred state then, at least to the extent that it is not indoctrinated or biologically determined. It is rational to invest energy into maintaining or achieving this state (because the state presumably provides you with a steady source of reward), which might involve convincing others of your preferred state or preventing them from threatening it (e.g. by putting them in jail). There is likely an absolute truth (to the extent that physics is consistent from our point of view), but no absolute morality (because it's all memes in an undirected process). Terrorists do nothing wrong from their point of view, but from mine they threaten my preferred state, so I will try to prevent terrorism. We may seem lucky that many preferred states converge to the same goals, which are even fairly sustainable, but that is just an evolutionary necessity, and perhaps mostly a result of empathy and the will to survive (otherwise our species wouldn't have survived in paleolithic groups of hunters and gatherers).

Comment author: ChristianKl 26 December 2015 11:21:03PM 0 points

Our culture is just as baked into us as our DNA. It's all memes.

Comment author: RaelwayScot 26 December 2015 11:44:15PM 0 points

What are the implications of that for how we decide what the right things to do are?

Comment author: ChristianKl 26 December 2015 09:51:41PM 0 points

Why do you think a biological basis has something to do with the answer?

Comment author: RaelwayScot 26 December 2015 11:18:45PM *  0 points

Because then it would argue from features that are built into us. If we can prove the existence of these features with high certainty, then they could perhaps serve as guidance for our decisions.

On the other hand, it stands to reason that evolution does not create such goals, since it is an undirected process. Our actions are unrestricted in this regard, and we must only bear the consequences of the system that our species has come up with. What is good is thus decided by consensus. Still, the values we have converged to are shaped by the way we have evolved to behave (e.g. empathy and pain avoidance).
