Comment author: Furcas 24 September 2016 03:39:19PM 17 points [-]

Donated $500!

Comment author: TheAncientGeek 18 October 2016 01:04:49PM *  3 points [-]

I am not taking charity to be a central example of ethics.

Charity, societal improvement, etc. are not centrally ethical, because the dimension of obligation is missing. It is obligatory to refrain from murder, but supererogatory to give to charity. Charity is not completely divorced from ethics, because gaining better outcomes is the obvious flipside of avoiding worse outcomes, but it does not have every component of that which is centrally ethical.

Not all value is morally relevant. Some preferences can be satisfied without impacting anybody else, preferences for flavours of ice cream being the classic example, and these are morally irrelevant. On the other hand, my preference for loud music is likely to impinge on my neighbour's preference for a good night's sleep: those preferences have a potential for conflict.

Charity and altruism are part of ethics, but not central to ethics. A peaceful and prosperous society is in a position to consider how best to allocate its spare resources (and utilitarianism is helpful here, without being a full theory of ethics), but peace and prosperity are themselves the outcome of a functioning ethics, not things that can be taken for granted. Someone who treats charity as the outstanding issue in ethics is, as it were, looking at the visible 10% of the iceberg while ignoring the 90% that supports it.

If you mean conflict between individuals' own values,

I mean destructive conflict.

Consider two stone age tribes. When a hunter of tribe A returns with a deer, everyone falls on it, trying to grab as much as possible, and they end up fighting and killing each other. When the same thing happens in tribe B, they apportion the kill in an orderly fashion according to a predefined rule. All other things being equal, tribe B will do better than tribe A: they are in possession of a useful piece of social technology.

Comment author: Zack_M_Davis 04 October 2016 06:55:36PM 3 points [-]

Again, people sometimes use idiomatic English to describe subjective states of high confidence that do not literally correspond to probabilities greater than 0.999! (Why that specific threshold, anyway?)

You know, I take it back; I actually can see how this might be confusing.

Comment author: Gunnar_Zarncke 14 October 2016 10:56:15PM 2 points [-]

I'm not sure this has the best visibility here in Main. I only noticed it just now because I haven't looked at Main in ages. And it wasn't featured in Discussion, or was it?

Comment author: hairyfigment 14 October 2016 10:37:41AM 2 points [-]

Clearly I should have asked about actions rather than people. But the Babyeaters were not ignorant that they were causing great pain and emotional distress. They may not have known how long it continued, but none of the human characters IIRC suggested this information might change their minds. That's because those aliens had a genetic tendency toward non-human preferences, and the (working) society they built strongly reinforced it.

Comment author: CCC 13 October 2016 01:49:46PM 2 points [-]

"Morals" and "goals" are very different things. I might make it a goal to (say) steal an apple from a shop; this would be an example of an immoral goal. Or I might make a goal to (say) give some money to charity; this would be a moral goal. Or I might make a goal to buy a book; this would (usually) be a goal with little if any moral weight one way or another.

Morality cannot be the same as terminal goals, because a terminal goal can also be immoral, and someone can pursue a terminal goal while knowing it's immoral.

AI morals are not a category error; if an AI deliberately kills someone, then that carries the same moral weight as if a person deliberately kills someone.

Comment author: TheOtherDave 13 October 2016 04:05:12AM 2 points [-]

When you see the word "morals" used without further clarification, do you take it to mean something different from "values" or "terminal goals"?

Depends on context.

When I use it, it means something kind of like "what we want to happen." More precisely, I treat moral principles as sort keys for determining the preference order of possible worlds. When I say that X is morally superior to Y, I mean that I prefer worlds with more X in them (all else being equal) to worlds with more Y in them.

I know other people who, when they use it, mean something kind of like that, if not quite so crisply, and I understand them that way.

I know people who, when they use it, mean something more like "complying with the rules tagged 'moral' in the social structure I'm embedded in." I know people who, when they use it, mean something more like "complying with the rules implicit in the nonsocial structure of the world." In both cases, I try to understand by it what I expect them to mean.

Comment author: ChristianKl 10 October 2016 09:17:40AM 1 point [-]

thereby creating a clearer distinction between religious and secular.

Given that Newton was a person who cared deeply about religion, he would be a bad example. He spent a lot of time on biblical chronology.

You claimed that science wouldn't have been invented at the time without Newton. It's historically no accident that Leibniz discovered calculus independently from Newton. The interest in numerical reasoning was already there.

To get back to the claim, following the scientific method and explicitly writing it down are two different activities. It takes time to move from the implicit to the explicit.

Comment author: So8res 04 October 2016 08:41:49PM 2 points [-]

Huh, thanks for the heads up. If you use an ad-blocker, try pausing that and refreshing. Meanwhile, I'll have someone look into it.

Comment author: Good_Burning_Plastic 29 September 2016 08:03:41AM 2 points [-]

Computing can't harm the environment in any way

Well...

Comment author: Vaniver 27 September 2016 09:38:12PM 2 points [-]

There shouldn't be any conflicts between VoI and Bayesian reasoning; I thought of all of my examples as Bayesian.

From the perspective of avoiding wireheading, an agent should be incentivized to gain information even when this information decreases its (subjective) "value of decision situation". For example, consider a Bernoulli 2-armed bandit:

I don't think that example describes the situation you're talking about. Remember that VoI is computed in a forward-looking fashion; when one has a (1, 1) beta distribution over the arm, one thinks it is equally likely that the true propensity of the arm is above .5 and below .5.

The VoI comes into that framework by being the piece that agitates for exploration. If you've pulled arm1 seven times and gotten four heads and three tails, and haven't pulled arm2 yet, the expected value of pulling arm1 is higher than that of pulling arm2, but there's a fairly substantial chance that arm2 has a higher propensity than arm1. Heuristics that say to do something like pull the lever with the higher 95th-percentile propensity bake in the VoI from pulling arms with lower means but higher variances.
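That exploration heuristic can be sketched numerically (a hypothetical illustration, not code from the comment; the function names and the Monte Carlo percentile estimate are my own choices, using the stdlib's Beta sampler):

```python
import random

def beta_percentile(a, b, q=0.95, n=20000):
    """Estimate the q-th percentile of a Beta(a, b) posterior by sampling."""
    samples = sorted(random.betavariate(a, b) for _ in range(n))
    return samples[int(q * n)]

random.seed(0)

# arm1: four heads, three tails observed -> Beta(1+4, 1+3) posterior
# arm2: never pulled -> uniform Beta(1, 1) prior
arm1_mean = 5 / (5 + 4)          # ~0.556
arm2_mean = 1 / (1 + 1)          # 0.5
arm1_score = beta_percentile(5, 4)   # ~0.80
arm2_score = beta_percentile(1, 1)   # ~0.95

# arm1 has the higher posterior mean, but arm2's wider posterior gives it
# the higher 95th-percentile score, so the heuristic explores arm2.
```

The exploration credit here is exactly the gap between a posterior's mean and its upper percentile, which is larger for the never-pulled arm.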


If, from a forward-looking perspective, one would decrease one's subjective value of the decision situation by gaining information, then one shouldn't gain that information. That is, it's a bad idea to pay for a test if you don't expect the additional value to cover the cost of the test. (Maybe you'll continue to pull arm1 regardless of the results of pulling arm2, as in the case where arm1 has delivered heads 7 times in a row. Then switching means taking a hit for nothing.)

One thing that's important to remember here is conservation of expected evidence--if I believe now that running an experiment will lead me to believe that arm1 has a propensity of .1 and arm2 has a propensity of .2, then I should already believe those are the propensities of those arms, and so there's no subjective loss of well-being.
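Conservation of expected evidence can be checked directly for the beta-Bernoulli setup above (a minimal sketch of my own, not from the comment; exact rationals avoid floating-point noise):

```python
from fractions import Fraction

def expected_posterior_mean(a, b):
    """Expected posterior mean of a Beta(a, b) arm after one more pull.

    With probability a/(a+b) we see heads and update to Beta(a+1, b);
    otherwise we see tails and update to Beta(a, b+1).
    """
    prior_mean = Fraction(a, a + b)
    mean_if_heads = Fraction(a + 1, a + b + 1)
    mean_if_tails = Fraction(a, a + b + 1)
    return prior_mean * mean_if_heads + (1 - prior_mean) * mean_if_tails

# The expectation of the updated belief equals the current belief:
assert expected_posterior_mean(5, 4) == Fraction(5, 9)  # arm after 4 heads, 3 tails
assert expected_posterior_mean(1, 1) == Fraction(1, 2)  # never-pulled arm
```

So no experiment can be expected, in advance, to shift the posterior mean; only the realized outcome moves it.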

Comment author: So8res 26 September 2016 06:39:53PM 2 points [-]

Thanks!

Comment author: gucciCharles 26 September 2016 05:01:11AM 2 points [-]

She gives a pattern of feedback that makes the students practice well? In the sense that she gives positive feedback, she functions more as a motivator than as a teacher. Her skill is teaching; it's only happenstance that she teaches music. Had she taught shoe polishing or finger painting, she would have produced the best shoe polishers and the most skilled finger painters.

Perhaps she doesn't have many complex skills but has strong fundamentals (think Tim Duncan of the NBA Spurs). She might make her students practice the fundamentals which will allow them to do more complex work as they get older.

Finally, she might have knowledge more advanced than her skill. She might not have the hand-eye coordination or the processing speed to play sophisticated music, but she might know how it's done. Imagine a 5-foot-tall Jewish guy who loves basketball. He's not gonna make the NBA. It's simply not gonna happen. However, he might understand the game better than many NBA players, and he might be the best basketball coach in the world even though his athleticism (and hence his basketball-playing skill) is less than that of NBA players. Likewise, the teacher might have a strong theoretical understanding without the ability to put it into practice.

Comment author: So8res 19 October 2016 11:21:01PM 1 point [-]

Fixed, thanks.

Comment author: DanArmak 18 October 2016 07:39:39PM 1 point [-]

Thank you, your point is well taken.

Comment author: TheAncientGeek 18 October 2016 08:33:02AM *  1 point [-]

The rule as usually understood is that "fewer" relates to discrete quantities (fewer apples) and "less" to continuous quantities (less milk). It's possibly rather artificial, and it noticeably lacks a counterpart for "more".

Comment author: username2 14 October 2016 12:04:43AM *  1 point [-]

The survey assumed a consequentialist, utilitarian moral framework. My moral philosophy is neither, so there was no adequate answer.

Comment author: TheAncientGeek 13 October 2016 09:03:47PM 1 point [-]

If I don't use "moral" as a rubber stamp for any and all human values, I don't run into CCC's problem of labeling theft and murder as moral because some people value them. That's the upside. What's the downside?

Comment author: TheAncientGeek 13 October 2016 01:32:08PM *  1 point [-]

I see morality as fundamentally a way of dealing with conflicts between values/goals, so I can't answer questions posed in terms of "our values", because I don't know whether that means a set of identical values, a set of non-identical but non-conflicting values, or a set of conflicting values. One implication of that view is that some values/goals are automatically morally irrelevant, since they can be satisfied without potential conflict. Another implication is that my view approximates to "morality is society's rules", but without the dismissive implication: if a society has gone through a process of formulating rules that are effective at reducing conflict, then there is a non-vacuous sense in which that society's morality is its rules. Also, AI and alien morality are perfectly feasible, and possibly even necessary.

Comment author: DanArmak 12 October 2016 02:02:14PM *  1 point [-]

I've been told that people use the word "morals" to mean different things. Please answer this poll or add comments to help me understand better.

When you see the word "morals" used without further clarification, do you take it to mean something different from "values" or "terminal goals"?

