Clearly I should have asked about actions rather than people. But the Babyeaters were not ignorant that they were causing great pain and emotional distress. They may not have known how long it continued, but none of the human characters IIRC suggested this information might change their minds. That's because those aliens had a genetic tendency towards non-human preferences, and the (working) society they built strongly reinforced it.
What they did was clearly wrong... but, at the same time, they did not know it, and that has relevance.
Consider: you are given a device with a single button. You push the button and a hamburger appears. This is repeatable; every time you push the button, a hamburger appears. To the best of your knowledge, this is the only effect of pushing the button. Pushing the button therefore does not make you an immoral person; pushing the button several times to produce enough hamburgers to feed the hungry would, in fact, be the action of a moral person.
The above paragraph holds even if the device also causes lightning to strike a different person in China every time you press the button. (Although, in this case, creating the device was presumably an immoral act).
So, back to the Babyeaters: some of their actions were immoral, but they themselves were not, due to their ignorance.
we want a rigorous, formal explanation of exactly how, when, and why you should or should not stick to your precommitment
Well, if we're designing an AI now, then we have the capability to make a binding precommitment, simply by writing code. And we are still in a position where we can hope for the coin to come down heads. So yes, in that privileged position, we should bind the AI to pay up.
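A toy sketch of that, with everything here invented for illustration: the pay-on-tails policy is fixed when the agent is built, so by the time the coin lands there is no decision left to make.

```python
# Hypothetical illustration of a binding precommitment made "by writing
# code": the policy is hard-coded at construction time, before the flip.

class PrecommittedAgent:
    def __init__(self):
        self.pays_on_tails = True  # fixed now, while heads is still possible

    def respond(self, coin: str) -> str:
        if coin == "tails":
            return "pay $100" if self.pays_on_tails else "refuse"
        return "collect winnings"  # heads: Omega pays, since we would have paid

agent = PrecommittedAgent()
print(agent.respond("tails"))  # pay $100 -- the commitment binds
```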
However, to the question as stated, "is the decision to give up $100 when you have no real benefit from it, only counterfactual benefit, an example of winning?" I would still answer, "No, you don't achieve your goals/utility by paying up." We're specifically told that the coin has already been flipped. Losing $100 has negative utility, and positive utility isn't on the table.
Alternatively, since it's asking specifically about the decision, I would answer: if you don't make the decision until after the coin comes down tails, then paying is the wrong decision. Only if you're deciding in advance (while you can still hope for heads) can a decision to pay have the best expected value.
Even if deciding in advance, though, it's still not a guaranteed win, but rather a gamble. So I don't see any inconsistency in saying, on the one hand, "You should make a binding precommitment to pay", and on the other hand, "If the coin has already come down tails without a precommitment, you shouldn't pay."
Suppose there were a lottery where the expected value of a ticket was actually positive, and someone offered to sell you their ticket at cost price. Buying it would make sense in advance; but if you declined, the winners were then announced, and that ticket didn't win, buying it would no longer make sense.
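To make the expected-value comparison concrete, here's a minimal sketch. The payoff numbers ($10,000 on heads, $100 demanded on tails) are the ones usually attached to this thought experiment; the thread itself only mentions the $100, so treat them as assumptions.

```python
# Expected value at the two decision points, under assumed payoffs.

HEADS_PRIZE = 10_000  # assumed prize on heads (paid only if you'd pay on tails)
TAILS_COST = 100      # what you're asked to hand over on tails
P_HEADS = 0.5         # fair coin

# Deciding in advance, while heads is still possible:
ev_precommit = P_HEADS * HEADS_PRIZE - (1 - P_HEADS) * TAILS_COST
print("EV of precommitting:", ev_precommit)  # 4950.0

# Deciding only after the coin has already come down tails:
ev_pay_after_tails = -TAILS_COST  # heads is off the table; paying just loses money
print("EV of paying once tails is known:", ev_pay_after_tails)  # -100
```

The sign flip is the whole point: the same gamble that's worth taking before the flip is a pure loss once tails is known.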
Were the Babyeaters immoral before meeting humans?
If not, what would you like to call the thing we actually care about?
"Morals" and "goals" are very different things. I might make it a goal to (say) steal an apple from a shop; this would be an example of an immoral goal. Or I might make a goal to (say) give some money to charity; this would be a moral goal. Or I might make a goal to buy a book; this would (usually) be a goal with little if any moral weight one way or another.
Morality cannot be the same as terminal goals, because a terminal goal can also be immoral, and someone can pursue a terminal goal while knowing it's immoral.
AI morals are not a category error; if an AI deliberately kills someone, then that carries the same moral weight as if a person deliberately kills someone.
I see morality as fundamentally a way of dealing with conflicts between values/goals, so I can't answer questions posed in terms of "our values", because I don't know whether that means a set of identical values, a set of non-identical but non-conflicting values, or a set of conflicting values. One implication of that view is that some values/goals are automatically morally irrelevant, since they can be satisfied without potential conflict. Another implication is that my view approximates to "morality is society's rules", but without the dismissive implication: if a society has gone through a process of formulating rules that are effective at reducing conflict, then there is a non-vacuous sense in which that society's morality is its rules. Also, AI and alien morality are perfectly feasible, and possibly even necessary.
When you see the word "morals" used without further clarification, do you take it to mean something different from "values" or "terminal goals"?
Depends on context.
When I use it, it means something kind of like "what we want to happen." More precisely, I treat moral principles as sort keys for determining the preference order of possible worlds. When I say that X is morally superior to Y, I mean that I prefer worlds with more X in them (all else being equal) to worlds with more Y in them.
I know other people who, when they use it, mean something kind of like that, if not quite so crisply, and I understand them that way.
I know people who, when they use it, mean something more like "complying with the rules tagged 'moral' in the social structure I'm embedded in." I know people who, when they use it, mean something more like "complying with the rules implicit in the nonsocial structure of the world." In both cases, I try to understand by it what I expect them to mean.
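The "sort keys" reading above can be made literal in code. A toy sketch, with the worlds and their attributes invented purely for illustration:

```python
# Moral principles as a sort key: a function that induces a preference
# order over possible worlds. The attributes and weights are made up.

worlds = [
    {"name": "A", "kindness": 3, "suffering": 5},
    {"name": "B", "kindness": 7, "suffering": 2},
    {"name": "C", "kindness": 7, "suffering": 4},
]

def moral_key(world):
    # Prefer more kindness; all else being equal, prefer less suffering.
    return (-world["kindness"], world["suffering"])

for w in sorted(worlds, key=moral_key):
    print(w["name"])  # B, C, A -- most preferred world first
```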
My main source is the lecture series I linked above. The Newtonian worldview is presented in the lecture that follows the one I linked.
This "imperial role" business is arguably a rival form of the idea, though Newton did in fact work for the Crown.
At the time, the Crown was the head of the church in England.
Why do you think Newton's focus on new observations/experiments came from Cartesian ontology, when Newton doesn't wholly buy that ontology?
I'm saying the popes inadvertently created a separate concept of secular aspirations - often opposed to religious authorities, though not to God if he turns out to exist. This "imperial role" business is arguably a rival form of the idea, though Newton did in fact work for the Crown.
But Newton didn't propose a religious method for science, which is my point. Did you think I meant that the popes turned Dante atheist? What they did was give him a desire for a secular ruler and an "almost messianic sense of the imperial role".
That sort of thinking may have given rise to Descartes' science fiction, so to speak - secular aspirations which go beyond even a New Order of the Ages. So there are a few possible prerequisites for a scientific method. As for someone else writing one down, maybe; what we observe is that the best early formulation came from a brilliant freak.
If anyone is still interested, I've since spun this into a startup called Guesstimate.
https://github.com/getguesstimate/guesstimate-app
http://effective-altruism.com/ea/rv/guesstimate_an_app_for_making_decisions_with/
Asking on StackExchange turns up a variety of people before Newton: http://hsm.stackexchange.com/questions/5275/was-isacc-newton-the-first-person-to-articulate-the-scientific-method-in-europe/5277#5277
thereby creating a clearer distinction between religious and secular.
Given that Newton was a person who cared about the religious, that would be a bad example. He spent a lot of time on biblical chronology.
You claimed that science wouldn't have been invented at the time without Newton. It's historically no accident that Leibniz discovered calculus independently of Newton. The interest in numerical reasoning was already there.
To get back to the claim, following the scientific method and explicitly writing it down are two different activities. It takes time to move from the implicit to the explicit.