Not at all. If I do something that doesn't accomplish my goals, that's generally labeled as something like "stupid." If I decide that I want to kill lots of people, the problem with that is ethical even if my goals are fulfilled by it. Most intuitions don't see these as the same thing.
How does this contradict my notion of ethics? You would surely use what you know about the ethical properties of manslaughter to reach the goal and spare yourself the trouble: for instance, by manipulating public opinion in your favor by faking an attack on you by the target people. Or even by reconsidering whether the goal is worthwhile at all.
> Because ethics is essentially simplified applied modeling of other beings.
This seems like a very non-standard notion of what constitutes ethics. Can you expand on how this captures the usual intuitions about what the concerns of ethics are?
> This seems like a very non-standard notion of what constitutes ethics. Can you expand on how this captures the usual intuitions about what the concerns of ethics are?
The concern of ethics, for a given agent, is to facilitate effective interaction with others, no?
> It does not, as the other person is parseable as multiple ones as well.
That's not obvious. What if one entity is parseable in such a way and another one isn't?
> the corresponding ethics will be constructed from special cases of the entity's behaviour, as it was done before.
Why?
> I still don't get how the anthropic principle cares about the labels we assign to stuff.
Right. It shouldn't. So situations like this one may be useful intuition pumps.
> That's not obvious. What if one entity is parseable in such a way and another one isn't?
Every human produces many different kinds of behaviour, so each can be modeled as a pack of specialized agents.
> Why?
Because ethics is essentially simplified applied modeling of other beings.
Well, if we put our dual Manfreds in one trolley car, and one person in another, then the ethics might care.
More substantially, once uploads start being a thing, the ethics of these situations will matter.
The other contexts where these issues matter are anthropics, expectations, and trying to understand what the implications of Many-Worlds are. In that case, making the separation completely classical may be helpful: when one cannot understand a complicated situation, looking at a simpler one can help.
It does not, as the other person is parseable as multiple ones as well.
Uploading is not a thing at the moment, and once it is viable, the corresponding ethics will be constructed from special cases of the entity's behaviour, as it was done before.
I still don't get how the anthropic principle cares about the labels we assign to stuff.
How many people am I?
Does it make any difference?
The key property of me in this case is the anthropic one: 'my' existence allows me to infer things about the causes of my existence.
It does not, as you don't obtain any world properties that 'your' existence should reflect under such a definition.
Video games are nicely described by <https://en.wikipedia.org/wiki/Reinforcement#Schedules>. I'd try to make the disgusting parts of gaming, like the resources spent per unit of utility achieved and the methods by which they capture my attention, the most salient when I go over my decision tree to do something.
Regarding "learning to want", it is a matter of constructing and applying a model of your motivation (like I did ↑, but subjectively tailored).
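The schedule from the linked article that game reward loops are most often likened to is the variable-ratio one: each action pays off with some fixed probability, so rewards arrive after an unpredictable number of actions. A minimal sketch of that mechanic (the function name and parameters are illustrative, not from the comment or the article):

```python
import random

def variable_ratio_rewards(n_actions, p_reward, seed=0):
    """Simulate a variable-ratio schedule: each action is rewarded
    independently with probability p_reward. Returns the indices of
    the rewarded actions."""
    rng = random.Random(seed)
    return [i for i in range(n_actions) if rng.random() < p_reward]

rewards = variable_ratio_rewards(1000, 0.1)
# The mean gap between rewards approaches 1/p_reward (here ~10 actions),
# but individual gaps vary widely - that unpredictability is what makes
# the schedule so resistant to extinction, i.e. hard to stop playing.
gaps = [b - a for a, b in zip(rewards, rewards[1:])]
print(len(rewards), sum(gaps) / len(gaps))
```

Seeing the expected cost per reward laid out like this (roughly 1/p actions spent per payoff) is one way to make the "resources spent per unit of utility" salient.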
// hey wtf lw.c doesn't respect rfc1738
OP, or at least the link in it, should be removed promptly so as not to provide the troll with any free SEO.
How would you build any other skill or habit? I don't really understand how the answer to your specific question would be different.
Please explain how, say, a trolley problem fits into your framework.
The correct choice is to check whom you want killed and whom you want saved more, and what, for instance, the social consequences of your actions are. I don't understand your question, it seems.