Comment author: TheAncientGeek 01 September 2014 08:11:37PM 1 point [-]

Consciousness is subjective, so that approach misses the mark.

Comment author: MrCogmor 02 September 2014 12:30:52AM 1 point [-]

That was my point. Philosophy uses subjective words that confuse meanings. Once you translate a question into one of its objective interpretations, it becomes simple. A good example is the concept of free will.

Comment author: MrCogmor 31 August 2014 11:05:37PM 1 point [-]

Present the complicated problem and then break it down into understandable parts. Much of philosophy is basic but not widely understood, because it is obfuscated by multiple meanings and ends up in arguments over definitions such as "What is consciousness?". It is helpful to disambiguate these questions by choosing an objective interpretation and then answering that. For example, "What is consciousness?" can be interpreted as "What makes a creature aware of its environment?", "What process produces thoughts?", or "What process produces sensation?"

Comment author: MrCogmor 14 July 2014 09:38:50AM 2 points [-]

In the second paragraph of the quote the author ignores the whole point of replication efforts. We know that scientific studies may suffer from methodological errors; the whole point of replication studies is to identify them. If the studies disagree, then you know there is an uncontrolled variable or methodological mistake in one or both of them, and further studies and the credibility of the experimenters are then used to determine which result is more likely to be true. If the independent studies agree, that is evidence that they are both correct.

The author also argues that replication efforts are biased because they are mostly made by people who disagree with the original study. That seems like a valid point.

Specifying designs in advance is a good idea, though not original.

Comment author: MrCogmor 14 July 2014 09:24:31AM 1 point [-]

The Mike story can be considered an example of the halo effect if you assume that Mike can interpret the abstruse language better than Jessica can because of his morality. On the other hand, if Jessica had interpreted it herself, she would probably have gotten the same wrong impression of the law as Mike.

Or it could be that Mike interpreted the law correctly but has a few quirks in his morality that you don't. In that case it is not an instance of the halo effect but rather a generalization or heuristic failing in a specific instance.

TVTropes' Broken Pedestal page has some good examples of the halo effect.

Comment author: blacktrance 22 April 2014 04:07:21PM 0 points [-]

I don't think that's a counterexample. If I had a billionaire uncle who willed me his fortune, I could say something like "I like money but I don't want to commit murder" - and then I wouldn't commit murder. Liking the taste of meat and still abstaining from it because you think eating it is evil is similar.

Comment author: MrCogmor 23 April 2014 10:05:36AM *  0 points [-]

The point of it wasn't to say that people like meat. The point was that people experience or expect enough akrasia about giving up meat that they search Google and ask for help on question sites.

I used to believe, as you do, that if you believe something is morally good then you will do it. That axiom used to be a cornerstone of my model of morality. There was actually a stage in my life where my moral superiority provided most of my self-esteem and disobeying it was unthinkable. When I encountered belief in belief I couldn't make sense of it at all. I was further confused when people didn't admit to it after I explained how they were being inconsistent.

But besides that, I don't think humans evolved to have that kind of consistency. I believe that humans act mostly according to reinforcement. Morality does provide a form of reinforcement, in the sense that you feel good when you act morally and worse otherwise; however, given a sufficiently strong external motivator, such as extreme torture, you would eventually give in, perhaps rationalizing the decision.

I would suggest that the people who have commented here read this post if they haven't yet, because there have been two arguments over definitions here already (first over "consistency" and then over "genuine belief"), and there is a reason that is frowned upon. You should also see Belief in belief to better understand how people can act contrary to their stated morals and behave in contradictory ways. (It comes up a lot with religious people, who don't try to be as moral as they can be despite viewing it as good.)

Comment author: blacktrance 09 April 2014 06:53:28PM 0 points [-]

I don't think akrasia can apply to the area traditionally considered to be morality. If you believe doing something would be evil, that feels different from it being merely suboptimal and harmful to yourself. For example, you like playing TF2, even though it may be suboptimal to play it at times; but even though it's a habit, you'd instantly stop if, say, the player avatars in TF2 were real beings that experienced terror, pain, and suffering in the course of gameplay. It stands to reason that eating meat would be the same.

Comment author: MrCogmor 22 April 2014 10:24:12AM *  0 points [-]

I searched for "I want to be vegan but love meat". It was in Google autocomplete and has plenty of results, including this Yahoo Answers page, which explicitly mentions that the poster wants to be a vegetarian for ethical reasons.

Comment author: Lumifer 08 April 2014 07:00:13PM -2 points [-]

First, inconsistency is not the same thing as contradiction. If my morals involve consulting a random-number generator at some point, the results will be inconsistent in the sense that I will behave differently in the same situation. That does not imply that some elements of my morals contradict other elements.

Second, I still don't know what "wrong" means here.

Comment author: MrCogmor 09 April 2014 04:31:49AM *  5 points [-]

I think you are confusing logical and behavioral consistency here. The OP meant inconsistent in the logical sense, while you are thinking of behavioral consistency. (There is also the material sense of consistency, which refers to the viscosity or texture of a substance, but that sense isn't relevant here.)

Comment author: Ander 18 January 2014 12:18:53AM *  0 points [-]

I think that your position on destructive uploads doesn't make sense, and you did a great job of showing why with your thought experiment.

The fact that you can transition yourself to the machine over time, still consider it 'you', and can't actually tell at what specific point you crossed the line into being a 'machine' means that your original state (human brain) and final state (upload) are essentially the same.

Comment author: MrCogmor 18 January 2014 02:37:06AM 0 points [-]

Error isn't implying that the final state is different, just that the destructive copy process is a form of death and the wired-brain process isn't.

I get where he is coming from: a copy is distinct from the original and can have different experiences. In the destructive-copy scenario a person is killed and a person is born; in the wired-brain scenario the person is not copied, they merely change over time, and nobody dies.

My view is that if I die to make an upload (which is identical to me except for greater intelligence and other benefits), then the gain outweighs the loss.

Comment author: mwengler 17 January 2014 09:28:42PM 3 points [-]

If a human were artificial, would it be considered FAI or UAI? I'm guessing UAI, because I don't think anything like the process of CEV has been followed to set humans' values at birth.

If a human would be UAI if artificial, why are we less worried about billions of humans than we are about 1 UAI? What is it about being artificial that makes unfriendliness so scary? What is it about being natural that makes us so blind to the possible dangers of unfriendliness?

Is it that we don't think humans can self-modify? The way tech is going, it seems to me that it's at least a horse race (approximately 50:50 probability) as to which will FOOM first: the ability of humans to enhance themselves vs. the ability of an AI to modify itself.

Should we be more worried about UNI, unfriendly natural intelligence? That is, are we optimally dividing our efforts between avoiding UAI and avoiding UNI, given the relative probability-weighted dangers each presents?

Comment author: MrCogmor 18 January 2014 01:23:27AM 5 points [-]

Humans would be considered UFAI if they were digitised. Merely consider a button that picks a random human and gives them absolute control. I wouldn't press that button, because there is a significant chance that such a person would have goals that significantly differ from my own.

Comment author: Roxolan 17 January 2014 04:27:35PM 5 points [-]

(Reposted from the LW facebook group)

The next LW Brussels meetup will be about morality, and I want to have a bunch of moral dilemmas prepared as conversation-starters. And I mean moral dilemmas that you can't solve with one easy utilitarian calculation. Some in the local community have had little exposure to LW articles, so I'll definitely mention standard trolley problems and "torture vs dust specks", but I'm curious if you have more original ones.

It's fine if some of them use words that should really be tabooed. The discussion will double as a taboo exercise.

A lot of what I came up with revolves around the boundaries of sentience. I.e., on a scale that goes from self-replicating amino acids to transhumans (and includes animals, babies, the heavily mentally handicapped...), where do you place things like "I have a moral responsibility to uplift those to normal human intelligence once the technology is available" or "it's fine if I kill/eat/torture those", and how much of one kind of life would you be willing to trade off for a superior kind? Do I have a moral responsibility to uplift babies?

Trading off lives against things whose value is harder to put on the same scale is also interesting. E.g., "Will you save this person, or this priceless cultural artifact, or this species near extinction?" (Yes, I've seen the SMBC.)

Comment author: MrCogmor 18 January 2014 01:14:14AM 3 points [-]

I thought of this moral dilemma.

There are two options.

  1. You experience a significant amount of pain; 5 minutes later you completely forget about the experience, as if you were never in pain at all.
  2. You experience a slightly smaller amount of pain than in option 1, but you don't forget it.

Which one would you choose?
