
Comment author: Yosarian2 09 September 2017 11:16:52AM *  1 point [-]

He might not be wrong about his beliefs about himself. Just because a person actually would prefer X to Y, it doesn't mean he is always going to rationally act in a way that will result in X. In a lot of ways we are deeply irrational beings, especially when it comes to issues like short-term goals vs. long-term goals (like charity vs. instant rewards).

A person might really want to be a doctor, might spend a huge amount of time and resources working his way through medical school, and then may "run out of willpower" or "suffer from akrasia" or however you want to put it, and not put in the study time he needs to pass his finals one semester. It doesn't mean he doesn't really want to be a doctor, and if he convinces himself "well, I guess I didn't want to be a doctor after all" he's doing himself a disservice, when the conclusion he should draw is "I messed up in trying to do something I really want to do; how can I prevent that from happening in the future?"

Comment author: tadasdatys 09 September 2017 06:00:24PM 0 points [-]

It doesn't mean he doesn't really want to be a doctor

You're right. Instead it means that he doesn't have the willpower required to become a doctor. Presumably, this is something he didn't know before he started school.

Comment author: Rossin 08 September 2017 06:31:40PM 1 point [-]

I think that's a fair assessment. I have an image of myself as the sort of person who would value saving lives over beer, and my alarm came from noticing a discrepancy between my self-image and my actions. I am trying to bring the two in line, because that self-image seems like something I want to actually be rather than merely think I am.

Comment author: tadasdatys 09 September 2017 05:57:39PM 0 points [-]

There is nothing wrong with wanting to be something you are not. But you should also want to have accurate beliefs about yourself. And being the sort of person who prefers beer over charity doesn't make you a bad person. And I have no idea how you can change your true preferences, even if you want to.

Comment author: tadasdatys 08 September 2017 09:52:17AM 1 point [-]

I think the problem isn't that your actions are inconsistent with your beliefs, it's that you have some false beliefs about yourself. You may believe that "death is bad", "charity is good", and even "I want to be a person who would give to charity instead of buying a beer". But it does not follow that you believe "giving to charity is more important to me than buying a beer".

This explanation is more desirable, because if actions don't follow from beliefs, then you have to explain what they follow from instead.

Comment author: TheAncientGeek 24 August 2017 11:56:19AM 1 point [-]

If you have a mind that cannot communicate, figuring out what it feels is not your biggest problem. Saying anything about such a mind is a challenge. Although I'm confident much can be said, even if I can't explain the algorithm for how exactly that would work.

It seems you are no longer ruling out a science of other minds. Are you still insisting that robots don't feel pain?

but I don't necessarily understand what it would mean for a different kind of mind.

I've already told you what it would mean, but you have a self-imposed problem of tying meaning to proof.

Consider a scenario where two people are discussing something of dubious detectability.

Unbeknownst to them, halfway through the conversation a scientist on the other side of the world invents a unicorn detector, a tachyon detector, etc.

Is the first half of the conversation meaningful and the second half meaningless? What kind of influence travels from the scientist's lab?

Comment author: tadasdatys 24 August 2017 08:18:19PM 0 points [-]

It seems you are no longer ruling out a science of other minds

No, by "mind" I just mean any sort of information processing machine. I would have said "brain", but you used a more general "entity", so I went with "mind". The question of what is and isn't a mind is not very interesting to me.

I've already told you what it would mean

Where exactly?

Is the first half of the conversation meaningful and the second half meaningless?

First of all, the meaningfulness of words depends on the observer. "Robot pain" is perfectly meaningful to people with precise definitions of "pain". So, in the worst case, the "thing" remains meaningless to the people discussing it, and it remains meaningful to the scientist (because you can't make a detector if you don't already know what exactly you're trying to detect). We could then simply say that the people and the scientist are using the same word for different things.

It's also possible that the "thing" was meaningful to everyone to begin with. I don't know what "dubious detectability" is. My bar for meaningfulness isn't as high as you may think, though. "Robot pain" has to fail very hard so as not to pass it.

The idea that, with models of physics, it might sometimes be hard to tell which features are detectable and which are just mathematical machinery is, in general, a good one. The problem is that it requires a good understanding of the model, which neither of us has. And I don't expect this sort of poking to cause problems that I couldn't patch, even in the worst case.

Comment author: TheAncientGeek 23 August 2017 11:17:29AM *  2 points [-]

I asked you before to propose a meaningless statement of your own.

And what I said before is that a well-formed sentence can robustly be said to be meaningless if it embeds a contradiction, like "colourless green", or a category error, like "sleeping idea".

So, what you're saying, is that you don't know if "ghost unicorns" exist? Why would Occam's razor not apply here? How would you evaluate the likelihood that they exist?

Very low and finite, rather than infinitesimal or zero.

I don't see how this is helping. You have a chain of reasoning that starts with your not knowing something (how to detect robot pain) and ends with your knowing something (that robots don't feel pain). I don't see how that can be valid.

Comment author: tadasdatys 24 August 2017 08:18:15PM 0 points [-]

category error, like "sleeping idea"

Obviously I agree this is meaningless, but I disagree about the reasoning. A long time ago I asked you to prove that "bitter purple" (or something) was a category error, and your answer was very underwhelming.

I say that "sleeping idea" is meaningless, because I don't have a procedure for deciding if an idea is sleeping or not. However, we could easily agree on such procedures. For example we could say that only animals can sleep and for every idea, "is this idea sleeping" is answered with "no". It's just that I honestly don't have such a restriction. I use the exact same explanation for the meaninglessness of both "fgdghffgfc" and "robot pain".

a contradiction, like "colourless green"

The question "is green colorless" has a perfectly good answer ("no, green is green"), unless you don't think that colors can have colors (in that case it's a category error too). But I'm nitpicking.

starts with your not knowing something, how to detect robot pain

Here you treat detectability as just some random property of a thing. I'm saying that if you don't know how to detect a thing, even in theory, then you know nothing about that thing. And if you know nothing about a thing, then you can't possibly say that it exists.

My "unicorn ghost" example is flawed in that we know what the shape of a unicorn should be, and we could expect unicorn ghosts to have the same shape (even though I would argue against such expectations). So if you built a detector for some new particle, and it detected a unicorn-shaped obstacle, you could claim that you detected a ghost-unicorn, and then I'd have to make up an argument why this isn't the unicorn I was talking about. "Robot pain" has no such flaws - it is devoid of any traces of meaningfulness.

Comment author: TheAncientGeek 18 August 2017 02:26:48PM *  0 points [-]

We can derive that model by looking at brain states and asking the brains which states are similar to which.

That is a start, but we can't gather data from entities that cannot speak, and we don't know how to arrive at general rules that apply across different classes of conscious entity.

They only need to know about robot pain if "robot pain" is a phrase that describes something.

As I have previously pointed out, you cannot assume meaninglessness as a default.

morality, which has many of the same problems as consciousness, and is even less defensible.

Morality or objective morality? They are different.

Actions directly affect the physical world. Morality guides action, so it indirectly affects the physical world.

Comment author: tadasdatys 18 August 2017 06:31:41PM 0 points [-]

That is a start, but we can't gather data from entities that cannot speak

If you have a mind that cannot communicate, figuring out what it feels is not your biggest problem. Saying anything about such a mind is a challenge. Although I'm confident much can be said, even if I can't explain the algorithm for how exactly that would work.

On the other hand, if the mind is so primitive that it cannot form the thought "X feels like Y", then does X actually feel like Y to it? And of course, the mind has to have feelings in the first place. Note, my previous answer (to ask the mind which feelings are similar) was only meant to work for human minds. I can vaguely understand what similarity of feelings is in a human mind, but I don't necessarily understand what it would mean for a different kind of mind.

and we don't know how to arrive at general rules that apply accross different classes of conscious entity.

Are there classes of conscious entity?

Morality or objective morality? They are different.

You cut off the word "objective" from my sentence yourself. Yes, I mean "objective morality". If "morality" means a set of rules, then it is perfectly well defined and clearly many of them exist (although I could nitpick). However if you're not talking about "objective morality", you can no longer be confident that those rules make any sense. You can't say that we need to talk about robot pain, just because maybe robot pain is mentioned in some moral system. The moral system might just be broken.

Comment author: TheAncientGeek 16 August 2017 03:20:21PM 1 point [-]

If "like" refers to similarity of some experiences, a physicalist model is fine for explaining that

We can't compare experiences qua experiences using a physicalist model, because we don't have a model that tells us which subset or aspect of neurological functioning corresponds to which experience.

If it refers to something else, then I'll need you to paraphrase.

If you want to know what "pain" means, sit on a thumbtack.

You can say "torture is wrong", but that has no implications about the physical world

That is completely irrelevant. Even if it is an irrational personal peccadillo of someone to not deliberately cause pain, they still need to know about robot pain. Justifying morality from the ground up is not relevant.

Comment author: tadasdatys 16 August 2017 05:10:40PM 0 points [-]

We can't compare experiences qua experiences using a physicalist model, because we don't have a model that tells us which subset or aspect of neurological functioning corresponds to which experience.

We can derive that model by looking at brain states and asking the brains which states are similar to which.

Even if it is an irrational personal peccadillo of someone to not deliberately cause pain, they still need to know about robot pain.

They only need to know about robot pain if "robot pain" is a phrase that describes something. They could also care a lot about the bitterness of colors, but that doesn't make it a real thing or an interesting philosophical question.

It's interesting that you didn't reply directly about morality. I was already mentally prepared to drop the whole consciousness topic and switch to objective morality, which has many of the same problems as consciousness, and is even less defensible.

Comment author: TheAncientGeek 16 August 2017 03:06:47PM *  1 point [-]

I can also use"ftoy ljhbxd drgfjh"

But you could not have used it to make a point about links between meaning, detectability, and falsehood.

If you have no arguments, then don't respond.

The implicit argument is that meaning/communication is not restricted to literal truth.

Let me answer that differently. You said invisible unicorns don't exist. What happens if an invisible unicorn detector is invented tomorrow?

What would happen is that you are changing the hypothesis. Originally, you stipulated an invisible unicorn as undetectable in any possible way, in relation to which I agreed that one could use an armchair argument like Occam's razor against their existence. Now you imply that they possibly could be detected, in which case I withdraw my original claim, because if something could be detected, then armchair arguments are not appropriate.

Comment author: tadasdatys 16 August 2017 05:10:28PM 0 points [-]

But you could not have used it to make a point about links between meaning, detectability, and falsehood.

No, but I can use it to make a point about how low your bar for meaningfulness is. Does that not count for some reason? I asked you before to propose a meaningless statement of your own. Do none exist? Are none of them grammatically correct?

???

Now you imply that they possible could be detected, in which case I withdraw my original claim

Yes, the unicorns don't have to be undetectable by definition. They're just undetectable by all methods that I'm aware of. If "invisible unicorns" have too much undetectability in the title, we can call them "ghost unicorns". But, of course, if you do detect some unicorns, I'll say that they aren't the unicorns I'm talking about and that you're just redefining this profound problem to suit you. Obviously this isn't a perfect analogue for your "robot pain", but I think it's alright.

So, what you're saying, is that you don't know if "ghost unicorns" exist? Why would Occam's razor not apply here? How would you evaluate the likelihood that they exist?

Comment author: cousin_it 16 August 2017 01:55:52PM *  1 point [-]

300th comment! My post only had 40 before you showed up. LW has been having some persistent people lately, but you (and the people replying to you) take the cake.

Comment author: tadasdatys 16 August 2017 03:50:15PM 0 points [-]

I doubt that's a good thing. It hasn't been very productive so far.

Comment author: TheAncientGeek 15 August 2017 02:28:36PM 0 points [-]

In what way is "there is an invisible/undetectable unicorn in your room" not "useless for communication"?

Well, you used it.

I can give you a robot pain detector today. It only works on robots though. The detector always says "no". The point is that you have no arguments why this detector is bad.

It's bad because there's nothing inside the box. It's just an a priori argument.

Comment author: tadasdatys 15 August 2017 05:59:40PM 0 points [-]

Well, you used it.

I can also use"ftoy ljhbxd drgfjh". Is that not meaningless either? Seriously, if you have no arguments, then don't respond.

What happens if a robot pain detector is invented tomorrow?

Let me answer that differently. You said invisible unicorns don't exist. What happens if an invisible unicorn detector is invented tomorrow? To make a detector for a thing, that thing has to have known properties. If they did invent a robot pain detector tomorrow, how would you check that it really detects robot pain? You're supposed to be able to check that somehow.
