It doesn't mean he doesn't really want to be a doctor

You're right. Instead it means that he doesn't have the willpower required to become a doctor. Presumably, this is something he didn't know before he started school.

There is nothing wrong with wanting to be something you are not. But you should also want to have accurate beliefs about yourself. And being the sort of person who prefers beer over charity doesn't make you a bad person. And I have no idea how you can change your true preferences, even if you want to.

I think the problem isn't that your actions are inconsistent with your beliefs, it's that you have some false beliefs about yourself. You may believe that "death is bad", "charity is good", and even "I want to be a person who would give to charity instead of buying a beer". But it does not follow that you believe "giving to charity is more important to me than buying a beer".

This explanation is more desirable, because if actions don't follow from beliefs, then you have to explain what they follow from instead.

It seems you are no longer ruling out a science of other minds

No, by "mind" I just mean any sort of information-processing machine. I would have said "brain", but you used the more general "entity", so I went with "mind". The question of what is and isn't a mind is not very interesting to me.

I've already told you what it would mean

Where exactly?

Is the first half of the conversation meaningful and the second half meaningless?

First of all, the meaningfulness of words depends on the observer. "Robot pain" is perfectly meaningful to people with precise definitions of "pain". So, in the worst case, the "thing" remains meaningless to the people discussing it, while remaining meaningful to the scientist (because you can't make a detector if you don't already know exactly what you're trying to detect). We could then simply say that the people and the scientist are using the same word for different things.

It's also possible that the "thing" was meaningful to everyone to begin with. I don't know what "dubious detectability" is. My bar for meaningfulness isn't as high as you may think, though. "Robot pain" has to fail very hard so as not to pass it.

The idea that, with models of physics, it might sometimes be hard to tell which features are detectable and which are just mathematical machinery is in general a good one. The problem is that it requires a good understanding of the model, which neither of us has. And I don't expect this sort of poking to cause problems that I couldn't patch, even in the worst case.

category error, like "sleeping idea"

Obviously I agree this is meaningless, but I disagree about the reasoning. A long time ago I asked you to prove that "bitter purple" (or something) was a category error, and your answer was very underwhelming.

I say that "sleeping idea" is meaningless because I don't have a procedure for deciding whether an idea is sleeping or not. However, we could easily agree on such procedures. For example, we could say that only animals can sleep, and that for every idea, "is this idea sleeping?" is answered with "no". It's just that I honestly don't have such a procedure. I use the exact same explanation for the meaninglessness of both "fgdghffgfc" and "robot pain".

a contradiction, like "colourless green"

The question "is green colorless?" has a perfectly good answer ("no, green is green"), unless you think that colors can't have colors (in which case it's a category error too). But I'm nitpicking.

starts with your not knowing something, how to detect robot pain

Here you treat detectability as just some random property of a thing. I'm saying that if you don't know how to detect a thing, even in theory, then you know nothing about that thing. And if you know nothing about a thing, then you can't possibly say that it exists.

My "unicorn ghost" example is flawed in that we know what the shape of a unicorn should be, and we could expect unicorn ghosts to have the same shape (even though I would argue against such expectations). So if you built a detector for some new particle, and it detected a unicorn-shaped obstacle, you could claim that you detected a ghost-unicorn, and then I'd have to make up an argument why this isn't the unicorn I was talking about. "Robot pain" has no such flaws - it is devoid of any traces of meaningfulness.

That is a start, but we can't gather data from entities that cannot speak

If you have a mind that cannot communicate, figuring out what it feels is not your biggest problem. Saying anything at all about such a mind is a challenge. Although I'm confident much can be said, even if I can't explain the exact algorithm by which that would work.

On the other hand, if the mind is so primitive that it cannot form the thought "X feels like Y", then does X actually feel like Y to it? And of course, the mind has to have feelings in the first place. Note that my previous answer (to ask the mind which feelings are similar) was only meant to work for human minds. I can vaguely understand what similarity of feelings is in a human mind, but I don't necessarily understand what it would mean for a different kind of mind.

and we don't know how to arrive at general rules that apply across different classes of conscious entity.

Are there classes of conscious entity?

Morality or objective morality? They are different.

You cut off the word "objective" from my sentence yourself. Yes, I mean "objective morality". If "morality" just means a set of rules, then it is perfectly well defined, and clearly many such sets exist (although I could nitpick). However, if you're not talking about "objective morality", you can no longer be confident that those rules make any sense. You can't say that we need to talk about robot pain just because robot pain might be mentioned in some moral system. The moral system might just be broken.

We can't compare experiences qua experiences using a physicalist model, because we don't have a model that tells us which subset or aspect of neurological functioning corresponds to which experience.

We can derive that model by looking at brain states and asking the brains which states are similar to which.
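
To make that concrete, here is a minimal sketch of the kind of procedure I have in mind, under two loud assumptions: that brain states can be summarized as feature vectors, and that the subject can report how similar two states feel on a 0..1 scale. Every name and number below is a hypothetical placeholder, not an established method:

```python
import numpy as np

# Sketch: learn a map from brain states to a "feeling space" whose
# distances match the brain's own similarity reports.
# Assumptions (hypothetical): brain states are feature vectors, and
# reported_similarity() stands in for asking the subject
# "how similar do states i and j feel?" on a 0..1 scale.

rng = np.random.default_rng(0)
n_states, n_features, embed_dim = 30, 10, 3
brain_states = rng.normal(size=(n_states, n_features))

def reported_similarity(i: int, j: int) -> float:
    # Placeholder for the subject's report; real data would come from
    # asking the brain, as described above.
    return float(np.exp(-np.linalg.norm(brain_states[i] - brain_states[j]) / 5.0))

# Fit a linear embedding W by gradient descent so that
# exp(-||(x_i - x_j) @ W||) approximates the reported similarity.
W = rng.normal(scale=0.1, size=(n_features, embed_dim))
pairs = [(i, j) for i in range(n_states) for j in range(i + 1, n_states)]
lr = 0.05

for _ in range(200):
    for i, j in pairs:
        d = brain_states[i] - brain_states[j]
        diff = d @ W                          # difference in feeling space
        norm = np.linalg.norm(diff) + 1e-9
        pred = np.exp(-norm)                  # predicted similarity
        err = pred - reported_similarity(i, j)
        # Gradient-descent step on 0.5 * err**2 with respect to W
        W -= lr * err * pred * (-1.0 / norm) * np.outer(d, diff)

# W now maps brain states into a space where distance tracks reported
# similarity of feelings -- one way to cash out "deriving that model".
```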

Even if it is an irrational personal peccadillo of someone to not deliberately cause pain, they still need to know about robot pain.

They only need to know about robot pain if "robot pain" is a phrase that describes something. They could also care a lot about the bitterness of colors, but that doesn't make it a real thing or an interesting philosophical question.

It's interesting that you didn't reply directly about morality. I was already mentally prepared to drop the whole consciousness topic and switch to objective morality, which has many of the same problems as consciousness, and is even less defensible.

But you could not have used it to make a point about links between meaning, detectability, and falsehood.

No, but I can use it to make a point about how low your bar for meaningfulness is. Does that not count for some reason? I asked you before to propose a meaningless statement of your own. Do none exist? Are none of them grammatically correct?

???

Now you imply that they possibly could be detected, in which case I withdraw my original claim

Yes, the unicorns don't have to be undetectable by definition. They're just undetectable by all methods that I'm aware of. If "invisible unicorns" have too much undetectability in the name, we can call them "ghost unicorns". But, of course, if you do detect some unicorns, I'll say that they aren't the unicorns I'm talking about and that you're just redefining this profound problem to suit you. Obviously this isn't a perfect analogue of your "robot pain", but I think it's alright.

So what you're saying is that you don't know whether "ghost unicorns" exist? Why would Occam's razor not apply here? How would you evaluate the likelihood that they exist?

I doubt that's a good thing. It hasn't been very productive so far.

Well, you used it.

I can also use "ftoy ljhbxd drgfjh". Is that not meaningless either? Seriously, if you have no arguments, then don't respond.

What happens if a robot pain detector is invented tomorrow?

Let me answer that differently. You said invisible unicorns don't exist. What happens if an invisible-unicorn detector is invented tomorrow? To make a detector for a thing, that thing has to have known properties. If someone did invent a robot pain detector tomorrow, how would you check that it really detects robot pain? You're supposed to be able to check that somehow.
