All of tadasdatys's Comments + Replies

It doesn't mean he doesn't really want to be a doctor

You're right. Instead it means that he doesn't have the willpower required to become a doctor. Presumably, this is something he didn't know before he started school.

1Yosarian2
Right. Maybe not even that; maybe he just didn't have the willpower required to become a doctor on that exact day, and if he re-takes the class next semester maybe that will be different. So, to get back to the original point, I think the original poster was worried about not having the willpower to give to charity and, if he doesn't have that, worried he also might not have the higher levels of willpower you would presumably need to do something truly brave if it was needed (like, in his example, resisting someone like the Nazis in 1930's Germany.) And he was able to use that fear in order to increase his willpower and give more to charity.

There is nothing wrong with wanting to be something you are not. But you should also want to have accurate beliefs about yourself. And being the sort of person who prefers beer over charity doesn't make you a bad person. And I have no idea how you can change your true preferences, even if you want to.

0torekp
I think there are some pretty straightforward ways to change your true preferences. For example, if I want to become a person who values music more than I currently do, I can practice a musical instrument until I'm really good at it.

I think the problem isn't that your actions are inconsistent with your beliefs, it's that you have some false beliefs about yourself. You may believe that "death is bad", "charity is good", and even "I want to be a person who would give to charity instead of buying a beer". But it does not follow that you believe "giving to charity is more important to me than buying a beer".

This explanation is more desirable, because if actions don't follow from beliefs, then you have to explain what they follow from instead.

2Yosarian2
He might not be wrong about beliefs about himself. Just because a person actually would prefer X to Y, it doesn't mean he is always going to rationally act in a way that will result in X. In a lot of ways we are deeply irrational beings, especially when it comes to issues like short term goals vs long term goals (like charity vs instant rewards). A person might really want to be a doctor, might spend a huge amount of time and resources working his way through medical school, and then may "run out of willpower" or "suffer from akrasia" or however you want to put it and not put in the study time he needs to pass his finals one semester. It doesn't mean he doesn't really want to be a doctor, and if he convinces himself "well I guess I didn't want to be a doctor after all" he's doing himself a disservice when the conclusion he should draw is "I messed up in trying to do something I really want to do; how can I prevent that from happening in the future?"
1Rossin_duplicate0.6898194309641386
I think that's a fair assessment, I have an image of myself as the sort of person who would value saving lives over beer and my alarm came from noticing a discrepancy between my self-image and my actions. I am trying to bring the two things in line because that self-image seems like something I want to actually be rather than think I am.
1Dagon
Agreed, though I'd call it "conflicting beliefs" rather than "false beliefs about yourself". You seem to believe that you should be giving the same dollar to charity and to a bartender. And you probably do believe these things, at different times. It contrasts with Akrasia, where you understand that you're acting against your beliefs, but seem to lack willpower or resolve or SOMETHING to make yourself do what you want.

It seems you are no longer ruling out a science of other minds

No, by "mind" I just mean any sort of information processing machine. I would have said "brain", but you used a more general "entity", so I went with "mind". The question of what is and isn't a mind is not very interesting to me.

I've already told you what it would mean

Where exactly?

Is the first half of the conversation meaningful and the second half meaningless?

First of all, the meaningfulness of words depends on the observer. "Robot pain"…

category error, like "sleeping idea"

Obviously I agree this is meaningless, but I disagree about the reasoning. A long time ago I asked you to prove that "bitter purple" (or something) was a category error, and your answer was very underwhelming.

I say that "sleeping idea" is meaningless, because I don't have a procedure for deciding if an idea is sleeping or not. However, we could easily agree on such procedures. For example we could say that only animals can sleep and for every idea, "is this idea sleeping" is answered…

That is a start, but we can't gather data from entities that cannot speak

If you have a mind that cannot communicate, figuring out what it feels is not your biggest problem. Saying anything at all about such a mind is a challenge. Although I'm confident much can be said, even if I can't explain exactly how that would work.

On the other hand, if the mind is so primitive that it cannot form the thought "X feels like Y", then does X actually feel like Y to it? And of course, the mind has to have feelings in the first place. Note, my previous…

1TheAncientGeek
It seems you are no longer ruling out a science of other minds. Are you still insisting that robots don't feel pain? I've already told you what it would mean, but you have a self-imposed problem of tying meaning to proof. Consider a scenario where two people are discussing something of dubious detectability. Unbeknownst to them, halfway through the conversation a scientist on the other side of the world invents a unicorn detector, tachyon detector, etc. Is the first half of the conversation meaningful and the second half meaningless? What kind of influence travels from the scientist's lab?

We can't compare experiences qua experiences using a physicalist model, because we don't have a model that tells us which subset or aspect of neurological functioning corresponds to which experience.

We can derive that model by looking at brain states and asking the brains which states are similar to which.
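The proposal above can be sketched as a toy computation. This is purely illustrative, not a real neuroscience pipeline: the "brain states" are random vectors and the "reported" similarities are simulated stand-ins for the brain's own "state i feels like state j" judgments.

```python
# Illustrative sketch: if brains can report which pairs of states feel
# similar, we can test whether a physical representation of those states
# predicts the reports. All data here is simulated.
import numpy as np

rng = np.random.default_rng(0)
states = rng.normal(size=(6, 4))   # 6 recorded "brain states", 4 features each

# Physical similarity: negative Euclidean distance between state vectors.
phys = -np.linalg.norm(states[:, None] - states[None, :], axis=-1)

# Hypothetical reported similarity: a noisy copy of the physical one,
# standing in for the brain's own similarity judgments.
reported = phys + rng.normal(scale=0.1, size=phys.shape)

iu = np.triu_indices(6, k=1)       # unique pairs only
r = np.corrcoef(phys[iu], reported[iu])[0, 1]
print(round(r, 2))                 # high correlation: the physical model "explains" the reports
```

If the correlation held up on real recordings and real similarity reports, that would be the kind of model the comment describes: one that says which aspects of neural state correspond to which experiences.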

Even if it is an irrational personal peccadillo of someone to not deliberately cause pain, they still need to know about robot pain.

They only need to know about robot pain if "robot pain" is a phrase that describes something. They could a…

0TheAncientGeek
That is a start, but we can't gather data from entities that cannot speak, and we don't know how to arrive at general rules that apply across different classes of conscious entity. As I have previously pointed out, you cannot assume meaninglessness as a default. Morality or objective morality? They are different. Actions directly affect the physical world. Morality guides action, so it indirectly affects the physical world.

But you could not have used it to make a point about links between meaning, detectability, and falsehood.

No, but I can use it to make a point about how low your bar for meaningfulness is. Does that not count for some reason? I asked you before to propose a meaningless statement of your own. Do none exist? Are none of them grammatically correct?

???

Now you imply that they possibly could be detected, in which case I withdraw my original claim

Yes, the unicorns don't have to be undetectable by definition. They're just undetectable by all methods that I'm…

2TheAncientGeek
And what I said before is that a well-formed sentence can robustly be said to be meaningless if it embeds a contradiction, like "colourless green", or a category error, like "sleeping idea". Very low finite rather than infinitesimal or zero. I don't see how this is helping. You have a chain of reasoning that starts with your not knowing something, how to detect robot pain, and ends with your knowing something: that robots don't feel pain. I don't see how that can be valid.

I doubt that's a good thing. It hasn't been very productive so far.

Well, you used it.

I can also use "ftoy ljhbxd drgfjh". Is that not meaningless either? Seriously, if you have no arguments, then don't respond.

What happens if a robot pain detector is invented tomorrow?

Let me answer that differently. You said invisible unicorns don't exist. What happens if an invisible unicorn detector is invented tomorrow? To make a detector for a thing, that thing has to have known properties. If they did invent a robot pain detector tomorrow, how would you check that it really detects robot pain? You're supposed to be able to check that somehow.

1TheAncientGeek
But you could not have used it to make a point about links between meaning, detectability, and falsehood. The implicit argument is that meaning/communication is not restricted to literal truth. What would happen is that you are changing the hypothesis. Originally, you stipulated an invisible unicorn as undetectable in any possible way, in relation to which I agreed that one could use an armchair argument like Occam's razor against their existence. Now you imply that they possibly could be detected, in which case I withdraw my original claim, because if something could be detected, then armchair arguments are not appropriate.
0entirelyuseless
"Seriously, if you have no arguments, then don't respond." People who live in glass houses shouldn't throw stones.
1cousin_it
300th comment! My post only had 40 before you showed up. LW has been having some persistent people lately, but you (and the people replying to you) take the cake.

You keep saying it's a broken concept.

Yes. I consider that "talking about consciousness". What else is there to say about it?

That anything should feel like anything,

If "like" refers to similarity of some experiences, a physicalist model is fine for explaining that. If it refers to something else, then I'll need you to paraphrase.

Circular as in

"Everything is made of matter. matter is what everything is made of." ?

Yes, if I had actually said that. By the way, matter exists in your universe too.

Yes: it's relevant beca…
1TheAncientGeek
We can't compare experiences qua experiences using a physicalist model, because we don't have a model that tells us which subset or aspect of neurological functioning corresponds to which experience. If you want to know what "pain" means, sit on a thumbtack. That is completely irrelevant. Even if it is an irrational personal peccadillo of someone to not deliberately cause pain, they still need to know about robot pain. Justifying morality from the ground up is not relevant.

Sure, and if X really is the best approximation of Y that Bob can understand, then again Alice is not dishonest. Although I'm not sure what "approximation" means exactly.

But there is also a case where Alice tells Bob that "X is true", not because X is somehow close to Y, but because, supposedly, X and Y both imply some Z. This is again a very different case. I think this is just pure and simple lying. That is, the vast majority of lies ever told fall into this category (for example, Z could be "you shouldn't jail me", X could…

Case 1: Alice tells Bob that "X is true", Bob then interprets this as "Y is true"

Case 2: Alice tells Bob that "X is true", because Bob would be too stupid to understand it if she said "Y is true". Now Bob believes that "X is true".

These two cases are very different. You spend the first half of your post in case 1, and then suddenly jump to case 2 for the other half.

0Bound_up
Supposing that Y is the correct answer to a question, but you are incapable of communicating it to them, some kind of less or differently true substitute must be used, in terms of the language that they speak and understand

<...> then perhaps telling a lie in a way that you know will communicate a true concept is not a lie.

This is fair.

There are certain truths which literally cannot be spoken to some people.

But this is a completely different case. Lies told to stupid people are still lies, the stupid people don't understand the truth behind them, and you have communicated nothing. You could argue that those lies are somehow justified, but there is no parallel between lying to stupid people and things like "You're the best".

0Bound_up
Can you say it again while tabooing "lie?" My guess is that you're saying that if X says something that they know will be interpreted as abc, then it is a lie even if abc is true, if X personally interprets the statement as xyz, or perhaps if the "true" meaning of the thing is xyz instead of abc

Well, I can imagine a post on SSC with 5 statements about the next week, where other users would reply with probabilities of each becoming true, and arguments for that. Then, after the week, you could count the scores and name the winners in the OP. It would probably get a positive reaction. Why not give it a try?

I'm not sure what the 5 statements should be though. I think it must be "next week" not "next year", because you can't enjoy a game if you've forgotten you're playing it. Also, for it to be a game, it has to be repeatable, but…
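One simple way to "count the scores and name the winners" for a game like this is the Brier score: the mean squared error between each player's stated probabilities and the 0/1 outcomes. The players and numbers below are invented for illustration.

```python
# Hedged sketch: scoring a weekly probability-prediction game with the
# Brier score. All names and forecasts here are made up.

def brier_score(probs, outcomes):
    """Mean squared error between forecasts and 0/1 outcomes.
    Lower is better; always answering 0.5 scores 0.25."""
    assert len(probs) == len(outcomes)
    return sum((p - o) ** 2 for p, o in zip(probs, outcomes)) / len(probs)

# Five statements about next week; outcomes recorded after the week ends.
alice = [0.9, 0.2, 0.7, 0.5, 0.1]   # confident forecaster
bob = [0.6, 0.4, 0.5, 0.5, 0.5]     # hedges toward 50%
outcomes = [1, 0, 1, 0, 0]          # 1 = came true, 0 = did not

print(brier_score(alice, outcomes))  # ~0.08 (confident and mostly right)
print(brier_score(bob, outcomes))    # ~0.214 (hedging is penalized here)
```

Running the game repeatedly and naming the player with the lowest average score as the winner would make it both repeatable and fair to calibrated hedgers.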

There are way too many "shoulds" in this post. If anyone can have fun predicting important events at all, then it would probably be people in this forum. Can we make something like this happen? Would we actually want to participate? I'm not sure that I do.

0chaosmage
I'd definitely want to participate, and looking at the yearly predictions SSC and others do, I'm surely not the only one. But someone would have to set it up, run it and advertise it. You don't even strictly need to write software for it. It could be done on any forum, as a thread or series of threads. It could be done here, if this place wasn't so empty nowadays.

That is not a fact, and you have done nothing to argue it, saying instead that you don't want to talk about morality

Yes, I said it's not a fact, and I don't want to talk about morality because it's a huge tangent. Do you feel that morality is relevant to our general discussion?

and also don't want to talk about consciousness.

What?

A theory should be as simple as possible while still explaining the facts. There are prima facie facts about conscious sensations that are not addressed by talk of brain states and preferences.

What facts am I failing…

0TheAncientGeek
Yes: it's relevant because "torturing robots is wrong" is a test case of whether your definitions are solving the problem or changing the subject. You keep saying it's a broken concept. That anything should feel like anything. Circular as in "Everything is made of matter. Matter is what everything is made of."?

It's obvious - we need buzzfeed to create a "which celebrities will get divorced this year" quiz (with prizes?). There is no way people will be interested in predicting next year's GDP.

There is a common mistake in modeling humans: thinking that they are simple. Assuming that "human chose a goal X" implies "human will take actions that optimally reach X" would be silly. Likewise, assuming that humans can accurately observe their own internal state is silly. Humans have a series of flaws and limitations that obscure the simple abstractions of goal and belief. However, saying that goals and beliefs do not exist is a bit much. They are still useful in many cases and for many people.

By the way, it sounds a little like you're referring to some particular set of beliefs. I think naming them explicitly would add clarity.

What I have asserted makes sense with my definitions. If you are interpreting in terms of your own definitions... don't.

I'm trying to understand your definitions and how they're different from mine.

I think it is false by Occam's razor, which automatically means it is meaningful, because if it were meaningless I would not know how to apply Occam's razor or anything else to it.

I see that for you "meaningless" is a very narrow concept. But does that agree with your stated definition? In what way is "there is an invisible/undetectable unicorn…

0TheAncientGeek
Well, you used it. It's bad because there's nothing inside the box. It's just an a priori argument.

"Red giant" does not and cannot have precise boundaries

Again, you make a claim and then offer no arguments to support it. "Red giant" is a term defined quite recently by a fairly small group of people. It means what those people wanted it to mean, and its boundaries are as precise as those people wanted them to be.

we will not be continuing this discussion of language. Not until you show that it has something to do with consciousness. It doesn't.

You started the language discussion, but I have to explain why we're continuing it? I…

you calling into question whether the reason I say I am conscious, is because I am actually conscious, does not make it actually questionable. It is not.

What the hell does "not questionable" mean?

Is that a fact or an opinion?

Well, you quoted two statements, so the question has multiple interpretations. Obviously, anything can be of ethical concern, if you really want it to be. Also, the opinion/fact separation is somewhat silly. Having said that:

"pain is of ethical concern because you don't like it" is a trivial fact in the sense that, if you loved pain, hurting you would likely not be morally wrong.

"You don't have to involve consciousness here" - has two meanings:
one is "the concept of preference is simpler than the co…

1TheAncientGeek
Nitpicking about edge cases and minority concerns does not address the main thrust of the issue. You seem to be hinting that the only problem is going against preferences. That theory is contentious.

The simplest theory is that nothing exists. A theory should be as simple as possible while still explaining the facts. There are prima facie facts about conscious sensations that are not addressed by talk of brain states and preferences.

That is not a fact, and you have done nothing to argue it, saying instead that you don't want to talk about morality and also don't want to talk about consciousness.

Of course, I'll need "defined" defined. Do you see how silly this is? You are happy to use 99% of the words in English, and you only complain about the ones that don't fit your a priori ontology. It's a form of question-begging. You used the word; surely you meant something by it. Proper as in proper Scotsman?
  1. Useless for communication.

A bit too vague. Can I clarify that as "Useless for communication, because it transfers no information"? Even though that's a bit too strict.

  1. Meaningless statements cannot have truth values assigned to them.

What is stopping me from assigning them truth values? I'm sure you meant, "meaningless statements cannot be proven or disproven". But "proof" is a problematic concept. You may prefer "for meaningless statements there are no arguments for or against them", but for statements…

0TheAncientGeek
The fact that you can't understand them. If you can understand a statement as asserting the existence of something, it isn't meaningless by my definition.

What I have asserted makes sense with my definitions. If you are interpreting in terms of your own definitions... don't.

I think it is false by Occam's razor, which automatically means it is meaningful, because if it were meaningless I would not know how to apply Occam's razor or anything else to it.

Because it needs premises along the lines of "what is not measurable is meaningless" and "what is meaningless is false", but you have not been able to argue for either (except by gerrymandered definitions). There's an important difference between stipulating something to be undetectable ... in any way, forever ... and having contingent evidence that we cannot detect something at time T. What happens if a robot pain detector is invented tomorrow?

Then you would have done the thing people are always accusing philosophers of doing: you would have an armchair argument, based on wordplay, that is "true" in some way that has nothing to do with reality.

It only explains the "-less" suffix. It's fine as a dictionary definition, but that's obviously not what I asked for. I need you to explain "meaning" as well.

0entirelyuseless
You need no such thing, and as I said, we won't be continuing the discussion of language until you show it has something to do with consciousness.

Google could easily add a module to Google Translate that would convert a statement into its opposite.

No, Google could maybe add "not" before every "conscious" in a grammatically correct way, but it is very far from figuring out what other beliefs need to be altered to make these claims consistent. When it can do that, it will be conscious in my book.

You identify yourself with the mute mind, and the process converts that into you saying that you identify with the converted mind.

What is "you" in this sentence? The mute…

0entirelyuseless
Yes. I have pointed this out myself. This does not suggest in any way that I have such a reason, other than being conscious. Exactly. This is why tests like "does it say it is conscious?" or any other third person test are not valid. You can only notice that you yourself are conscious. Only a first person test is valid. Exactly, and you calling into question whether the reason I say I am conscious, is because I am actually conscious, does not make it actually questionable. It is not.

You are correct that "I forgot", in the sense that I don't know exactly what you are referring to

Well, that explains a lot. It's not exactly ancient history, and everything is properly quoted, so you really should know what I'm talking about. Yes, it's about the identical table-chairs question from IKEA discussion, the one that I linked to just a few posts above.

Secondly, what I mean is that there are no determinate boundaries to the meaning of the word.

Why are there no determinate boundaries though? I'm saying that boundaries are unclear…

0entirelyuseless
There is nothing stopping you only in the sense that nothing stops you from asserting falsehoods. (As we see is the case for you personally.) It is intrinsically vague: "Red giant" does not and cannot have precise boundaries, as is true of all words. The same is true of "White dwarf." If you cut the line, you will indeed be cutting it arbitrarily, as you say yourself, and this has nothing to do with the meaning of those words. The rest does not respond to the comparison about consciousness, and as I said we won't be discussing the comments on language.

By acting like you actually want to understand what is being said

I think you already forgot how this particular part of the thread started. First I said that we had established that "X is false", then you disagreed, then I pointed out that I had asked "is X true?" and you had no direct answer. Here I'm only asking you for a black and white answer on this very specific question. I understood your original reply, but I honestly have no idea how it was supposed to answer my specific question. When people refuse to give direct answers t…

0entirelyuseless
You are correct that "I forgot", in the sense that I don't know exactly what you are referring to, although my guess is to the IKEA discussion. In any case, the response there, and to anything similar that you might be discussing, is that there is no direct answer because the answer is "in some cases yes, in some cases no, depending on the particular circumstances."

First of all, all words are vague, so there is no such thing as "what exactly do you mean by." No one ever means anything exactly. Secondly, what I mean is that there are no determinate boundaries to the meaning of the word.

Yes. All categories are vague, because they are generated by a process similar to factor analysis, as I have said before, and this does not generate precise categories, but vague ones. It is false that the meanings are arbitrary, for the reasons I have said. It is also false that there is some "absolute and natural concept of a chair," and I have never suggested that there is.

This is true of all words. All words are defined either by other words, or by pointing at things, and precise concepts cannot be formed by pointing at things. Therefore no words refer to precise concepts (in fact there is no such thing as a precise concept, as I have already said.)

First of all, you are the one who needs the "language 101" stuff, since you have been abusing the concept of meaning by confusing it with testability. Second, nope, we will not be continuing this discussion of language. Not until you show that it has something to do with consciousness. It doesn't.

You have been constantly confusing two entirely separate issues, and ignoring me when I point out the separateness of these issues. Let me do so one more time. You have been confusing the idea "this statement has a meaning" with "this statement is testable." Those are two entirely separate things. Likewise, you have been confusing "this statement is vague" with "this statement is not testable." These are two entirely separate things.

The reason why I wrote the previous sentence is because I am conscious.

That's just paraphrasing your previous claim.

how do you know you don't just agree with me about this whole discussion, and you are mechanically writing statements you don't agree with?

I have no problems here. First, everything is mechanical. Second, a process that would translate one belief into its opposite, in a consistent way, would be complex enough to be considered a mind of its own. I then identify "myself" with this mind, rather than the one that's mute.

N…
0entirelyuseless
It is not just paraphrasing. It is giving an example of a particular case where it is obviously true. Nonsense. Google could easily add a module to Google Translate that would convert a statement into its opposite. That would not give Google Translate a mind of its own. Nope. You identify yourself with the mute mind, and the process converts that into you saying that you identify with the converted mind. Obviously I do not take this seriously, but I take it just as seriously as the claim that my consciousness does not cause me to say that I am conscious. I replied with an example, namely that I say I am conscious precisely because I am conscious. I do not need to argue for this, and I will not.

It means "does not have a meaning."

I'm sure you can see how unhelpful this is.

0entirelyuseless
No.

Robot pain is of ethical concern because pain hurts.

No, pain is of ethical concern because you don't like it. You don't have to involve consciousness here. You involve it, because you want to.

God and homeopathy are meaningful, which is why people are able to mount arguments against them,

Homeopathy is meaningful. God is meaningful only some of the time. But I didn't mean to imply that they are analogues. They're just other bad ideas that get way too much attention.

The ordinary definition for pain clearly does exist, if that is what you mean.

What…

0TheAncientGeek
Is that a fact or an opinion? "Highly unpleasant physical sensation caused by illness or injury." Have you got an exact definition of "concept"? Requiring extreme precision in all things tends to bite you.

Meaningfulness, existence, etc.

It is evident that this is a major source of our disagreement. Can you define "meaningless" for me, as you understand it? In particular, how it applies to grammatically correct statements.

It's perfectly good as a standalone statement

So you agree that invisible unicorns indeed do not exist? How do you know? Obviously, the unicorns I'm talking about are not just undetectable by light, they're also undetectable by all other methods.

0TheAncientGeek
1. Useless for communication.
2. Meaningless statements cannot have truth values assigned to them. (But not all statements without truth values are meaningless.)

Where is this going? You can't stipulate that robot pain is forever immeasurable without begging the question. It is not analogous to your invisible unicorns.
0entirelyuseless
It means "does not have a meaning." In general, it doesn't apply to grammatically correct sentences, and definitely not to statements. It's possible that you will find something grammatically correct which is meaningless, but it would have to be severely contorted. If you can ask the question, "How do you know?", then the thing has a meaning. I will show you an example of something meaningless: faheuh fr dhwuidfh d dhwudhdww. Note that there is no question of knowing or not knowing anything. When you can ask how you know something or don't know it, then the thing has a meaning.

I perform many human behaviors because I am conscious.

Another bold claim. Why do you think that there is a causal relationship between having consciousness and behavior? Are you sure that consciousness isn't just a passive observer? Also, why do you think that there is no causal relationship between having consciousness and five fingers?

0entirelyuseless
I am conscious. The reason why I wrote the previous sentence is because I am conscious. As for how I know that this statement is true and I am not just a passive observer: how do you know you don't just agree with me about this whole discussion, and you are mechanically writing statements you don't agree with? Yes, for the above reason. In general, because there is no reason to believe that there is. Notably, the reason I gave for thinking my consciousness is causal is not a reason for thinking five fingers is.

I don't know where you think that was established.

Well, I asked you almost that exact question, you quoted it, and replied with something other than "yes". How was I supposed to interpret that?

So for example if you find some random rocks somewhat in the shape of a chair, they will not be a chair

So, if I find one chair-shaped rock, it's not a chair, but if I then take a second rock, sculpt it into the shape of the first rock and sit on it, the second rock is a chair? Would simply sitting on the first rock convert it into a chair?

I can under…

0entirelyuseless
By acting like you actually want to understand what is being said, rather than by acting like you are in a fight where you are trying to win. In other words, while recognizing that words are vague and pretending that this has something to do with consciousness, you are trying to make me give black or white answers to questions about chairs, black or white answers that do not apply precisely because words are vague. In particular, consider my answer to your next question, because it is basically the same thing again. There is no guarantee of this, because the word "chair" is vague. But it is true that there would be more reason to call the second rock a chair -- that is, the meaning of "chair" would apply more properly to the second one than to the first one. So the answer might be yes, depending on all the other details of the situation. In general, no, because the word "chair" does not work that way. It does not mean what someone happens to sit on, but what is made for sitting on. If you are not ignorant of how the word is used, you do have to involve the intention of the maker.

but you have brought in a bunch of different issues without explaining how they interrelate

Which issues exactly?

No, still not from that.

Why not? Is this still about how you're uncomfortable saying that invisible unicorns don't exist? Does "'robot pain' is meaningless" follow from the same better?

0TheAncientGeek
Meaningfulness, existence, etc. Huh? It's perfectly good as a standalone statement, it's just that it doesn't have much to do with meaning or measurability. Not really, because you haven't explained why meaning should depend on measurability.

If someone made something for sitting, you have more reason to call it a chair. If someone made something -not- for sitting, you have less reason to call it a chair.

Yes, correlated variables are evidence, and evidence influences certainty about the classification, but that's not the same as influencing the classification.

And those things are true even given the same form

So if I made two identical objects, with the intention to use one as a chair and another as a coffee table, then one would be a chair and another would be a coffee table? I thought we already established that they wouldn't.

1entirelyuseless
I am not talking about evidence, but about meaning; when we say, "this is a chair," part of the meaning is that it was made for sitting. So for example if you find some random rocks somewhat in the shape of a chair, they will not be a chair, in the ordinary sense of the term, because they were not made for sitting. I don't know where you think that was established. I said the fact that someone made something for a different purpose would not guarantee that it was that thing, as that if you made something in the shape of a hammer, and said it was for sitting, it would not be a chair. But the fact that it was made for sitting does indeed make it closer to being a chair, not by way of evidence or correlation, but directly by more closely matching the meaning of "chair."

But surely, you believe that human-like behavior is stronger evidence than a hand with five fingers. Why is that?

0entirelyuseless
I perform many human behaviors because I am conscious. So the fact that the robot performs similar behaviors is inductive evidence that it performs those behaviors because it is conscious. This does not apply to the number of fingers, which is only evidence by correlation.

Behavior sufficiently similar to human behavior would be a probable, although not conclusive, reason to think that it is conscious. There could not be a conclusive reason.

Why is this a probable reason? You have one data point - yourself. Sure, you have human-like behavior, but you also have many other properties, like five fingers on each hand. Why does behavior seem like a more significant indicator of consciousness than having hands with five fingers? How did you come to that conclusion?

0entirelyuseless
If a robot has hands with five fingers, that will also be evidence that it is conscious. This is how induction works; similarity in some properties is evidence of similarity in other properties.

Ok, do you have any arguments to support that it is causal?

0entirelyuseless
As I said, this is how these words work, that is words like "chair" and so on. If someone made something for sitting, you have more reason to call it a chair. If someone made something -not- for sitting, you have less reason to call it a chair. And those things are true even given the same form, so it is causal, not merely correlated.

Are you saying that we must have dualism, and that consciousness is something that certainly cannot be reduced to "parts moved by other parts"? It's not just that some arrangements of matter are conscious and others are not?

0entirelyuseless
If there are parts, there is also a whole. A whole is not the same as parts. So if you mean by "reductionism" that there are only parts and no wholes, then reductionism is false. If you mean by reductionism that a thing is made of its parts rather than made of its parts plus one other part, then reductionism is true: a whole is made out of its parts, not of the parts plus another part (which would be redundant and absurd.). But it is made "out of" it -- it is not the same as the parts.

It also means not any other thing similar to consciousness, even if not exactly consciousness.

I have no idea what that means (a few typos maybe?). Obviously, there are things that are unconscious but are not machines, so the words aren't identical. But if there is some difference between "mere machine" and "unconscious machine", you have to point it out for me.

My reason is that we have no reason to think that a roomba is conscious.

Hypothetically, what could a reason to think that a robot is conscious look like?

There is no extra step between recognizing the similarity of painful experiences and calling them all painful.

... (read more)
0entirelyuseless
No typos. I meant we know that there are two kinds of things: objective facts and subjective perceptions. As far as anyone knows, there could be a third thing intermediate between those (for example.) So the robot might have something else that we don't know about. Behavior sufficiently similar to human behavior would be a probable, although not conclusive, reason to think that it is conscious. There could not be a conclusive reason. Wrong.

As I said, this is how people use the words.

What words? The word "causal"? I'm asking for arguments why you think that the relationship between intention and classification is causal. I expect you to understand the difference between causation and correlation. Why is this so difficult for you?

0entirelyuseless
I understand the difference, and I have no difficulties here. I said it was causal, not merely correlative.

It is causal, but not infallible.

Ok, do you have any arguments to support that claim?

That's your problem. Everyone else will still call it "the sun,"

That may depend on the specific circumstances of the discovery. Also, different people can use the same words in different ways.

You can make arguments for and against robot pain as well.

Arguments like what?

0entirelyuseless
As I said, this is how people use the words. Like yours, for example.

The word "mere" in that statement means "and not something else of the kind we are currently considering." When I made the statement, I meant that the roomba is not conscious

Oh, so "mere machine" just a pure synonym of "not conscious"? Then I guess you were right about what my problem is. Taboo or not, your only argument why roomba is not conscious, is to proclaim that it is not conscious. I don't know how to explain to you that this is bad.

The roomba just has each part of it moved by other parts

Are you implyi... (read more)

0entirelyuseless
No. I said the roomba "just" has that. Humans are also aware of what they are doing.
0entirelyuseless
No. It also means not any other thing similar to consciousness, even if not exactly consciousness. My reason is that we have no reason to think that a roomba is conscious. There is no extra step between recognizing the similarity of painful experiences and calling them all painful.

it is simply the logical consequence of what you said, which is that you will consider all statements meaningless unless you can argue otherwise.

I don't really know why you derive from this that all statements are meaningless. Maybe we disagree about what "meaningless" means? Wikipedia nicely explains that "A meaningless statement posits nothing of substance with which one could agree or disagree". It's easy for me to see that "undetectable purple unicorns exist" is a meaningless statement, and yet I have no problems with ... (read more)

That's cute.

Seriously though, you have a bad habit of taking my rejection of one extreme (that all grammatically correct statements should be assumed meaningful) and interpreting that as the opposite extreme.

0entirelyuseless
Cute or not, it is simply the logical consequence of what you said, which is that you will consider all statements meaningless unless you can argue otherwise. In reality, you should consider all statements meaningful unless you have a good argument that they are not, and you have provided no such argument for any statement.

but you would probably object that this is just saying it is not conscious

I would also object by saying that a human is also a "mere machine".

that the roomba's actions do not constitute a coherent whole

I have no idea what "coherent whole" means. Is roomba incoherent is some way?

you know quite well what I am talking about here

At times I honestly don't.

By recognizing that it is similar to the other feelings that I have called pain.

Ok, but that just pushes the problem one step back. There are various feelings similar to stubb... (read more)

0entirelyuseless
The word "mere" in that statement means "and not something else of the kind we are currently considering." When I made the statement, I meant that the roomba is not conscious or aware of what it is doing, and consequently it does not perceive anything, because "perceiving" includes being conscious and being aware. In that way, humans are not mere machines, because they are conscious beings that are aware of what they are doing and they perceive things. The human performs the unified action of "perceiving" and we know that it is unified because we experience it as a unified whole. The roomba just has each part of it moved by other parts, and we have no reason to think that these form a unified whole, since we have no reason to think it experiences anything. In all of these cases, of course, the situation would be quite different if the roomba was conscious. Then it would also perceive what it was doing, it would not be a mere machine, and its actions would be unified. The mind does the work of recognizing similarity for us. We don't have to give a verbal description in order to recognize similarity, much less a third person description, as you are seeking here. You're wrong.

This is an obvious fact about how these words are used and does not need additional support.

Wow, you have no idea how many bold claims you make. To clarify once again, when I ask if intention matters, I'm asking whether the relationship between intention and classification is causal, or just a correlation. You are supposed to know the difference between those two things, and you're supposed to know, in theory, how to figure out which one is relevant in a specific case. This whole "does not need additional support" thing inspires no confidence.... (read more)

0entirelyuseless
It is causal, but not infallible. That's your problem. Everyone else will still call it "the sun," and when you say "the sun didn't rise this morning," your statement will still be false. Exactly. You can make arguments for and against robot pain as well. Meaning does not come from the fact that your arguments are strong or weak: the meaning is there first, even before you have any reason to think that the thing is true.

That's not the problem.

Wow, so you agree with me here? Is it not a problem to you at all, or just not "the" problem?

Yes. "Meaningless" , "immeasurable", "unnecessary" and "non existent" all mean different things.

Invisible unicorns are immeasurable. They do not exist. The assumption that they do exist is unnecessary. The statement "invisible unicorns are purple" is meaningless. The words aren't all exactly the same, but that doesn't mean they aren't all appropriate.

Why did it take you so l

... (read more)
0TheAncientGeek
I might be able to follow an argument based on immeasurability alone, but you have brought in a bunch of different issues without explaining how they interrelate. No, still not from that. You can make any sentence come out true or false by juggling definitions... which is why people distrust argument by definition.

You keep saying various words are meaningless.

It's not that words are meaningless, it's that you sometimes apply them in stupid ways. "Bitter" is a fine word, until you start discussing the "bitterness of purple".

Consciousness is in the dictionary, chairness isn't.

Are dictionary writers the ultimate arbiters of what is real? "Unicorn" is also in the dictionary, by the way.

Consciousness is a concept used by science, chairness isn't.

The physicalist, medical definition of consciousness is used by science. You accuse me of... (read more)
