What do you mean by "randomly feels like it"? Maybe he wants some fresh air or something... Then it's a personal motivation, and my answer is (d) not relevant to ethics. The discussion in this article was not, I think, about casual goals like climbing a mountain, but about the goals of your life, the important things to do (maybe I should use the term "finalities" instead). It was a matter of ethics.
If Bob believes that climbing this mountain is good or important while he admits that his only motivation is "randomly feeling like it", then I call his belief absurd.
Maybe I misunderstood your point a bit. I understood:
Now I understand:
In other words, you were talking about shortsightedness when I thought it was delusion?
Ok. I would replace "Being grateful for an action" with "recognizing that an action is important/beneficial". Pursuing a purely gratitude-free goal would mean pursuing a goal that nobody thinks is beneficial or important to do (except you, because you do it), and supposedly nobody ever will. My claim is that such an action is absurd from an ethical (universalist) perspective.
At T1, B is "subjectively true" (I believe that B). However it's not an established truth. From the point of view of the whole society, the result needs replication: what if I was deceiving everyone? At T2, B is controversial. At T3, B is false.
Now, is the status of B changing over time? That's a good question. I would say that the status of B is contextual. B was true at T1 to the extent of the actions I had performed at that time. It was "weakly true" because I had not checked every flaw in my instruments. It became false in the context of T3. Similarly, one could say that Newtonian physics is true in the context of slow speeds and low energies.
I don't think so. Let me clarify that my thoughts are to be understood from an ethical perspective: by "goal" I mean something that deserves to be done, in other words, "something good". I start from the assumption that having a goal supposes thinking that it is somehow good (= something I should do), which is somewhat tautological.
Now I am only suggesting that a goal that does not deserve any gratitude can't be "good" from an ethical point of view.
Moreover, I am not claiming that I am purely seeking gratitude in all my actions.
Let me justify my position.
Gratitude-free actions are absurd from an ethical point of view, because we do not have access to any transcendent and absolute notion of "good". Consequently, we have no way to tell whether an action is good if no one is grateful for it.
If you perform a gratitude-free action, either it is only good for you, in which case you are selfish, which is far from the universal aim of ethics; or you believe in a transcendent notion of "good", together with a divine gratitude, which is a religious position.
Seeking gratitude has nothing to do with selfishness; on the contrary, something usually deserves gratitude if it benefits others. My position is very altruistic.
On the contrary, my view is very altruistic: seeking gratitude is seeking to perform actions that benefit others or society as a whole. Game-theoretic considerations would justify being selfish, which deserves no gratitude at all.
In my view, going from subjective truth to universal (inter-subjective) truth requires agreement between different people, that is, convincing others (or being convinced). I hold a belief because it is reliable for me. If it is reliable for others as well, then they'll probably agree with me. I will convince them.
Ethics is not about predicting perceptions but about guiding actions.
It's absurd from an ethical point of view, as a finality. I was implicitly talking in the context of pursuing "important goals", that is, goals valued on an ethical basis. Abnegation at some level is an important part of most religious doctrines.
Yes in a sense. The pragmatic conception of truth holds that we do not have access to an absolute truth, nor to any system as it is "in itself", but only to our beliefs and representations of systems. All we can do is test our beliefs and the accuracy of our representations.
Within that conception, a belief is true if "it works", that is, if it can be successfully confronted with other established belief systems and serve as a basis for action with the expected results (e.g. scientific inquiry). Incidentally, there is no truth outside our beliefs, and truth is always provisional. A truth could be considered universal if it could convince everyone.
This is a bit of a caricature. I made my statement as simple as possible for the sake of the argument, but I subscribe to the pragmatic theory of truth.
Sorry, but I am only refining the statement I made from the start, which in my view is still perfectly relevant to the material. You don't agree with me; now let's not lose too much time on meta-discussions...
I understand your concern about the problems mentioned in the article, and your feeling that I don't address them. You're right, I don't: my feeling about these problems is that they occur in complex situations where lots of actors are involved, and I am not convinced at all that they result from a lack of motivation or from a problem of unconscious motivation hijacking.
Imagine that in the current discussion, we suddenly realize that we've been writing all that time not to find the truth, but to convince each other (which I think is actually the case). It would be one of those situations where someone like Kaj Sotala would say: "it seems you're deeply motivated to find the truth, but you're only trying to make people think you have the truth (= convince them)". Then my point would be: unless you're cynical, convincing and finding the truth are exactly the same. If you're cynical, you just think short term and ...
I think that defocusing a bit and taking the outside view for a second might be clarifying, so let's not talk about what it is exactly that people do.
Kaj Sotala says that he has identified something which constitutes a major source of problems, with exemplary problems a) - f), all very real problems like failing charities and people being unable to work from home. Then you come and say "there is no problem here," that everything boils down to us just using the wrong definition of motivation (or something). But what about the charities that can't f...
I don't distinguish between seeking social gratitude and having a goal. In my view, sincere motivation is positively correlated with seeking social gratitude. You can draw an analogy with markets if you like: social gratitude is money, motivation is work. If something is worth doing, it will deserve social gratitude. In my view, the author here appears to be complaining that we are working for money and not for free...
I don't distinguish between having a goal and seeking gratitude for that goal; they are exactly the same for me. Something is important if it deserves a lot of gratitude; something is not if it does not. That's all. The "gratitude" part is intrinsic.
If you accept my view, Kaj Sotala's statement is nonsense: it can't "turn out" to be strategic self-deception when we thought we were deeply motivated, because we were seeking gratitude from the start (which is precisely what "being deeply motivated" means). If at some point we discover that we've been looking for gratitude all along, then we don't discover that we've been fooling ourselves; we're only beginning to understand the true nature of any goal.
Let me rephrase.
The assumption that pure gratitude-free goals exist is a myth: pursuing such goals would be absurd. (People who seem to perform gratitude-free actions are often religious people: they actually believe in divine gratitude.)
Therefore social gratitude is an essential component of any goal; it is thus not correlated with a lack of sincere motivation, nor does it "downgrade" the goal to something less important. It's just part of it.
I deny that having a goal as a "strategic feature" is incompatible with being sincerely and deeply motivated. That's my point.
More precisely: either one is consciously seeking gratitude, in which case he/she is cynical, but I think this is rarely the case; or seeking gratitude is only one aspect of a goal that is sincerely pursued (which means that one wants to truly deserve that gratitude). Then there is no problem; the motivation is there.
Once we've imagined the atoms-arranged-chairwise, that's all it is to be a chair. It's analytic. But there's no such conceptual connection between neurons-instantiating-computations and consciousness, which arguably precludes identifying the two.
That's true. The difference between chairs and consciousness is that "chair" is a 3rd-person concept, whereas consciousness is a 1st-person concept. Imagining a world without consciousness is easy, because we never know whether there are consciousnesses in the world or not: consciousness is not an empirical datum, it's something we speculate others have by analogy with ourselves.
Inside your representation, I might be a person, and I do represent myself as a person sometimes.
Acid test (1) and (2): this is where dogma starts.
The problem here is that "I believe P" supposes a representation / a model of P. There must be a pre-existing model prior to applying Cox's theorem to anything. My question is semantic: what does this model rest on? The probabilities you get will depend on the model you adopt, and I am pretty sure that there is no definitive model/conception of anything (see the problem of translation analysed by Quine, for example).
Ok, it depends on what you mean by "information about". My understanding is that we have no information on the nature of reality, which does not mean that we have no information from reality.
I also view philosophy as a meta-science. I think language is relational by nature (e.g. "red" refers to the strong correlation between our respective experiences of red) and is blind to singularity (I cannot explain by means of language what it is like for me to see red; I can only give it a name, which you can understand only if my red is correlated with yours: my singular red cannot be expressed).
Since science is a product of language, its horizon is describing the relational framework of existing things, which are themselves unspeakable. That's exactly what science c...
I define qualia as the elements of my subjective experience. "That sounds obvious" was an understatement. It's more than obvious that qualia are real; it's given, the only truth that does not need to be proven.
Well my opinion is that the confusion between representation and reality is on your side.
Indeed, a scientific model is a representation of reality, not reality itself. It can be found in books or learned at school; it is interpreted. Qualia, on the contrary, are not represented but directly experienced. They are real.
That sounds obvious. No?
Your code is a list of characters in a text file, or a list of bytes in your computer's memory. Only you interpret it as code that interprets something.
I prefer this formulation, because you emphasize the representational aspect. Now, a representation (a conceptualization) requires someone who conceptualizes/represents things. I think that this "useful and clarifying way" simply forgets that a representation is always relative to a subject. The last part of the sentence only expresses your proud ignorance (sorry)...
I think it does.
{"hello", "world"} is a set of lit pixels on my screen, or a list of characters in a text file containing source code, or a list of bytes in my computer's memory, but in any case, there must be an observer for it to be interpreted as a list of strings. The real list of strings only exists inside my representation.
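To make the point concrete, here is a minimal sketch (in Python, with illustrative names of my own choosing): the very same sequence of bytes can be read as a bare list of numbers or as a list of two strings, depending entirely on which interpreter the observer applies to it.

```python
import json

# One and the same sequence of bytes...
raw = b'["hello", "world"]'

# ...read as numbers (the byte values themselves):
as_numbers = list(raw)

# ...read as a JSON document denoting a list of two strings:
as_strings = json.loads(raw.decode("utf-8"))

# Neither reading is "in" the bytes; each is relative to an interpreter.
assert as_numbers[:2] == [91, 34]           # '[' and '"' as byte values
assert as_strings == ["hello", "world"]     # the "list of strings" reading
```

Nothing in the bytes privileges one reading over the other; the "list of strings" appears only once a decoding convention is chosen.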
The downvote corporatist system of this site is extremely annoying. I am proposing a valid and relevant argument. I expect counter-arguments from people who disagree, not downvotes. Why not keep downvotes for not-argumented/irrelevant comments?
Agreed.
What I mean is that the notion of algorithm is always relative to an observer. Something is an algorithm because someone decides to view it as an algorithm. He/she decides what its inputs and outputs are. He/she decides what the relevant scale is for defining what a signal is. All these decisions are arbitrary (say I decide that the text-processing algorithm running on my computer extends to my typing fingers and the "calculation" performed by their molecules: why not? My hand is part of my computer. Does my computer "feel ...
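A toy illustration of this observer-relativity (a sketch in Python; the gate names and voltage labels are my own illustrative choice): the very same physical input/output table implements an AND gate under one labeling of the voltage levels, and an OR gate under the opposite labeling. Which "algorithm" is running depends on the observer's encoding.

```python
# Physical behaviour of a device: output is HIGH only when both inputs are HIGH.
physical_table = {
    ("LOW", "LOW"): "LOW",
    ("LOW", "HIGH"): "LOW",
    ("HIGH", "LOW"): "LOW",
    ("HIGH", "HIGH"): "HIGH",
}

def read_as(table, encoding):
    """Interpret the physical table under a given bit -> voltage-level encoding."""
    decode = {level: bit for bit, level in encoding.items()}
    return {(a, b): decode[table[(encoding[a], encoding[b])]]
            for a in (0, 1) for b in (0, 1)}

# Observer A decides HIGH means 1: the device is an AND gate.
gate_a = read_as(physical_table, {0: "LOW", 1: "HIGH"})
assert gate_a == {(0, 0): 0, (0, 1): 0, (1, 0): 0, (1, 1): 1}

# Observer B decides HIGH means 0: the same device is an OR gate.
gate_b = read_as(physical_table, {0: "HIGH", 1: "LOW"})
assert gate_b == {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 1}
```

The physics is fixed; only the observer's (arbitrary) choice of what counts as 0 and 1 determines which computation the device "performs".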
They may not be obviously wrong, but the important point is that it remains a pure metaphysical speculation and that other metaphysical systems exist, and other people even deny that any metaphysical system can ever be "true" (or real or whatever). The last point is rather consensual among modern philosophers: it is commonly assumed that any attempt to build a definitive metaphysical system will necessarily be a failure (because there is no definitive ground on which any concept rests). As a consequence, we have to rely on pragmatism (as you did in a previous comment). But anyway, the important point is that different approaches exist, and none is a definitive answer.
I don't think relying on algorithms solves the issue, because you still need someone to implement and interpret the algorithm.
I agree with your second point: you can take a pragmatist approach. Actually, that's somewhat how science works. But still, you have not proven in any way that your model is a complete and definitive description of all there is, nor that it can be strictly identified with "reality", and Kant's argument remains valid. It would be more correct to say that a scientific model is a relational model (it describes the relations between things as they appear to observers, and their regularities).
My question is: what does "happiness" rest upon? A probability of what? You need an a priori model of what happiness is in order to measure it (that is, a theory of mind), which you do not have. Verifying your model depends on your model...
The nature of logical reasoning is actually a deep philosophical question...
You know what an algorithm is, but do you know whether you are an algorithm? I am not sure I understand why you need algorithms at all. Maybe your point is: "If you are a human being X that receives an input Y, this allows you to know a nontrivial fact about reality (...)". I tend to agree with that formulation, but again, it presupposes some concepts that do not go without saying, and in particular, it presupposes a realist approach. Idealist philosophers would disagree.
I can unde...
If I am a cognitive algorithm X that receives input Y, I don't necessarily know what an algorithm is, what an input is, and so on. One could argue that all I know is 'Y'. I don't necessarily have any idea of what a "possible reality" is. I might not have a concept of "possibility" nor of "reality".
Your way of thinking presupposes many metaphysical concepts that have been questioned by philosophers, including Kant. I am not saying that this line of reasoning is invalid (I suspect it is a realist approach, which is a fair option). My personal feeling is that Kant is upstream of that line of reasoning.
I agree, and I really doubt philosophers fail to deeply question their own intuitions.
In a sense, science is nothing but experimental philosophy (in a broad sense), and the job of non-experimental philosophy (what we label "philosophy") is to turn any question into an experimental question... But I would say that philosophy remains important as the framework where science and fundamental scientific concepts (truth, reality, substance) are defined and discussed.
I wonder whether the same mechanisms could be involved in conspiracy theorists. Their way of thinking seems very similar. I also suspect a reinforcement mechanism: it becomes more and more difficult for the subject to deny his own beliefs, as doing so would require abandoning large parts of his present (and coherent) belief system, leaving him with almost nothing left.
This could explain why patients are reluctant to accept alternative versions afterwards (such as "you have a brain damage").