All of quen_tin's Comments + Replies

I wonder if the same mechanisms could be involved in conspiracy theorists. Their way of thinking seems very similar. I also suspect a reinforcement mechanism: it becomes more and more difficult for the subject to deny his own beliefs, as it would require abandoning large parts of his present (and coherent) belief system, leaving him with almost nothing left.

This could explain why patients are reluctant to accept alternative versions afterwards (such as "you have brain damage").

5scav
It seems to me that many people who believe extremely improbable conspiracy theories may well have undiagnosed brain damage. But you probably couldn't get most of them to agree to come in for a brain scan.

What do you mean by "randomly feels like it"? Maybe he wants some fresh air or something... Then it's a personal motivation, and my answer is (d) not relevant to ethics. The discussion in this article was not, I think, about casual goals like climbing a mountain, but about the goals in your life, the important things to do (maybe I should use the term "finalities" instead). It was a matter of ethics.

If Bob believes that climbing this mountain is good or important while he admits that his only motivation is "randomly feeling like it", then I call his belief absurd.

Maybe I misunderstood a bit your point. I understood:

  • "I thought I wanted to work for a great cause, but it turned out I only wanted to be the kind of person who works for a great cause"

Now I understand:

  • "I really wanted to work for a great cause, but it turned out all my actions were directed toward giving the impression, in the short-term, that I was"

In other words, you were talking about shortsightedness when I thought it was delusion?

0Kaj_Sotala
Yes, I think you could put it that way.

Ok. I would replace "being grateful for an action" with "recognizing that an action is important/beneficial". Pursuing a pure gratitude-free goal would mean pursuing a goal that nobody thinks is beneficial or important to do (except you, because you do it), and supposedly nobody ever will. My claim is that such an action is absurd from an ethical (universalist) perspective.

4TimS
I don't understand what you mean by "absurd." Bob tells you that he is going to climb a boring and uninteresting mountain because he randomly feels like it. There's nothing to see there that couldn't be seen elsewhere, and everyone else thinks that climbing that mountain is pointless. Omega verifies that Bob has no other motivation for climbing the mountain. Would you say that Bob's desire to climb the mountain is (a) mentally defective (i.e. insane), (b) immoral, (c) impossible, (d) not relevant to your point, or (e) something else?

At T1, B is "subjectively true" (I believe that B). However, it's not an established truth. From the point of view of the whole society, the result needs replication: what if I was deceiving everyone? At T2, B is controversial. At T3, B is false.

Now, is the status of B changing over time? That's a good question. I would say that the status of B is contextual. B was true at T1 to the extent of the actions I had performed at that time. It was "weakly true" because I had not checked every flaw in my instruments. It became false in the context of T3. Similarly, one could say that Newtonian physics is true in the context of slow speeds and low energies.

0TheOtherDave
OK, thanks for clarifying; I think I understand your view now.

I don't think so. Let me clarify that my thoughts are to be understood from an ethical perspective: by "goal" I mean something that deserves to be done, in other words, "something good". I start from the assumption that having a goal supposes thinking that it's somehow something good (= something I should do), which is kind of tautological.

Now I am only suggesting that a goal that does not deserve any gratitude can't be "good" from an ethical point of view.

Moreover, I am not proclaiming that I am purely seeking gratitude in all my actions.

0TimS
We seem to be having a definitional problem. Perhaps if you taboo the word gratitude, then we might understand your position better.

Let me justify my position.

Gratitude-free actions are absurd from an ethical point of view, because we do not have access to any transcendent and absolute notion of "good". Consequently, we have no way to tell whether an action is good if no one is grateful for it.

If you perform a gratitude-free action, either it's only good for you, in which case you're selfish, and that's far from the universal aim of ethics; or you believe in a transcendent notion of "good", together with a divine gratitude, which is a religious position.

Seeking gratitude has nothing to do with selfishness; on the contrary, something usually deserves gratitude if it benefits others. My position is very altruistic.

0Vladimir_Nesov
The error in reasoning is analogous.

On the contrary, my view is very altruistic: seeking gratitude is seeking to perform actions that benefit others or the whole society. Game-theoretic considerations would justify being selfish, which does not deserve gratitude at all.

In my view, going from subjective truth to universal (inter-subjective) truth requires agreement between different people, that is, convincing others (or being convinced). I hold a belief because it is reliable for me. If it is reliable for others as well, then they'll probably agree with me. I will convince them.

0TheOtherDave
So, at the risk of caricaturing your view again, consider the following scenario: At time T1, I observe some repeatable phenomenon X. For the sake of concreteness, suppose X is my underground telescope detecting a new kind of rock formation deep underground that no person has ever before seen... that is, I am the discoverer of X. At time T2, I publish my results and show everyone X, and everyone agrees that yes, there exists such a rock formation deep underground. If I've understood you correctly, you would say that if B is the belief that there exists such a rock formation deep underground, then at T1 B is "subjectively true," prior to T1 B doesn't exist at all, and at T2 B is "inter-subjectively or universally true". Is that right?

Let's call NOT(B) the denial of B -- that is, NOT(B) is the belief that such rock formations don't exist. At times between T1 and T2, when some people believe B and others believe NOT(B) with varying degrees of confidence, what is the status of B in your view? What is the status of NOT(B)? Are either of those beliefs true? And if I never report X to anyone else, then B remains subjectively true, but never becomes inter-subjectively true. Yes?

Now suppose that at T3, I discover that my tools for scanning underground rock formations were flawed, and upon fixing those tools I no longer observe X. Suppose I reject B accordingly. I report those results, and soon nobody believes B anymore. On your view, what is the status of B at T3? Is it still intersubjectively true? Is it still subjectively true? Is it true at all? Does knowing the status of B at T3 change your evaluation of the status of B at T2 or T1?

Ethics is not about predicting perceptions but about guiding actions.

quen_tin-40

It's absurd from an ethical point of view, as a finality. I was implicitly talking in the context of pursuing "important goals", that is, goals valued on an ethical basis. Abnegation at some level is an important part of most religious doctrines.

0TimS
Is the following a reasonable paraphrase of your position:
2lessdazed
What prediction about the world can you make from these beliefs? What would be less - or more - surprising to you than to those with typical beliefs here?

Yes, in a sense. The pragmatic conception of truth holds that we do not have access to an absolute truth, nor to any system as it is "in itself", but only to our beliefs and representations of systems. All we can do is test our beliefs and the accuracy of our representations.

Within that conception, a belief is true if "it works", that is, if it can be successfully confronted with other established belief systems and serve as a basis for action with expected results (e.g. scientific inquiry). Incidentally, there is no truth outside our beliefs, and truth is always temporary. A truth could be considered universal if it could convince everyone.

3TheOtherDave
I'm entirely on board with endorsing beliefs that can successfully serve as a basis for action with expected results by calling them "true," and on board with the whole "we don't have access to absolutes" thing. I am not on board with endorsing beliefs as "true" just because I can convince other people of them. You seem to be talking about both things at once, which is why I'm confused. Can you clarify what differences you see (if any) between "it works/it serves as a reliable basis for action" on the one hand, and "it can convince people" on the other, as applied to a belief, and why those differences matter (if they do)?

This is a bit of a caricature. I made my statement as simple as possible for the sake of the argument, but I subscribe to the pragmatic theory of truth.

2lessdazed
"They fuck you up, count be wrong" Kid in The Wire, Pragmatist Special Episode when asked how he could keep count of how many vials of crack were left in the stash but couldn't solve the word problem in his math homework.
4TheOtherDave
I made my example extreme to make it easy for you to confirm or refute. But given your refutation, I honestly have no idea what you mean when you suggest that the only proper definition of truth is what convinces the most people in the long run. It sure sounds like you're saying that the truth about a system is a function of people's beliefs about that system rather than a function of the system itself.

Sorry, but I am only refining the statement I made from the start, which in my view is still perfectly relevant to the material. You don't agree with me; now let's not lose too much time on meta-discussions...

I understand your concern about the problems mentioned in the article, and your feeling that I don't address them. You're right, I don't: my feeling about these problems is that they occur in complex situations where lots of actors are involved, and I am not at all convinced that they result from a lack of motivation or a problem of unconscious motivation hijacking.

quen_tin-30

Imagine that in the current discussion, we suddenly realize that we've been writing all this time not to find the truth, but to convince each other (which I think is actually the case). It would be one of those situations where someone like Kaj Sotala would say: "it seems you're deeply motivated to find the truth, but you're only trying to make people think you have the truth (= convince them)". Then my point would be: unless you're cynical, convincing and finding the truth are exactly the same. If you're cynical, you just think short term and ... (read more)

Fleisch100

I think that defocussing a bit and taking the outside view for a second might be clarifying, so let's not talk about what it is exactly that people do.

Kaj Sotala says that he has identified something which constitutes a major problem source, with exemplary problems a) - f), all very real problems like failing charities and people being unable to work from home. Then you come, and say "there is no problem here," that everything boils down to us just using the wrong definition of motivation (or something). But what's with the charities that can't f... (read more)

4lessdazed
You think he would make the mistake of thinking there is only one motivation behind each human action?
0TheOtherDave
Just to clarify: consider two competing theories T1 and T2 about what will happen to the Earth's surface after all people have died. You would argue that if T1 is extremely popular among living people prior to that time, and T2 is unpopular, then that's all we need to know to conclude that T1 is more true than T2. Further, if all other competing theories are even less popular than T2, then you would argue further that T1 is true and all other competing theories false. What actually happens to the Earth's surface is completely irrelevant to the truth of T1. Have I understood you?
quen_tin-20

I don't draw a distinction between seeking social gratitude and having a goal. In my view, sincere motivation is positively correlated with seeking social gratitude. You can make an analogy with markets if you want: social gratitude is money, motivation is work. If something is worth doing, it will deserve social gratitude. In my view, the author here appears to be complaining that we are working for money and not for free...

I don't draw a distinction between having a goal and seeking gratitude for that goal; they're exactly the same for me. Something is important if it deserves a lot of gratitude; something is not if it does not. That's all. The "gratitude" part is intrinsic.

If you accept my view, Kaj Sotala's statement is nonsense: it can't turn out to be strategic self-deception when we thought we were deeply motivated, because we're seeking gratitude from the start (which is precisely what "being deeply motivated" means). If at one point we discover that we've been looking for gratitude all that time, then we don't discover that we've been fooling ourselves; we're only beginning to understand the true nature of any goal.

5Kaj_Sotala
Like Wei Dai said - the core problem (at least in my case) wasn't in the prestige-seeking by itself, it was in the cached and incorrect thoughts about what would lead to prestigious results, and the fact that those cached thoughts hijacked the reasoning process. If I had stopped to really think about whether the actions made any sense, I should have realized that such actions wouldn't lead to prestige, they would lead to screwing up (in the first and second example, at least). But instead I just went with the first cliché of prestige that my brain associated with this particular task. If I had actually thought about it, I would have realized that there were better ways of both achieving the goal and getting prestige... but because my mind was so focused on the first cliché of prestige that came up, I didn't want to think about anything that would have suggested I couldn't do it. I subconsciously believed that if I didn't get prestige this way, I wouldn't get it in any other way either, so I pushed away any doubts about it.
quen_tin-30

Let me rephrase.

The assumption that there would exist pure gratitude-free goals is a myth: pursuing such goals would be absurd. (People who seem to perform gratitude-free actions are often religious people: they actually believe in divine gratitude.)

Therefore social gratitude is an essential component of any goal, and thus it is not correlated with a lack of sincere motivation, nor does it "downgrade" the goal to something less important. It's just part of it.

3Vladimir_Nesov
See Fake Selfishness.
2Randolf
I'm afraid you are making a very strong statement with hardly any evidence to support it. You merely claim that people who pursue gratitude-free goals are often religious people (source?) and that such goals are a myth and absurd. (Why?) I, for one, don't understand why such a goal would necessarily be absurd. Also, I can imagine that even if I were the only person in the world, I would still pursue some goals.
1wedrifid
That doesn't follow. Degree of sincerity and degree of social gratitude may well be correlated. The fact that motivations are seldom pure doesn't change that. It just makes the relationship more grey.

I deny that having a goal as a "strategic feature" is incompatible with being sincerely and deeply motivated. That's my point.

More precisely: either one is consciously seeking gratitude, in which case he/she is cynical, but I think this is rarely the case; or seeking gratitude is only one aspect of a goal that is sincerely pursued (which means that one wants to deserve that gratitude for real). Then there is no problem; the motivation is there.

7Fleisch
Then you're not talking about the same thing as Kaj Sotala. He talks about all the cases where it seems to you that you are deeply motivated, but the goal turns out to be, or gets turned into, nothing beyond strategic self-deception. Your point may be valid, but it is about something other than what his post is about.
2Fleisch
This is either a very obvious rationalization, or you don't understand Kaj Sotala's point, or both. The problem Kaj Sotala described is that people have lots of goals, and important ones too, simply as a strategic feature, and they are not deeply motivated to do something about them. This means that most of us who came together here because we think the world could really be better will in all likelihood not achieve much, because we're not deeply motivated to do something about the big problems. Do you really think there's no problem at hand? Then that would mean you don't really care about the big problems.

Once we've imagined the atoms-arranged-chairwise, that's all it is to be a chair. It's analytic. But there's no such conceptual connection between neurons-instantiating-computations and consciousness, which arguably precludes identifying the two.

That's true. The difference between chairs and consciousness is that "chair" is a 3rd-person concept, whereas consciousness is a 1st-person concept. Imagining a world without consciousness is easy, because we never know whether or not there are consciousnesses in the world - consciousness is not an empirical datum; it's something we speculate others have by analogy with ourselves.

Inside your representation, I might be a person, and I do represent myself as a person sometimes.

0Jonathan_Graehl
"... and words will never hurt me" :)
0Jonathan_Graehl
Who cares?

Acid test (1) and (2): this is where dogma starts.

-1Broggly
I get the problem with (2), although mostly because I haven't thought about quantum mechanics enough to have an opinion, but (1) is no more dogma than "DNA is transcribed to mRNA which is then translated as an amino acid sequence". There are lots of good reasons to investigate the actual likelihood of the null and alternative hypotheses rather than just assuming it's about 95% likely it's all just a coincidence. Of course, until this becomes fairly standard, doing so would mean turning your paper into a meta-analysis as well as the actual experiment, which is probably hard work and fairly boring.

The problem here is that "I believe P" supposes a representation / a model of P. There must be a pre-existing model prior to using Cox's theorem on something. My question is semantic: what does this model rest on? The probabilities you will get will depend on the model you adopt, and I am pretty sure that there is no definitive model/conception of anything (see the problem of translation analysed by Quine, for example).

Ok, it depends what you mean by "information about". My understanding is that we have no information on the nature of reality, which does not mean that we have no information from reality.

0TheAncientGeek
Suggestion: knowledge of what a thing is in itself is like information that is not coded in any particular scheme.
1Tyrrell_McAllister
I agree that we get information from reality. And I think that we agree that our confidence that we get information from reality is far less murky than our concept of "the nature of reality". Kant, being a product of his times, doesn't seem to think this way, though. Maybe, if you explained the modern information-theoretic notion of "information" to Kant, he would agree that we get information about external reality in that sense. But I don't know. It's hard to imagine what a thinker like Kant would do in an entirely different intellectual environment from the one in which he produced his work. I'm inclined to think that, for Kant, the noumena are something to which it is not even possible to apply the concept of "having information about".
quen_tin-40

I also view philosophy as a meta-science. I think language is relational by nature (e.g. "red" refers to the strong correlation between our respective experiences of red) and is blind to singularity (I cannot explain by means of language what it is like for me to see red; I can only give it a name, which you can understand only if my red is correlated with yours - my singular red cannot be expressed).

Since science is a product of language, its horizon is describing the relational framework of existing things, which are unspeakable. That's exactly what science c... (read more)

0Jonathan_Graehl
To your 3 paragraphs: 1: Yes - we assume that words mean the same thing to others when we use them, and it's actually quite tricky to know when you've succeeded in communicating meaning. 2: "with special relativity, space/time referentials are relative to an observer, etc." - this is rather sad and makes me think you're trolling. What does this have to do with language? Nothing. 3: Your belief that we can't describe things in certain ways has you preaching, instead of trying to discover what your interlocutor actually means. "which would make of us an epiphenomenon" - so what? It sounds like you're prepared to derail any conversation by insisting everyone remind themselves that these are PEOPLE saying and thinking these things. Or maybe, more reasonably, you think that everyone ought to have a position about why they aren't constantly saying "I think ...", and you'll only derail when they refuse to admit that they're making an aesthetic choice.
quen_tin-20

I define qualia as the elements of my subjective experience. "That sounds obvious" was an understatement. It's more than obvious that qualia are real; it's given. It is the only truth that does not need to be proven.

0Jack
Okay... well what does it mean to give meaning to something? My claim is that I am a (really complex) code of sorts and that I interpret things in basically the same way code does. Now it often feels like this description is missing something, and that's the problem of consciousness/qualia, for which I, like everyone else, have no solution. But "interpretation is a first-person concept" doesn't let us represent humans. You were disputing someone's claim that 'the universe is an algorithm'... why isn't that reason enough to identify one possible disadvantage? Otherwise you're just saying "Na-ahhhh!" I'm really bewildered by this and imagine you must have read someone else and taken their position to be mine. I'm a straightforward Quinean ontological relativist, which is why I paraphrased the original claim in terms of ideal representation and dropped the 'is'. I was just trying to explain the claim, since it didn't seem like you were understanding it - I didn't even make the statement in question (though I do happen to think the algorithm approach is the best thing going, I'm not confident that that's the end of the story). But I think we're bumping up against competing conceptions of what philosophy should be. I think philosophy is a kind of meta-science which expands and clarifies the job of understanding the world. As such, it needs to find a way of describing the subject in the language of scientific representation. This is what the cognitive science end of philosophy is all about. But you want to insist on the subject as fundamental - as far as I'm concerned that's just refusing to let philosophy/science do its thing.
quen_tin-20

Well my opinion is that the confusion between representation and reality is on your side.

Indeed, a scientific model is a representation of reality - not reality. It can be found inside books or learned at school; it is interpreted. On the contrary, qualia are not represented but directly experienced. They are real.

That sounds obvious. No?

0FAWS
Not at all. What you call "qualia" could be the combination of a mental symbol, the connections and associations this symbol has and various abstract entities. When you experience experiencing such a "quale" the actual symbol might or might not be replaced with a symbol for the symbol, possibly using a set of neural machinery overlapping with the set for the actual symbol (so you can remember or imagine things without causing all of the involuntary reactions the actual experience causes)
0twanvl
quen_tin-20

Your code is a list of characters in a text file, or a list of bytes in your computer's memory. Only you interpret it as code that interprets something.

0Jack
What does it mean to 'interpret' something? Edit: or rather, what does it mean for me to interpret something, 'cause I know exactly what it means for code to do it.
0twanvl
Do you have some links to this evidence, or studies that come to this conclusion?
2Jack
It may not suggest this to your satisfaction but it certainly suggests it remotely (and the mathematical model involves counterfactual dependencies of qualia, not just correlations). What does it mean to say that the universe is composed of qualia? That sounds like an obvious confusion between representation and reality.

I prefer this formulation, because you emphasize the representational aspect. Now a representation (a conceptualization) requires someone who conceptualizes/represents things. I think that this "useful and clarifying way" just forgets that a representation is always relative to a subject. The last part of the sentence only expresses your proud ignorance (sorry)...

0Jack
What proud ignorance? I haven't proudly asserted anything (I'm not among your downvoters). My point is, if you dispute this metaphysics you need to explain what the disadvantages of it are and you haven't done that which is what is frustrating people.
quen_tin-20

I think it does.

{"hello", "world"} is a set of lighted pixels on my screen, or a list of characters in a text file containing source code, or a list of bytes in my computer's memory, but in any case, there must be an observer for them to be interpreted as a list of strings. The real list of strings only exists inside my representation.

1Jack
Pretty sure I can write code that makes these same interpretations.
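Jack's reply above invites a concrete illustration. As a minimal sketch (the variable names and the decoding scheme are my own illustration, not anything from the thread), the same bytes in memory can be mechanically "interpreted" in several ways once a convention is fixed:

```python
# The same raw bytes, read under three different conventions.
raw = b'{"hello", "world"}'  # "a list of bytes in your computer's memory"

# Convention 1: a list of integers (byte values).
as_bytes = list(raw)

# Convention 2: a string of characters, under an ASCII decoding scheme.
as_text = raw.decode("ascii")

# Convention 3: a list of strings, under an ad-hoc parsing convention.
as_strings = [token.strip(' "') for token in as_text.strip("{}").split(",")]

print(as_bytes[:4])   # [123, 34, 104, 101]
print(as_strings)     # ['hello', 'world']
```

Whether this counts as interpretation without an observer, or merely as an observer's conventions frozen into code, is exactly the point under dispute in the thread.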
quen_tin-10

The downvote corporatist system of this site is extremely annoying. I am proposing a valid and relevant argument. I expect counter-arguments from people who disagree, not downvotes. Why not reserve downvotes for unargued or irrelevant comments?

8Tyrrell_McAllister
I'm really curious: What work is the word "corporatist" doing in this sentence? In what sense is the downvote system "corporatist"?
2jimrandomh
The guideline is to upvote things you want to see more of, and downvote things you want to see less of. That leaves room for interpretation about where the two quality thresholds should be, but in practice they're both pretty high and I think that's a good thing. There are a lot of things that could be wrong with a comment besides being irrelevant or not being argued. In this case, I think the problem is arguing one side of a confusing question rather than trying to clarify or dissolve it.
3Vladimir_M
Your above comment could be phrased better (it makes a valid point in a way that can be easily misinterpreted as proposing some mushy-headed subjective relativism), but I agree that people downvoting it are very likely overconfident in their own understanding of the problem. My impression is that the concept of "algorithm" (and "computation" etc.) is dangerously close to being a semantic stop sign on LW. It is definitely often used to underscore a bottom line without concern for its present problematic status.
quen_tin-20

What I mean is that the notion of algorithm is always relative to an observer. Something is an algorithm because someone decides to view it as an algorithm. She/he decides what its inputs are and what its outputs are. She/he decides what the relevant scale is for defining what a signal is. All these decisions are arbitrary (say I decide that the text-processing algorithm that runs on my computer extends to my typing fingers and the "calculation" performed by their molecules - why not? My hand is part of my computer. Does my computer "feel ... (read more)

1Jack
So I agree that whether or not an observer views something as an algorithm is, in fact, contingent. But the claim is that people and the universe are in fact algorithms. To put it in pragmatic language: representing the universe as an algorithm and its components as subroutines is a useful and clarifying way of conceptualizing that universe relative to competing views, and has no countervailing disadvantages relative to other ways of conceptualizing the universe.
2jimrandomh
"Algorithm" is a type; things can be algorithms in the same sense that 5 is an integer and {"hello","world"} is a list. This does not depend on the observer, or even the existence of an observer.
quen_tin-30

They may not be obviously wrong, but the important point is that it remains pure metaphysical speculation, that other metaphysical systems exist, and that some people even deny that any metaphysical system can ever be "true" (or real or whatever). The last point is fairly consensual among modern philosophers: it is commonly assumed that any attempt to build a definitive metaphysical system will necessarily fail (because there is no definitive ground on which any concept rests). As a consequence, we have to rely on pragmatism (as you did in a previous comment). But anyway, the important point is that different approaches exist, and none is a definitive answer.

5Jack
You need to actually explain your point and not just keep repeating it.
0Jonathan_Graehl
So you're not a person?
0Jonathan_Graehl
Unless you're making a use-mention distinction (and why would you?), I don't see your point. An algorithm can be realized in a mechanism. Are you saying that he should say "you can be an implementation of an algorithm" instead?
4cousin_it
The question whether algorithms "exist" is related to the larger question of whether mathematical concepts "exist". (The former is a special case of the latter.) Many people on LW take seriously the "mathematical multiverse" ideas of Tegmark and others, which hypothesize that abstract mathematical concepts are actually all that exists. I'm not sure what to think about such ideas, but they're not obviously wrong, because they've been subjected to very harsh criticism from many commenters here, yet they're still standing. The closest I've come to a refutation is the pheasant argument (search for "pheasant" on this site), but it's not as conclusive as I'd like. I think it's very encouraging that we've come to a concrete disagreement at last! ETA: I didn't downvote you, and don't like the fact that you're being downvoted. A concrete disagreement is better than confused rhetoric.
1jimrandomh
No, an algorithm can exist inside another algorithm as a regularity, and evidence suggests that the universe itself is an algorithm.
quen_tin-30

I don't think relying on algorithm solves the issue, because you still need someone to implement and interpret the algorithm.

I agree with your second point: you can take a pragmatist approach. Actually, that's a bit how science works. But still, you did not prove in any way that your model is a complete and definitive description of all there is, nor that it can be strictly identified with "reality", and Kant's argument remains valid. It would be more correct to say that a scientific model is a relational model (it describes the relations between things as they appear to observers, and their regularities).

1cousin_it
You can be the algorithm. The software running in your brain might be "approximately correct by design", a naturally arising approximation to the kind of algorithms I described in previous comments. I cannot examine its workings in detail, but sometimes it seems to obtain correct results and "move in harmony with Bayes" as Eliezer puts it, so it can't be all wrong.
quen_tin-30

My question is: what does "happiness" rest upon? A probability of what? You need an a priori model of what happiness is in order to measure it (that is, a theory of mind), which you do not have. Verifying your model depends on your model...

3Tyrrell_McAllister
You argued that "I believe P with probability 0.53" might be as meaningless as "I am 53% happy". It is a valid response to say, "Setting happiness aside, there actually is a rigorous foundation for quantifying belief—namely, Cox's theorem."
0jimrandomh
Cox's theorem. Probability reduces to set measure, which requires nothing but a small set of mathematical axioms.
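The claim that probability reduces to set measure can be made concrete with a minimal sketch. The sample space, events, and `prob` helper below are invented for illustration, not part of any comment above; they just show probability as a normalized counting measure satisfying the additivity axiom:

```python
from fractions import Fraction

def prob(event, space):
    """Probability as a normalized counting measure over a finite sample space."""
    return Fraction(len(event & space), len(space))

space = frozenset(range(1, 7))   # one fair six-sided die
A = frozenset({2, 4, 6})         # event "even"
B = frozenset({4, 5, 6})         # event "at least 4"

# Inclusion-exclusion: P(A or B) = P(A) + P(B) - P(A and B)
assert prob(A | B, space) == prob(A, space) + prob(B, space) - prob(A & B, space)
print(prob(A | B, space))        # 2/3
```

Nothing here depends on a theory of mind; the axioms alone fix how the numbers must combine, which is the force of the Cox/measure-theoretic answer.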

The nature of logical reasoning is actually a deep philosophical question...

You know what an algorithm is, but do you know whether you are an algorithm? I am not sure I understand why you need algorithms at all. Maybe your point is: "If you are a human being X that receives an input Y, this allows you to know a nontrivial fact about reality (...)". I tend to agree with that formulation, but again, it presupposes some concepts that do not go without saying, and in particular a realist approach. Idealist philosophers would disagree.


1cousin_it
This is why I talked about algorithms. When a human being says "I am a human being", you may quibble about it being "observational" or "apriori" knowledge. But algorithms can actually have apriori knowledge coded in, including knowledge of their own source code. When such an algorithm receives inputs, it can make conclusions that don't rely on "realist" or "idealist" philosophical assumptions in any way, only on coded apriori knowledge and the inputs received. And these conclusions would be correct more or less by definition, because they amount to "if reality contains an instance of algorithm X receiving input Y, then reality contains an instance of algorithm X receiving input Y".

Your second paragraph seems to be unrelated to Kant. You just point out that our reasoning is messy and complex, so it's hard to prove trustworthy from first principles. Well, we can still consider it "probably approximately correct" (to borrow a phrase from Leslie Valiant), as jimrandomh suggested. Or maybe skip the step-by-step justifications and directly check your conclusions against the real world, like evolution does. After all, you may not know everything about the internal workings of a car, but you can still drive one to the supermarket. I can relate to the idea that we're still in the "stupid driver" phase, but this doesn't imply the car itself is broken beyond repair.
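The "coded-in a priori knowledge" move can be sketched in a few lines. The name `SELF_DESC` and the wording of the returned fact are hypothetical choices made for this illustration; the point is only that the self-description is hard-coded data, not something inferred from observation:

```python
# Toy algorithm whose own description is a priori knowledge, coded in as data.
SELF_DESC = "algorithm X"

def run(observation):
    # Upon receiving any input, the algorithm can assert a (near-tautological)
    # fact about whatever reality it is embedded in: that this reality contains
    # an instance of SELF_DESC receiving this very input.
    return f"reality contains an instance of {SELF_DESC} receiving input {observation!r}"

print(run("Y"))
```

The conclusion is correct "more or less by definition", as the comment says, precisely because it asserts nothing beyond the conjunction of the coded self-description and the received input.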
0jimrandomh
All of those questions have known answers, but you have to take them on one at a time. Most of them go away when you switch from discrete (boolean) reasoning to continuous (probabilistic) reasoning.

If I am a cognitive algorithm X that receives input Y, I don't necessarily know what an algorithm is, what an input is, and so on. One could argue that all I know is 'Y'. I don't necessarily have any idea of what a "possible reality" is; I might not even have a concept of "possibility" or of "reality".

Your way of thinking presupposes many metaphysical concepts that have been questioned by philosophers, including Kant. I am not saying that this line of reasoning is invalid (I suspect it is a realist approach, which is a fair option). My personal feeling is that Kant's critique sits upstream of that line of reasoning.

0cousin_it
But I do know what an algorithm is. Can someone be so Kantian as to distrust even self-contained logical reasoning, not just sensations? In that case how did they come to be a Kantian?
  • That is also what the linked article seems to entail. The statement I quoted, as I understand it, says that all the information we have about reality is the result of "some cognitive algorithm" (= the representations that appear to us, provided by our senses)
  • The map is certainly a kind of information about the territory (though we cannot know this with certainty). Strictly speaking, Kant does not say we have no information about reality; he says we cannot know whether we do.
1Tyrrell_McAllister
I don't think that Kant makes the distinction between "knowing" and "having information about" that you and I would make. If he doesn't outright deny that we have any information about the world beyond our senses, he certainly comes awfully close. On A380, Kant writes, And, on A703/B731, he writes, (Emphasis added. These are from the Guyer–Wood translation.)
1cousin_it
If you are a cognitive algorithm X that receives input Y, this allows you to "know" a nontrivial fact about "reality" (whatever it is): namely, that it contains an instance of algorithm X that receives input Y. The same extends to probabilistic knowledge: if in one "possible reality" most instances of your algorithm receive input Y and in another "possible reality" most of them receive input Z, then upon seeing Y you come to believe that the former "possible reality" is more likely than the latter. This is a straightforward application of LW-style thinking, but it didn't occur to Kant as far as I know.
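The probabilistic step cousin_it describes is an ordinary Bayesian update over "possible realities". The 0.9/0.1 likelihoods and the uniform prior below are made-up numbers for illustration only:

```python
# Two "possible realities": in A, 90% of instances of your algorithm receive
# input Y; in B, only 10% do. Start from a uniform prior over A and B.
prior = {"A": 0.5, "B": 0.5}
likelihood_Y = {"A": 0.9, "B": 0.1}

# Bayes: P(reality | saw Y) is proportional to P(saw Y | reality) * P(reality)
unnorm = {r: prior[r] * likelihood_Y[r] for r in prior}
total = sum(unnorm.values())
posterior = {r: p / total for r, p in unnorm.items()}

print(posterior)   # seeing Y shifts belief toward reality A (about 0.9 vs 0.1)
```

Upon seeing Y, the observer comes to consider reality A nine times as likely as B, which is exactly the "straightforward application of LW-style thinking" in the comment above.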

I agree, and I really doubt philosophers fail to deeply question their own intuitions.

quen_tin-10

In a sense, science is nothing but experimental philosophy (in a broad sense), and the job of non-experimental philosophy (what we label "philosophy") is to turn every question into an experimental question... But I would say that philosophy remains important as the framework in which science and fundamental scientific concepts (truth, reality, substance) are defined and discussed.

0prase
Not universally. It's hard to find experiments in mathematics.