But an important question still needs to be asked: is sensory perception, and how its input gets organised in our minds, the sole basis of our internal representations of the world, or is there something else that might mitigate any errors that creep in from perception?
Yes, there is: action. Organisms learn about the world not merely by sitting there looking, but by acting and observing the results.
Action still depends on perception. I would agree that there is more, e.g. proprioception, but at the end of the day you are still just learning about the environment by interacting with it and analysing the feedback. This interaction can be action, or it can be simply looking. I agree with your overall point, though: if you want to know something about the territory, you actually have to interact with it. You can't rely on thought experiments or on your maps, as they can be faulty, and the territory could have changed since you last interacted with it and built up your models of it.
Scott Aaronson once posed the question on his blog of whether an entity with only the power to observe, not to act, could discover causal relationships. Despite the eminence of some of the people commenting there, it seems to me that they came up with little of substance.
I think we first have to establish what it means to learn a causal relationship (e.g. we want both "true" and "justified", and agree on the "justified" part), and decide how we are going to formalize causality. If we are Popperian, that would mean we can test it. If we don't believe in an adversarial universe, and like minimum description length or something, then we don't have to test, but we then have to justify our optimism.
We could formalize causation via interventions (as I would), or via something else.
If we are interventionist Popperians, then from the point of view of the entity itself, the answer is "no" (because causal relationships are defined by acting, if we are interventionists, and the entity can never act to verify what it learned as genuinely causal, nor can it verify the assumptions it needs to equate observational constraints with causation). If we are interventionist optimists, the entity could perhaps construct an MDL story to make itself believe it learned a causal relationship. But if it can't act, it just feels like a mental game it's playing with itself, not genuine knowledge.
From the point of view of some hypothetical omniscient being that programmed the entity's universe, the answer could very well be "yes", regardless (if, for example, the entity is running some big-data version of the FCI algorithm, or something like that). The entity will run into certain limits on what is possible, but there exist universes we could program where the entity could extract information about causal structure via conditional independence tests (but only given some assumptions the entity itself cannot test, but that the omniscient universe programmers know happen to hold).
If we are not interventionists, then I don't know. The answer may depend on how you formalize non-interventionist causality.
So: given the way I think about what it means to learn a causal relationship (I am a Popperian interventionist), the answer is: "no from the point of view of the entity itself, but possibly yes from the God's eye view of the creators of the entity's universe."
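To make the "no from the entity's own point of view" part concrete, here is a minimal sketch in Python (my own toy example, not anything from the thread): two linear-Gaussian structural models, one where X causes Y and one where Y causes X, tuned so that their observational joint distributions are identical. A pure observer cannot tell them apart from samples, while an agent that can intervene on X can.

    # Toy illustration (assumed example): two models with the same observational
    # joint distribution over (X, Y) but different causal structure.
    # Only an intervention on X separates them.
    import numpy as np

    rng = np.random.default_rng(0)
    n = 200_000

    def model_x_causes_y(do_x=None):
        x = rng.normal(0, 1, n) if do_x is None else np.full(n, do_x)
        y = 0.8 * x + rng.normal(0, 0.6, n)
        return x, y

    def model_y_causes_x(do_x=None):
        y = rng.normal(0, 1, n)
        x = 0.8 * y + rng.normal(0, 0.6, n)
        if do_x is not None:            # intervening on X cuts the Y -> X arrow
            x = np.full(n, do_x)
        return x, y

    for name, model in [("X -> Y", model_x_causes_y), ("Y -> X", model_y_causes_x)]:
        x, y = model()
        print(name, "observational corr(X, Y):", round(np.corrcoef(x, y)[0, 1], 2))  # both ~0.80

    for name, model in [("X -> Y", model_x_causes_y), ("Y -> X", model_y_causes_x)]:
        _, y = model(do_x=1.0)
        print(name, "mean of Y under do(X=1):", round(float(y.mean()), 2))  # 0.8 vs 0.0

The interventional contrast at the end (0.8 versus 0.0) is exactly the piece of information the entity can never check for itself if it is forbidden to act.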
If we are interventionist
...
If we are not interventionists
The question Scott originally posed is whether we must intervene to discover causes -- that is, whether interventionism (we must) or non-interventionism (we need not) is correct.
The answer may depend on how you formalize non-interventionist causality.
Is there such a formalism? Even with formalisations of both, that would just define the two positions more precisely. There is still the question of which of the two describes the world.
The question Scott originally posed is whether we must intervene to discover causes -- that is, whether interventionism (we must) or non-interventionism (we need not) is correct.
I disagree with your interpretation of Scott's question (that is, I agree with the first part of your sentence, but not the second). But Scott is aware of this thread, he may pipe up himself!
There are two ways interventions come up: definitionally and operationally. We may choose to define causation via interventions or not, and we may need to resort to interventions (or not..) to discover causation. A ton of modern causal inference is precisely about finding ways of avoiding having to resort to interventions while still trying to get at causation defined via interventions.
I also think a big part of Scott's question is about justifying what you believe.
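To make the point above concrete, here is a minimal sketch (an assumed toy example in Python, not anything from the thread) of the standard move: estimating the interventional contrast E[Y | do(X=1)] - E[Y | do(X=0)] from purely observational data by adjusting for an observed confounder Z. Its validity rests on an assumption the data cannot confirm, namely that Z accounts for all the confounding.

    # Toy backdoor adjustment (assumed example): Z confounds X and Y.
    # The naive observational contrast is biased; adjusting for Z recovers the
    # interventional effect, *given* the no-unobserved-confounding assumption.
    import numpy as np

    rng = np.random.default_rng(1)
    n = 500_000

    z = rng.binomial(1, 0.5, n)                    # observed confounder
    x = rng.binomial(1, 0.2 + 0.6 * z)             # treatment, influenced by Z
    y = rng.binomial(1, 0.1 + 0.2 * x + 0.5 * z)   # outcome; true effect of X is 0.2

    naive = y[x == 1].mean() - y[x == 0].mean()    # confounded contrast

    adjusted = sum(                                # sum_z P(z) * (E[Y|X=1,z] - E[Y|X=0,z])
        (z == v).mean()
        * (y[(x == 1) & (z == v)].mean() - y[(x == 0) & (z == v)].mean())
        for v in (0, 1)
    )

    print("naive difference:  ", round(naive, 3))     # noticeably above 0.2
    print("backdoor adjusted: ", round(adjusted, 3))  # close to 0.2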
Is there such a formalism?
Sure. Hume himself had like 3-4 definitions on his own (including his "proto-counterfactual" definition, which is fairly close to how I think about it).
There is still the question of which of the two describes the world.
I am not sure if causality is in the territory or not. I used to think "no," but now I am not sure.
The question Scott originally posed is whether we must intervene to discover causes -- that is, whether interventionism (we must) or non-interventionism (we need not) is correct.
I disagree with your interpretation of Scott's question (that is, I agree with the first part of your sentence, but not the second). But Scott is aware of this thread, he may pipe up himself!
I hadn't noticed that the OP is also called Scott. The Scott I intended to refer to there was Aaronson. ScottL's question doesn't mention causation, but is more general: is perception alone enough to learn everything that we learn? To which I am suggesting that no, action is also required, although I don't have a rigorous argument to that effect.
This is how humans learn causality.
I meant Aaronson, also.
Human babies may learn causality from observations, but:
(a) In this scenario the baby is the Armchairian, and the parents are the omniscient Universe programmers. The parents know the baby is right in some cases, but how does the baby itself know it is right, without actually pushing things around? Armchairians aren't allowed to push things around, either to learn or to verify what they learned.
(b) I think ScottL's question is also about justifying your beliefs (I don't think it is such an easy problem). But I am (perhaps naturally) more interested in Scott Aaronson's question.
The parents know the baby is right in some cases, but how does the baby itself know it is right, without actually pushing things around?
The baby is pushing things around, not being an Armchairian. That's what I intended as the point of the video. Causality is one of the earliest things we learn, even before walking and talking, and we learn it by acting and perceiving the effects.
The question Scott originally posed is whether we must intervene to discover causes -- that is, whether interventionism (we must) or non-interventionism (we need not) is correct.
I don't know about all that. My point was that we don't know anything a priori. That is, everything we do know has a basis either in interaction with the world or in inference from premises. So, if we know something, then at its root this knowledge must have been learnt by us or our ancestors interacting with the world and perceiving the feedback. Even observation is interaction; see the observer effect. The reason we can only learn from intervention is that we cannot observe without intervening. We are not observers of the universe; we are part of the universe. It is impossible for us not to intervene when observing or learning about how the world is. We can, however, lessen the impact of our interventions to some degree.
I disagree with your interpretation of Scott's question (that is, I agree with the first part of your sentence, but not the second). But Scott is aware of this thread, he may pipe up himself!
You guys seem to be asking a different question: namely, whether an entity with only the power to observe, not to act, could discover causal relationships. I don't really know exactly what it means to observe without acting, or what it takes to discover a causal relationship.
Maybe it would help to think about at what level of intervention you can no longer discover causal relationships. At what point in the points below, assuming you trust the results and instruments etc., can you no longer discover causal relationships?
At what point in the points below, assuming you trust the results and instruments etc., can you no longer discover causal relationships?
For the first three, clearly you can. The fourth is the tricky case. The difference between the third and the fourth is that in the third, the "someone" has already learned about causation, so when they read about what was done, it is as good as having done it themselves. At least, they will understand the causal relationships claimed, even if the paper does not contain enough detail for them to replicate it.
In the fourth case, the Armchairian (Scott Aa's name for them) has never interacted with the world, only watched it as if on a television screen or through a read-only internet connection. One can consider two different versions of the Armchairians, depending on whether they have the power to direct their gaze wherever they choose or not (or whether they have the power to type in URLs or not), but in either case it is not clear to me what the answer is.
To give a more concrete example than the parable of the Armchairians, which could be run as a practical causal analysis challenge, could a program whose only input was the Facebook firehose discover causal relationships in the data?
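As for what such a program could even attempt, here is a minimal sketch (an assumed toy example in Python, not a claim about Facebook data): purely observational conditional independence tests can pick out a collider X -> Z <- Y, because X and Y are independent marginally but become dependent once you condition on Z. That orients two edges without any intervention, though only under assumptions (faithfulness, no hidden common causes) that the program cannot verify from the firehose itself.

    # Toy collider detection from observation alone (assumed example).
    # Structure: X -> Z <- Y. X and Y are marginally independent, but
    # conditioning on the common effect Z induces dependence.
    import numpy as np

    rng = np.random.default_rng(2)
    n = 100_000

    x = rng.normal(size=n)
    y = rng.normal(size=n)
    z = x + y + 0.5 * rng.normal(size=n)           # common effect (collider)

    def partial_corr(a, b, c):
        """Correlation of a and b after linearly regressing out c (a crude CI test)."""
        res_a = a - np.polyval(np.polyfit(c, a, 1), c)
        res_b = b - np.polyval(np.polyfit(c, b, 1), c)
        return np.corrcoef(res_a, res_b)[0, 1]

    print("corr(X, Y):    ", round(np.corrcoef(x, y)[0, 1], 2))  # ~0.0: independent
    print("corr(X, Y | Z):", round(partial_corr(x, y, z), 2))    # ~-0.8: dependent given Z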
A good read, though I found it rather bland (talking about writing style). I did not read the original article, but the compression seems OK. More would be appreciated.
(The notes below are pretty much my attempt to summarise the content in this sample chapter from this book. I am posting this in discussion because I don’t think I will get the time, or be bothered enough, to improve upon it, so I am posting it now and hope someone finds it interesting or useful. If you do find it interesting, check out the full chapter, which goes into more detail.)
We don’t experience the world directly, but instead experience it through a series of “filters” that we call our senses. We know that this is true because of cases of sensory loss. An example is Jonathan I., a 65-year-old New York painter who, following an automobile accident, suffered from cerebral achromatopsia as well as the loss of the ability to remember and to imagine colours. He would look at a tomato and, instead of seeing colours like red or green, would see only black and shades of grey. The problem was not that Jonathan’s eyes no longer worked; it was that his brain was unable to process the neural messages for colour.
To understand why Jonathan cannot see colour, we first have to realise that incoming light travels only as far as the back of the eyes. There the information it contains is converted into neural messages in a process called transduction. We call these neural messages “sensations”. These sensations involve only neural representations of stimuli, not the actual stimuli themselves. Sensations such as “red” or “sweet” or “cold” can be said to have been made by the brain. They also only occur when the neural signal reaches the cerebral cortex; they do not occur when you first interact with the stimulus. To us, the process seems so immediate and direct that we are often fooled into thinking that the sensation of “red” is a characteristic of the tomato or that the sensation of “cold” is a characteristic of ice cream. But they are not. What we sense is an electrochemical rendition of the world created by our brain and sensory receptors.
There is another separation between reality as it is and how we sense it to be. Organisms can only sense certain types of stimuli, and only within certain ranges. The minimum amount of physical energy needed to produce a sensory experience is called the absolute threshold for that type of stimulation. It should be noted that a faint stimulus does not abruptly become detectable as its intensity increases. There is instead a fuzzy boundary between detection and non-detection, which means that a person’s absolute threshold is in fact not absolute at all. Instead, it varies continually with our mental alertness and physical condition.
To understand why these thresholds vary, we can turn to signal detection theory. According to signal detection theory, sensation depends on the characteristics of the stimulus, the background stimulation and the detector (the brain). Background stimulation makes it less likely, for example, for you to hear someone calling your name on a busy downtown street than in a quiet park. Your ability to hear them also depends on the condition of your brain, i.e. the detector, and, perhaps, on whether it has been aroused by a strong cup of coffee or dulled by drugs or lack of sleep.
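Signal detection theory also has a standard quantitative side: the detector’s sensitivity is usually summarised by d′, computed from the hit rate and the false-alarm rate. The sketch below (in Python, with made-up example rates; it is not part of the chapter) shows the usual calculation.

    # Standard signal detection theory indices (the example rates are made up).
    # d' measures how well the detector separates "signal" from "noise";
    # the criterion c measures how willing it is to say "yes".
    from scipy.stats import norm

    hit_rate = 0.85          # P(report "yes" | signal present)
    false_alarm_rate = 0.20  # P(report "yes" | signal absent)

    d_prime = norm.ppf(hit_rate) - norm.ppf(false_alarm_rate)
    criterion = -0.5 * (norm.ppf(hit_rate) + norm.ppf(false_alarm_rate))

    print(f"d' = {d_prime:.2f}")             # ~1.88: fairly sensitive detector
    print(f"criterion c = {criterion:.2f}")  # ~-0.10: roughly unbiased responder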
The thresholds also change as similar stimuli continue. This is called sensory adaptation and it refers to the diminishing responsiveness of sensory systems to prolonged stimulation. An example of this would be when you adapt to the feeling of swimming in cold water. Unchanging stimulation generally shifts to the back of people’s awareness, whereas intense or changing stimulation will immediately draw your attention.
So far, we have talked about how the sensory organs filter incoming stimuli and how they can only pick up certain types of stimuli. But there is also something more. We don’t just sense the world; we perceive it as well. The brain, in a process called perception, combines sensations with memories, motives and emotions to create a representation of the world that fits our current concerns and interests. In essence, we impose our own meanings on sensory experience. People obviously have different memories, motives and current emotional states, and this means that we attach different meanings to our sensations, i.e. we have perceptual differences. Two people can look at the same political party or religion and come to starkly different conclusions about it.
The picture below provides a summary of the whole process discussed so far (stimulation to perception):
From stimulation to perception, there are a great number of chances for errors to creep in and for you to either misperceive or even not perceive some types of stimuli at all. These errors are often exacerbated by mistakes made by the brain. The brain, while brilliant and complex, is not perfect. Some of the mistakes it can make include perceptual phenomena such as illusions, constancies, change blindness and inattentional blindness. Illusions, for example, are when your mind deceives you by interpreting a stimulus pattern incorrectly. There are also instances of ambiguity in which some people see a particular colour and others see another. This occurs even with people who are not colour blind. It occurs because the brain strives for colour constancy, which is seeing the same object as having the same colour under varying illumination conditions. But this process of colour constancy is not perfect. It is troubling that, despite all we know about sensation and perception, many people still uncritically accept the evidence of their senses and perceptions at face value.
Another important aspect of perception is that the different types of sensory stimuli, e.g. hearing and vision, need to be integrated. This process of sensory integration can be another source of perceptual phenomena. An example of this is the McGurk effect, in which the auditory component of one sound is paired with the visual component of another sound. This leads to an illusion, i.e. the perception of a third sound which is not actually spoken. You have to really see (or hear) this in action to understand it, so take a look at this short video which demonstrates the effect.
That was a quick summary of perception. But an important question still needs to be asked: is sensory perception, and how its input gets organised in our minds, the sole basis of our internal representations of the world, or is there something else that might mitigate any errors that creep in from perception? This question has been asked by many philosophers. Kant, in particular, drew a distinction between a priori concepts (things that we know before any experience) and a posteriori concepts (things that we know only from experience). He pointed out that there are some things that we can’t know from experience and must instead be born with. The work of Konrad Lorenz, though, pointed out that Kant’s a priori concepts were really evolutionary a posteriori concepts. That is, we didn’t learn them; our ancestors did. We might believe X despite not having seen it with our own eyes, but this is only because our ancestors who believed X survived. If we couldn’t navigate the world because our internal representations of the world were too distant from how the world actually is, then we would have been less likely to survive and reproduce. What this means is that we can have a priori concepts, i.e. innate knowledge, but this innate knowledge is itself based on sensory perceptions of the world, just not your own. The types of a priori knowledge can be differentiated into the naturalistic a priori and the inference-from-premises a priori.