If I understand correctly, Gram_Stone is using input here to refer to the organism's short term memory, and reliable algorithm to refer to working memory (unreliable algorithm means no use of working memory), and output to refer to the organism's observable behavior. I'm not entirely certain of how he's distinguishing between input, algorithm, and output in this context, but that's my best guess.
The paragraph is a criticism of certain varieties of "thinking fast, thinking slow" arguments.
A common problem in psychology research is that researchers will conduct a study in which the independent variable is a stimulus in the external environment and the dependent variable is the organism's behavior, and then make a claim about the underlying cognitive process without any evidence supporting that claim. Without collecting data on the process itself, such studies frequently cannot distinguish between different parts of the cognitive process.
So it is not possible to determine from examining an unreliable output whether that lack of reliability was due to an unreliable input or due to an unreliable algorithm.
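To put that point in code (a toy sketch of my own, not anything from the studies under discussion): imagine two pipelines, one feeding a noisy input into an exact algorithm, the other feeding a clean input into a noisy algorithm. Both produce noisy outputs, so output alone can't localize the unreliability.

```python
import random

random.seed(0)

TRUE_VALUE = 5  # the quantity the organism is trying to represent

def reliable_algorithm(x):
    # Deterministic and correct: doubles its input exactly.
    return 2 * x

def unreliable_algorithm(x):
    # The computation itself is noisy.
    return 2 * x + random.gauss(0, 1)

def unreliable_input():
    # A noisy measurement of the true value.
    return TRUE_VALUE + random.gauss(0, 1)

# Pipeline A: unreliable input, reliable algorithm.
outputs_a = [reliable_algorithm(unreliable_input()) for _ in range(1000)]

# Pipeline B: reliable input, unreliable algorithm.
outputs_b = [unreliable_algorithm(TRUE_VALUE) for _ in range(1000)]

# Both output distributions scatter around 10. Inspecting the outputs
# alone cannot tell you which stage introduced the noise.
```

An observer who only sees `outputs_a` and `outputs_b` sees two noisy distributions; the experiment would need to instrument the intermediate stage to tell them apart.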
The key thing to note is that the input in that sentence is not referring to the independent variable. The independent variable is being manipulated in a step before the input.
This was really helpful though. Dual-processing theory has always come across to me as being all over the map in terms of the definition of what is type 1 and what is type 2.
> short term memory
It could be long-term memory too.
> This was really helpful though. Dual-processing theory has always come across to me as being all over the map in terms of the definition of what is type 1 and what is type 2.
Thanks. This is because anyone can call their theory a dual process theory, and it's an even more general term across all of psychology; LWers are really talking about the enormous subset of dual process theories of reasoning, which is why I made the title what it was. And about ten years ago they were in the 'listing common ch...
(This is mostly a summary of Evans (2012); the fifth misconception mentioned is original research, although I have high confidence in it.)
It seems that dual process theories of reasoning are often underspecified, so I will review some common misconceptions about these theories in order to ensure that everyone's beliefs about them are compatible. Briefly, the key distinction (and it seems, the distinction that implies the fewest assumptions) is the amount of demand that a given process places on working memory.
(And if you imagine what you actually use working memory for, a consequence of this is that Type 2 processing always has a quality of 'cognitive decoupling', or 'counterfactual reasoning', or 'imagining ways that things could be different': representations that change dynamically, where in Type 1 processing they remain static; the difference between a cached and a non-cached thought, if you will. When you transform a Rubik's cube in working memory so that you don't have to transform it physically, that is, from the outside, an example of the kind of thing I'm talking about.)
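The cached/non-cached analogy can be made concrete with memoization (my own illustrative sketch, not part of the theory itself): the first call does the step-by-step work, like a Type 2 process; repeat calls merely retrieve a stored answer, like a Type 1 process.

```python
from functools import lru_cache

call_count = 0  # how many times we actually 'deliberate'

@lru_cache(maxsize=None)
def transform(state):
    """Deliberative on the first call, a mere lookup afterwards."""
    global call_count
    call_count += 1          # only incremented when real work happens
    return tuple(sorted(state))

transform((3, 1, 2))  # computed: Type 2-like, does the work
transform((3, 1, 2))  # retrieved: Type 1-like, no recomputation
```

After both calls, `call_count` is 1: the second call placed no demand on the 'working' step, even though the observable output is identical.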
The first common confusion is that Type 1 and Type 2 refer to specific algorithms or systems within the human brain. It is a much stronger proposition, and not a widely accepted one, to assert that the two types of cognition refer to particular systems or algorithms within the human brain, as opposed to particular properties of information processing that we may identify with many different algorithms in the brain, characterized by the degree to which they place a demand on working memory.
The second and third common confusions, and perhaps the most widespread, are the assumptions that Type 1 processes and Type 2 processes can be reliably distinguished, if not defined, by their speed and/or accuracy. The easiest way to reject this is to note that entering a quickly retrieved, unreliable input into a deliberative, reliable algorithm is not the same mistake as entering a quickly retrieved, reliable input into a deliberative, unreliable algorithm. To make a deliberative judgment based on a mere unreliable feeling is a different mistake from experiencing a reliable feeling and arriving at an incorrect conclusion through an error in deliberative judgment. It also seems easier to argue about the semantics of the 'inputs', 'outputs', and 'accuracy' of algorithms running on wetware than about the semantics of their demand on working memory and the life outcomes of the brains that execute them.
The fourth common confusion is that Type 1 processes involve 'intuitions' or 'naivety' and Type 2 processes involve thought about abstract concepts. You might describe a fast-and-loose rule that you made up as a 'heuristic' and naively conclude that it is thus a 'System 1 process', but it would still be the case that you invented that rule by deliberative means, and thus by a Type 2 process. When you applied the rule in the future, it would be by a deliberative process that placed a demand on working memory, not by some behavior based on association or procedural memory, as if by habit. (Which is also not the same as choosing, by association or procedure, to apply the deliberative rule, or as developing a habit or procedural skill that produces the same behavior the deliberative rule originally produced.) In novel situations one must often forego association and procedure and thus use Type 2 processes, which can make it appear as though the key distinction is abstractness, but this is only because there are often no clear associations to be made or procedures to be performed in novel situations. Abstractness is not a necessary condition for Type 2 processes.
The fifth common confusion is that language is the defining characteristic of Type 2 processing. Although language is often involved in Type 2 processing, this is likely a mere correlate of the processes by which we store and manipulate information in working memory, not the defining characteristic per se. To elaborate, we are widely believed to store and manipulate auditory information in working memory by means of a 'phonological store' and an 'articulatory loop', and to store and manipulate visual information by means of a 'visuospatial sketchpad', so we may also consider the storage and processing in working memory of non-linguistic information in auditory or visuospatial form: musical tones, mathematical symbols, or the possible transformations of a Rubik's cube, for example. The linguistic quality of much of the information that we store and manipulate in working memory is probably noncentral to a general account of the nature of Type 2 processes. Conversely, it is obvious that the production and comprehension of language is often an associative or procedural process, not a deliberative one. Otherwise you might still be parsing the first sentence of this article.