An accidental experiment in location memory
I bought a plastic mat to put underneath my desk chair, to protect the wooden floor from having bits of stone ground into it by the chair wheels. But it kept sliding when I stepped onto it, nearly sending me stumbling into my large, expensive, and fragile monitor. I decided to replace the mat as soon as I found a better one.
Before I found a better one, though, I realized I wasn't sliding on it anymore. My footsteps had adjusted themselves to it.
My mind must be too highly trained
I've played various musical instruments for nearly 40 years now, but some simple things remain beyond my grasp. Most frustrating is sight reading while playing piano. Though I've tried for years, I can't read bass and treble clef at the same time. To sight-read piano music, when you see this:
[musical example: four ascending notes shown on a staff]
you need your right hand to read it as C D E F, but your left hand to read it as E F G A. To this day, I can't do it, and I can only learn piano music by learning the treble and bass clef parts separately to the point where I don't rely on the score for more than reminders, then playing them together.
The Limits of My Rationality
As requested, here is an introductory abstract.
The search for bias in the linguistic representations of our cognitive processes serves several purposes in this community. By pruning irrational thoughts, we can potentially affect each other in complex ways. Leaning heavily on cognitivist pedagogy, this essay represents my subjective experience trying to reconcile a perceived conflict between the rhetorical goals of the community and the absence of a generative, organic conceptualization of rationality.
The Story
Though I've only been here a short time, I find myself fascinated by this discourse community. To discover a group of individuals bound together under the common goal of applied rationality has been an experience that has enriched my life significantly. So please understand, I do not mean to insult by what I am about to say, merely to encourage a somewhat more constructive approach to what I understand as the goal of this community: to apply collectively reinforced notions of rational thought to all areas of life.
As I followed the links and read the articles on the homepage, I found myself somewhat disturbed by the juxtaposition of these highly specific definitions of biases to the narrative structures of parables providing examples in which a bias results in an incorrect conclusion. At first, I thought that perhaps my emotional reaction stemmed from rejecting the unfamiliar; naturally, I decided to learn more about the situation.
As I read on, my interests drifted from the rhetorical structure of each article (if anyone is interested I might pursue an analysis of rhetoric further though I'm not sure I see a pressing need for this), towards the mystery of how others in the community apply the lessons contained therein. My belief was that the parables would cause most readers to form a negative association of the bias with an undesirable outcome.
Even a quick skim of the discussions taking place on this site will reveal energetic debate on a variety of topics of potential importance, peppered heavily with accusations of bias. At this point, I noticed that the comments that get voted up tend to be thoughtfully composed, well informed, soundly conceptualized, and appropriately referential. Generally, this is true of the articles as well, as it should be in productive discourse communities. Though I thought it prudent not to read every conversation in absolute detail, I also noticed that the lines of reasoning with the most participation were far more rhetorically complex than the parables' portrayal of bias alone could explain. Sure, the establishment of bias still seemed to be the most commonly used rhetorical device on the forums ...
At this point, I had been following a very interesting discussion on this site about politics. I typically have little or no interest in political theory, but "NRx" vs. "Prog" Assumptions: Locating the Sources of Disagreement Between Neoreactionaries and Progressives (Part 1) seemed so out of place in a community whose political affiliations might best be summarized by the phrase "politics is the mind-killer" that I couldn't help but investigate. More specifically, I was trying to figure out why it had been posted here at all (I didn't take issue with either the scholarship or intent of the article, but the latter wasn't obvious to me, perhaps because I was completely unfamiliar with the coinage "neoreactionary").
On my third read, I made a connection to an essay about the socio-historical foundations of rhetoric. In structure, the essay progressed through a wide variety of specific observations on both the theory and practice of rhetoric in classical Europe, culminating in a well-argued but very unwieldy thesis; somewhere in the middle of the essay, I recall a paragraph that begins with the assertion that every statement has political dimensions. I conveyed this idea as eloquently as I could muster, and received a fair bit of karma for it. And to think that it all began with a vague uncomfortable feeling and a desire to understand!
The Lesson
So you are probably wondering what any of this has to do with rationality, cognition, or the promise of some deeply insightful transformative advice mentioned in the first paragraph. Very good.
Cognition, a prerequisite for rationality, is a complex process; cognition can be described as the process by which ideas form, interact and evolve. Notice that this definition alone cannot explain how concepts like rationality form, why ideas form or how they should interact to produce intelligence. That specific shortcoming has long crippled cognitivist pedagogies in many disciplines -- no matter which factors you believe to determine intelligence, it is undeniably true that the process by which it occurs organically is not well-understood.
More intricate models of cognition traditionally vary according to the sets of behavior they seek to explain; in general, this forum seems to concern itself with the wider sets of human behavior, with a strange affinity for statistical analysis. It also seems as if most of the people here associate agency with intelligence, though this should be regarded as unsubstantiated anecdote; I have little interest in what people believe, but those beliefs can have interesting consequences. In general, good models of cognition that yield a sense of agency have to be able to explain how a mushy organic collection of cells might become capable of generating a sense of identity. For this reason, our discussion of cognition will treat intelligence as a confluence of passive processes that lead to an approximation of agency.
Who are we? What is intelligence? To answer these, or any natural-language questions, we first search for stored solutions to whatever we perceive as the problem, even as we generate our conception of the question as a set of abstract problems from interactions between memories. In the absence of recognizing a pattern that triggers a stored solution, a new solution is generated by processes of association and abstraction. This process may be central to the generation of every rational and irrational thought a human will ever have. I would argue that the phenomenon of agency approximates an answer to the question "who am I?", and that any discussion of consciousness should at least acknowledge how critical natural language use is to universal agreement on any matter. I will gladly discuss this matter further and in greater detail if asked.
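As a loose illustration of that lookup-then-generate pattern (a sketch of my own; the names, data, and crude "association" step are invented, and nothing here is a claim about neural implementation):

```python
# Toy sketch of "stored solutions first, else generate": a solver checks
# memory for a recognized pattern before composing a new answer from
# associations. Illustrative only; all names and data are invented.

stored_solutions = {"2+2": "4", "who am I?": "an approximation of agency"}
associations = {"3+3": ["2+2"], "what is intelligence?": ["who am I?"]}

def solve(problem: str) -> str:
    # 1. Pattern recognition: does a stored solution trigger?
    if problem in stored_solutions:
        return stored_solutions[problem]
    # 2. Otherwise generate by association: adapt the solution of the
    #    nearest remembered problem (crudely, here: reuse it).
    for related in associations.get(problem, []):
        if related in stored_solutions:
            answer = f"(adapted from '{related}') {stored_solutions[related]}"
            stored_solutions[problem] = answer  # abstraction: cache it
            return answer
    return "no stored or associated solution; further abstraction needed"

print(solve("2+2"))  # stored solution
print(solve("3+3"))  # generated via association, then cached
print(solve("3+3"))  # now a stored solution itself
```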
At this point, I feel compelled to mention that my initial motivation for pursuing this line of reasoning stems from the realization that this community discusses rationality in a way that differs somewhat from my past encounters with the word.
Out there, it is commonly believed that rationality develops (in hindsight) to explain the subjective experience of cognition; here we assert a fundamental difference between rationality and this other concept called rationalization. I do not see the utility of this distinction, nor have I found a satisfying explanation of how this distinction operates within accepted models of human learning that does not assume an a priori method of sorting the values which determine what is considered "rational". Thus we find a general dearth of generative models of rational cognition alongside a plethora of techniques for spotting irrational or biased methods of thinking.
I see a lot of discussion on the forums very concerned with objective predictions of the future wherein it seems as if rationality (often of a highly probabilistic nature) is, in many cases, expected to bridge the gap between the worlds we can imagine to be possible and our many somewhat subjective realities. And the force keeping these discussions from splintering off into unproductive pissing about is a constant search for bias.
I know I'm not going to be the first among us to suggest that the search for bias is not truly synonymous with rationality, but I would like to clarify before concluding. Searching for bias in cognitive processes can be a very productive way to spend one's waking hours, and it is a critical element to structuring the subjective world of cognition in such a way that allows abstraction to yield the kind of useful rules that comprise rationality. But it is not, at its core, a generative process.
Let us consider the cognitive process of association (when beliefs, memories, stimuli or concepts become connected to form more complex structures). Without that period of extremely associative and biased cognition experienced during early childhood, we might never learn to attribute the perceived cause of a burn to a hot stove. Without concepts like better and worse to shape our young minds, I imagine many of us would simply lack the attention span to learn about ethics. And what about all the biases that make parables an effective way of conveying information? After all, the strength of a rhetorical argument lies in its appeal to the interpretive biases of its intended audience, not in the relative consistency of the conceptual foundations of that argument.
We need to shift discussions involving bias towards models of cognition more complex than portraying it as simply an obstacle to rationality. In my conception of reality, recognizing the existence of bias seems to play a critical role in the development of more complex methods of abstraction; indeed, biases are an intrinsic side effect of the generative grouping of observations that is the core of Bayesian reasoning.
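A minimal numerical illustration of that last point (a toy example of my own, using Beta-Bernoulli updating): the prior's pseudo-counts are, formally, a bias, yet without them sparse observations could not be grouped into a prediction at all.

```python
from fractions import Fraction

# Beta-Bernoulli updating: the pseudo-counts (a, b) of the Beta prior are
# an explicit inductive bias. With a = b = 1, three hot-stove burns out of
# three touches already yield a strong expectation of "hot".

def posterior_mean(successes: int, failures: int, a: int = 1, b: int = 1) -> Fraction:
    # Mean of Beta(a + successes, b + failures)
    return Fraction(a + successes, a + successes + b + failures)

# A child touches the stove 3 times; it burns 3 times.
print(posterior_mean(3, 0))          # 4/5: strong belief from sparse evidence
# A more skeptical prior (a = b = 10) learns more slowly from the same data:
print(posterior_mean(3, 0, 10, 10))  # 13/23: the bias tempers the update
```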
In short, biases are not generative processes. Discussions of bias are not necessarily useful, rational or intelligent. A deeper understanding of the nature of intelligence requires conceptualizations that embrace the organic truths at the core of sentience; we must be able to describe our concepts of intelligence, our "rationality", such that it can emerge organically as the generative processes at the core of cognition.
The Idea
I'd be interested to hear some thoughts about how we might grow to recognize our own biases as necessary to the formative stages of abstraction alongside learning to collectively search for and eliminate biases from our decision making processes. The human mind is limited and while most discussions in natural language never come close to pressing us to those limits, our limitations can still be relevant to those discussions as well as to discussions of artificial intelligences. The way I see things, a bias free machine possessing a model of our own cognition would either have to have stored solutions for every situation it could encounter or methods of generating stored solutions for all future perceived problems (both of which sound like descriptions of oracles to me, though the latter seems more viable from a programmer's perspective).
A machine capable of making the kinds of decisions considered "easy" for humans might need biases at some point during its journey to the complex and self-consistent methods of decision making associated with rationality. This is a rhetorically complex community, but at the risk of my reach exceeding my grasp, I would be interested in seeing an examination of the Affect Heuristic in human decision making as an allegory for the historic utility of fuzzy values in chess AI.
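For concreteness, here is the kind of "fuzzy value" I have in mind (a toy evaluation function of my own invention; real chess engines are far more elaborate): hand-tuned heuristic weights that are useful long before they are self-consistent or provably correct.

```python
# Toy chess evaluation: hand-tuned heuristic ("fuzzy") weights stand in for
# a full analysis of the position, much as an affect heuristic stands in
# for deliberation. The piece values and mobility bonus are conventional
# rules of thumb, not derived quantities.

PIECE_VALUES = {"P": 1.0, "N": 3.0, "B": 3.0, "R": 5.0, "Q": 9.0}
MOBILITY_BONUS = 0.1  # per legal move; an arbitrary but workable weight

def evaluate(my_pieces: list[str], their_pieces: list[str],
             my_moves: int, their_moves: int) -> float:
    material = sum(PIECE_VALUES.get(p, 0.0) for p in my_pieces) \
             - sum(PIECE_VALUES.get(p, 0.0) for p in their_pieces)
    mobility = MOBILITY_BONUS * (my_moves - their_moves)
    return material + mobility

# Down a pawn but far more active: the heuristic calls it roughly level.
print(evaluate(["Q", "R", "N"], ["Q", "R", "N", "P"], 32, 20))  # 0.2
```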
Thank you for your time, and I look forward to what I can only hope will be challenging and thoughtful responses.
Talking to yourself: A useful thinking tool that seems understudied and underdiscussed
I have returned from a particularly fruitful Google search, with unexpected results.
My question was simple. I was pretty sure that talking to myself aloud makes me temporarily better at solving problems that need a lot of working memory. It is a thinking tool that I find to be of great value, and that I imagine would be of interest to anyone who'd like to optimize their problem solving. I just wanted to collect some evidence on that, make sure I'm not deluding myself, and possibly learn how to enhance the effect.
This might be just lousy Googling on my part, but the evidence is surprisingly unclear and disorganized. There are at least three separate Wiki pages for it. They don't link to each other. Instead they present the distinct models of three separate fields: autocommunication in communication studies, semiotics and other cultural studies; intrapersonal communication ("self-talk" redirects here) in anthropology and (older) psychology; and private speech in developmental psychology. The first is useless for my purpose, the second mentions "may increase concentration and retention" with no source, and the third confirms my suspicion that this behavior boosts memory, motivation and creativity, but it only talks about children.
Google Scholar yields lots of sports-related results for "self-talk" because it can apparently improve the performance of athletes and if there's something that obviously needs the optimization power of psychology departments, it is competitive sports. For "intrapersonal communication" it has papers indicating it helps in language acquisition and in dealing with social anxiety. Both are dwarfed by the results for "private speech", which again focus on children. There's very little on "autocommunication" and what is there has nothing to do with the functioning of individual minds.
So there's a bunch of converging pieces of evidence supporting the usefulness of this behavior, but they're from several separate fields that don't seem to have noticed each other very much. How often do you find that?
Let me quickly list a few ways that I find it plausible to imagine talking to yourself could enhance rational thought.
- It taps the phonological loop, a distinct part of working memory that might otherwise sit idle in non-auditory tasks. More memory is always better, right?
- Auditory information is retained more easily, so making thoughts auditory helps remember them later.
- It lets you commit to thoughts, and build upon them, in a way that is more powerful (and slower) than unspoken thought while less powerful (but quicker) than action. (I don't have a good online source for this one, but Inside Jokes should convince you, and has lots of new cognitive science to boot.)
- System 1 does seem to understand language, especially if it does not use complex grammar - so this might be a useful way for results of System 2 reasoning to be propagated. Compare affirmations. Anecdotally, whenever I'm starting a complex task, I find stating my intent out loud makes a huge difference in how well the various submodules of my mind cooperate.
- It lets separate parts of your mind communicate in a fairly natural fashion, slows each of them down to the speed of your tongue and makes them not interrupt each other so much. (This is being used as a psychotherapy method.) In effect, your mouth becomes a kind of talking stick in their discussion.
All told, if you're talking to yourself you should be more able to solve complex problems than somebody of your IQ who doesn't, although somebody of your IQ with a pen and a piece of paper should still outthink both of you.
Given all that, I'm surprised this doesn't appear to have been discussed on LessWrong. Honesty: Beyond Internal Truth comes close but goes past it. Again, this might be me failing to use a search engine, but I think this is worth more of our attention than it has gotten so far.
I'm now almost certain talking to myself is useful, and I already find hindsight bias trying to convince me I've always been so sure. But I wasn't - I was suspicious, because talking to yourself is an early warning sign of schizophrenia and is frequent in dementia. But in those cases it might simply be an autoregulatory response to failing working memory, not a pathogenetic element. After all, its memory-enhancing effect is what the developmental psychologists say the kids use it for. I do expect social stigma, which is why I avoid talking to myself around uninvolved or unsympathetic people, but my solving of complex problems tends to happen away from those anyway, so that hasn't really been an issue.
So, what do you think? Useful?
Publication: the "anti-science" trope is culturally polarizing and makes people distrust scientists
Paper by the Cultural Cognition Project: The culturally polarizing effect of the "anti-science trope" on vaccine risk perceptions
This is a great paper (indeed, I think many at LW would find the whole site enjoyable). I'll try to summarize it here.
Background: The pro/anti vaccine debate has been hot recently. Many pro-vaccine people often say, "The science is strong, the benefits are obvious, the risks are negligible; if you're anti-vaccine then you're anti-science".
Methods: They showed experimental subjects an article basically saying the above.
Results: When reading such an article, a large number of people did not trust vaccines more, but rather, trusted the American Academy of Pediatrics less.
My thoughts: I will strive to avoid labeling anybody as being "anti-science" or "simply or willfully ignorant of current research", etc., even when speaking of hypothetical third parties on my Facebook wall. This holds for evolution, global warming, vaccines, etc.
///
Also included in the article: references to other research that shows that evolution and global warming debates have already polarized people into distrusting scientists, and evidence that people are not yet polarized over the vaccine issue.
If you intend to read the article yourself: I found it difficult to understand how the authors divided participants into the 4 quadrants (α, β, etc.). I will quote my friend, who explained it for me:
I was helped by following the link to where they first introduce that model.
The people in the top left (α) worry about risks to public safety, such as global warming. The people in the bottom right (δ) worry about socially deviant behaviors, such as could be caused by the legalization of marijuana.
People in the top right (β) worry about both public safety risks and deviant behaviors, and people in the bottom left (γ) don't really worry about either.
Review of studies says you can decrease motivated cognition through self-affirmation
I read this article today and thought LW might find it interesting. The key finding is that in a number of different experiments, simple "self-affirmations" (such as writing about relationships with your friends or something else that makes you feel good about yourself) make people more open to changing their mind in cases where changing their mind would be damaging to their self-image. The proposed explanation is that people need to maintain a certain level of self-worth, and one way they do that is by refusing to accept evidence that would damage their sense of self-worth. But if they have a high enough sense of self-worth, they are less likely to do this. I haven't reviewed any of these studies personally, but the idea makes some sense and sounds pretty easy to try. Hat tip to Dan Keys for putting me onto the idea. I searched LW for "Sherman self-affirmation" and didn't see this discussed anywhere on LW, but I didn't look very hard.
Title: Accepting Threatening Information: Self–Affirmation and the Reduction of Defensive Biases
Authors: David K. Sherman and Geoffrey L. Cohen
Citation details: Current Directions in Psychological Science August 2002 vol. 11 no. 4 119-123
Abstract: Why do people resist evidence that challenges the validity of long–held beliefs? And why do they persist in maladaptive behavior even when persuasive information or personal experience recommends change? We argue that such defensive tendencies are driven, in large part, by a fundamental motivation to protect the perceived worth and integrity of the self. Studies of social–political debate, health–risk assessment, and responses to team victory or defeat have shown that people respond to information in a less defensive and more open–minded manner when their self–worth is buttressed by an affirmation of an alternative source of identity. Self–affirmed individuals are more likely to accept information that they would otherwise view as threatening, and subsequently to change their beliefs and even their behavior in a desirable fashion. Defensive biases have an adaptive function for maintaining self–worth, but maladaptive consequences for promoting change and reducing social conflict.
Key quotes: "Pro-choice partisans and pro-life partisans were presented with a debate between two activists on opposite sides of the abortion dispute….However, this confirmation bias was sharply attenuated among participants who affirmed a valued source of self-worth (by writing about a personally important value, such as their relations with friends)....although all participants left the debate feeling more confident in their beliefs about abortion than they had before, this polarization in attitude was significantly reduced among self-affirmed participants (cf. Lord et al., 1979)." p. 120
"In one study (Cohen et al., 2000), devout opponents and proponents of capital punishment were presented with a persuasive scientific report that contradicted their beliefs about the death penalty’s effectiveness as a deterrent for crime....the responses of participants who received an affirmation of a valued self-identity (by writing about a personally important value, or by being provided with positive feedback on an important skill) proved more favorable.Self affirmed participants were less critical of the reported research, they suspected less bias on the part of the authors, and they even changed their overall attitudes toward capital punishment in the direction of the report they read." p. 121
"In one study, athletes who had just completed an intramural volleyball game assessed the extent to which each of a series of factors contributed to their team’s victory or defeat. As in past research (Lau & Russell, 1980),winners made more internal attributions for their victories than losers did for their defeats. However, among athletes who had reflected on an important value irrelevant to athletics, this self-serving bias was attenuated." p. 122
Baseline of my opinion on LW topics
To avoid repeatedly saying the same things, I'd like to state my opinion on a few topics I expect to be relevant to my future posts here.
You can take it as a baseline or reference for these topics. I do not plan to go into any detail here. I will not state all my reasons or sources. You may ask for separate posts if you are interested. This is really only to provide a context for my comments and posts elsewhere.
If you google me you may find some of my old (but not that off the mark) posts about these positions, e.g. here:
http://grault.net/adjunct/index.cgi?GunnarZarncke/MyWorldView
Now my position on LW topics.
The Simulation Argument and The Great Filter
On The Simulation Argument I definitely go for
"(1) the human species is very likely to go extinct before reaching a “posthuman” stage"
Correspondingly on The Great Filter I go for failure to reach
"9. Colonization explosion".
This is not because I think that humanity is going to self-annihilate soon (though this is a possibility). Instead I hope that humanity will sooner or later come to terms with its planet. My utopia could be like that of the Pacifists (a short story in Analog 5).
Why? Because of essential complexity limits.
This falls into the same range as "It is too expensive to spread physically throughout the galaxy". I know that negative proofs about engineering are notoriously wrong - but that is currently my best guess. Simplified, one could say that the low-hanging fruit has been taken. I have empirical evidence on multiple levels to support this view.
Correspondingly there is no singularity because progress is not limited by raw thinking speed but by effective aggregate thinking speed and physical feedback.
What could prove me wrong?
If a serious discussion tore my well-prepared arguments and evidence to shreds (quite possible).
At the very high end a singularity might be possible if a way could be found to simulate physics faster than physics itself.
AI
Basically I don't have the least problem with artificial intelligence or artificial emotion being possible. Philosophical note: I don't care on what substrate my consciousness runs. Maybe I am simulated.
I think strong AI is quite possible and maybe not that far away.
But I also don't think that this will bring the singularity, because of the complexity limits mentioned above. Strong AI will speed up some cognitive tasks with compound interest - but only until the physical feedback level is reached, or a social feedback level, if AI should be designed that way.
One temporary dystopia that I see is that cognitive tasks are out-sourced to AI and a new round of unemployment drives humans into depression.
Two AI approaches I have tried myself:
- A simplified layered model of the brain; deep learning applied to free inputs (I cancelled this when it became clear that it was too simple and low-level, and thus computationally inefficient)
- A nested semantic graph approach with propagation of symbol patterns representing thought (only concept; not realized)
I'd really like to try a 'synthesis' of these where microstructure-of-cognition like activation patterns of multiple deep learning networks are combined with a specialized language and pragmatics structure acquisition model a la Unsupervised learning of natural languages. See my opinion on cognition below for more in this line.
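A very rough sketch of what I mean by that synthesis (toy code of my own; all names are invented, and this is nothing like a working system): a perception stage emits active symbols, and a semantic graph then propagates activation over them and is reinforced when symbols co-activate.

```python
# Toy synthesis sketch: perceive() stands in for the deep-learning stage
# (which symbols fire for a stimulus?); a weighted semantic graph then
# propagates activation between symbols and is reinforced Hebbian-style
# when symbols co-activate. Purely illustrative; all names are invented.

SYMBOLS = ["snow", "white", "cold", "fear"]
weights = {(a, b): 0.1 for a in SYMBOLS for b in SYMBOLS if a != b}

def perceive(stimulus: dict[str, float], threshold: float = 0.5) -> set[str]:
    # Which symbols does the (stand-in) recognition stage activate?
    return {s for s, strength in stimulus.items() if strength > threshold}

def propagate(active: set[str]) -> dict[str, float]:
    # Semantic stage: spread activation along graph edges.
    activation = {s: 1.0 for s in active}
    for (a, b), w in weights.items():
        if a in active:
            activation[b] = activation.get(b, 0.0) + w
    return activation

def reinforce(active: set[str], rate: float = 0.05) -> None:
    # Hebbian update: co-active symbols strengthen their edge.
    for a in active:
        for b in active:
            if a != b:
                weights[(a, b)] += rate

for _ in range(20):  # repeated exposure to snowy scenes
    reinforce(perceive({"snow": 0.9, "white": 0.8, "fear": 0.1}))

print(propagate({"snow"})["white"])  # the 'snow' -> 'white' link has grown
```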
What could prove me wrong?
On the low end: if it takes longer than I think it would take me given unlimited funding.
On the high end: if I'm wrong about the complexity limits mentioned above.
Conquering space
Humanity might succeed at leaving the planet but at high costs.
By leaving the planet I mean becoming permanently independent of Earth, but not necessarily leaving the solar system any time soon (speculating on that is beyond my confidence interval).
I think it more likely that life leaves the planet - that could be:
- artificial intelligence with a robotic body - think of a Curiosity rover 2.0 (most likely).
- intelligent life-forms bred for life in space - think of magpies, which are already smart, small, fast-reproducing, and capable of 3D navigation.
- actual humans in suitable protective environments with small autonomous biospheres, harvesting asteroids or Mars.
- 'cyborgs' - humans altered or bred to better deal with certain problems in space like radiation and missing gravity.
- other - including misc ideas from science fiction (least likely or latest).
For most of these (esp. those depending on breeding) I'd estimate a time-range of a few thousand years.
What could prove me wrong?
If I'm wrong on the singularity aspect too.
If I'm wrong on the timeline, I will likely be long dead in any case, except for the first option (AI with a robotic body), which I expect to see in my lifetime.
Cognitive Base of Rationality, Vagueness, Foundations of Math
How can we as humans create meaning out of noise?
How can we know truth? How do we come to know that 'snow is white' when snow is white?
Cognitive neuroscience and artificial learning seem to point toward two aspects:
Fuzzy learning aspect
Correlated patterns of internal and external perception are recognized (detected) via multiple specialized layered neural nets (basically). This yields qualia like 'spoon', 'fear', 'running', 'hot', 'near', 'I'. These are basically symbols, but they are vague with respect to meaning because they result from a recognition process that optimizes for matching, not for correctness or uniqueness.
Semantic learning aspect
On top of the qualia builds the semantic part, which takes the qualia and, instead of acting directly on them (as animals normally do), finds patterns in their activation that are related not to immediate perception or action but at most to memory. These may form new qualia/symbols.
The use of these patterns is that they allow us to capture concepts which are detached from reality (detached insofar as they do not need a stimulus connected in any way to perception).
Concepts like ('cry-sound' 'fear') or ('digitalis' 'time-forward' 'heartache') or ('snow' 'white') or - and that is probably the domain of humans: (('one' 'successor') 'two') or (('I' 'happy') ('I' 'think')).
Concepts
The interesting thing is that learning works on these concepts just as it does on the underlying neural nets. Thus concepts that are reinforced by positive feedback will stabilize, and with them the qualia they derive from (if any) will also stabilize.
For certain pure concepts the usability of the concept hinges not on any external factor (like "how does this help me survive") but on social feedback about structure and the process of the formation of the concepts themselves.
And this is where we arrive at such concepts as 'truth' or 'proposition'.
These are no longer vague - not because they are represented differently in the brain than other concepts, but because they stabilize toward maximized validity (that is, stability due to the absence of external factors, possibly with a speed-up due to social pressure to stabilize). I have written elsewhere that everything that derives its utility not from some external use but from internal consistency could be called math.
And that is why math is so hard for some: if you never gained a sufficient core of self-consistent stabilized concepts, and/or the usefulness derives not from internal consistency but from external ("teacher's password") usefulness, then it will just not scale to more concepts. (The reason why science works at all is that science values internal consistency so highly; there is little more dangerous to science than allowing other incentives.)
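A toy numerical contrast (my own illustration, with invented numbers) of external "teacher's password" feedback versus internal consistency:

```python
import random

# Toy contrast: a concept trained against noisy external feedback keeps
# drifting, while one trained against a fixed internal-consistency
# criterion converges and stays put. Numbers are invented.

random.seed(1)

def train(feedback, steps: int = 200, rate: float = 0.1) -> list[float]:
    value, trace = 0.0, []
    for _ in range(steps):
        value += rate * (feedback() - value)  # move toward current feedback
        trace.append(value)
    return trace

external = train(lambda: random.gauss(1.0, 0.5))  # noisy external target
internal = train(lambda: 1.0)                     # fixed internal target

def spread(trace: list[float]) -> float:
    # Jitter over the last 50 steps: how settled is the concept?
    return max(trace[-50:]) - min(trace[-50:])

print(f"external spread: {spread(external):.3f}")   # stays noticeably jittery
print(f"internal spread: {spread(internal):.6f}")   # ~0: stabilized
```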
I really hope that this all makes sense. I haven't summarized this for quite some time.
A few random links that may provide some context:
http://www.blutner.de/NeuralNets/ (this is about the AI context we are talking about)
http://www.blutner.de/NeuralNets/Texts/mod_comp_by_dyn_bin_synf.pdf (research applicable to the above in particular)
http://c2.com/cgi/wiki?LeibnizianDefinitionOfConsciousness (funny description of levels of consciousness)
http://c2.com/cgi/wiki?FuzzyAndSymbolicLearning (old post by me)
http://grault.net/adjunct/index.cgi?VaguesDependingOnVagues (ditto)
Note: Details about the modelling of the semantic part are mostly in my head.
What could prove me wrong?
Well, 'wrong' is too strong here. This is just my model and it is not really that concrete. Probably a longer discussion with someone more experienced with AI than I am (and there should be many here) would suffice to rip this apart (provided that I found time to prepare my model suitably).
God and Religion
I wasn't indoctrinated as a child. My truly loving mother is a baptized Christian who lives her faith without being sanctimonious. She always hoped that I would receive my epiphany. My father has a scientifically influenced personal Christian belief.
I can imagine a God consistent with science on the one hand, and on the other hand with free will, the soul, the afterlife, the Trinity, and the Bible (understood as a mix of non-literal word of God and history tale).
I mean, it is not that hard if you can imagine a timeless (simulation of the) universe. If you are God and have whatever plan for Earth, but empathize with your creations, then it is not hard to add a few more constraints to certain aggregates called existences or 'person lives'. Constraints that realize free will in the sense of 'not subject to the whole universe-plan satisfaction algorithm'.
Surely not more difficult than consistent time-travel.
And souls and afterlife should be easy to envision for any science fiction reader familiar with super intelligences.
But why? Occam's razor applies.
There could be a God. And his promise could be real. And it could be a story seeded by an empathizing God - but also a 'human' God with his own inconsistencies and moods.
But it also could be that this is all a fairy tale run amok in human brains searching for explanations where there are none. A mass delusion. A fixated meme.
Which is right? It is difficult to put probabilities on stories. I see that I have slowly moved from 50/50 agnosticism to tolerant atheism.
I can't say that I wait for my epiphany. I know too well that my brain will happily find patterns when I let it. But I have encouraged others to pray for me.
My epiphanies - the aha feelings of clarity that I did experience - have all been about deeply connected patterns building on other such patterns building on reliable facts mostly scientific in nature.
But I haven't lost my morality. It has deepened and widened. I have become even more tolerant (I hope).
So if God does, against all odds, exist, I hope he will understand my doubts, weigh my good deeds, and forgive me. You could tag me a godless Christian.
What could prove me wrong?
On the atheist side I could be moved a bit further by more proofs of religion being a human artifact.
On the theist side there are two possible avenues:
- If I had an unsearched-for epiphany - a real one where I couldn't say I was hallucinating, e.g. a major consistent insight or a proof of God.
- If I became convinced that the singularity is possible. In that case I'd need to update toward being in a simulation, as per Simulation Argument option 3, and then the next most likely explanation for all this God business is actually some imperfect being running the simulation.
Thus I'd like to close with this corollary to the simulation argument:
Arguments for the singularity are also (weak) arguments for theism.
[LINK] Being No One (~50 min talk on the self-model in your brain)
Summary: This is a ~50 minute talk (plus some introductory ado) by Thomas Metzinger on the problem of the experiencing, subjective self (why it exists, what it even means, how it arises). Not to be too cliché, but he attacks the problem by dissolving the question, and the solution he arrives at sounds a lot like how an algorithm feels from inside.
Using several examples from neuroscience (particularly the many illuminating failure modes of the brain), he explains how the brain models the self and its place in the center of experiential space. He discusses the limitations of our access to our own cognitive systems, and how those limitations force us to be naive realists.
I hesitate to summarize further, because there is a lot of value in hearing the entire argument. (I will say that he gets a little cute at the end, but that doesn't detract from the excellent content.)
Link: Being No One on Youtube.
(Normally I think LWers dislike the talk format because it's inherently time-consuming, but I'd say this one is information dense and well worth your time.)
Entangled with Reality: The Probabilistic Inferential Learning Model (Link)
If you flip a light switch and nothing happens, there are a couple of possible explanations. One is that something has gone wrong in the external world — maybe the bulb has burned out. Alternatively, you may have made a mistake, perhaps flipping the wrong switch.
Infants can integrate prior knowledge with statistical data to make these distinctions at a very young age [...] 16-month-old infants can, based on very little information, make accurate judgments of whether a failed action is due to their own mistake or to circumstances beyond their control.
[...] very young infants can quickly learn basic principles about how the world works, then use those rules to interpret the statistical evidence they see. [...] “They can use very, very sparse evidence because they have these rich prior beliefs and they can use that to make quite sophisticated, quite accurate inferences about the world.”
In one condition, babies saw a toy that played music when one experimenter pushed a button on the toy but failed when a second experimenter tried, suggesting that the failure was due to the agent. In another condition, the button sometimes activated the toy and sometimes failed for each of the two experimenters, suggesting that something was wrong with the toy.
[...]
Depending on the circumstances of the experiment, the babies did respond differently, indicating that they were able to weigh evidence for each explanation and react accordingly. Infants who saw evidence suggesting the agent had failed tried to hand the toy to their parents for help, suggesting the babies assumed the failure was their own fault. Conversely, babies who saw evidence suggesting that the toy was broken were more likely to reach for a new toy (a red one that was always within reach).
[...] 16-month-olds could use very limited evidence (the distribution of outcomes across the experimenters’ actions) to infer the source of failure and decide whether to ask for help or seek another toy. That finding lends strong support to the probabilistic inferential learning model.
“What the finding here seems to be is kids are smart, but the way in which they’re smart is really tracking statistics. They’re really good learners,” she says.
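As a toy Bayesian rendering of the inference the babies face (my own sketch with invented numbers, not the authors' model), compare how likely each outcome pattern is under "the presser is at fault" versus "the toy is flaky":

```python
# Toy model: which hypothesis better explains the outcome pattern?
# H_agent: the toy works; each presser is either skilled (succeeds with 0.9)
#          or unskilled (succeeds with 0.1), 50/50 a priori.
# H_toy:   the toy is flaky; every press succeeds with 0.5, whoever presses.
# All probabilities here are invented for illustration.

def p_agent(outcomes_by_presser):
    # Marginalize over each presser's unknown skill.
    p = 1.0
    for outcomes in outcomes_by_presser:
        per_skill = []
        for p_success in (0.9, 0.1):
            like = 1.0
            for success in outcomes:
                like *= p_success if success else 1 - p_success
            per_skill.append(like)
        p *= 0.5 * sum(per_skill)
    return p

def p_toy(outcomes_by_presser):
    n = sum(len(o) for o in outcomes_by_presser)
    return 0.5 ** n

def posterior_agent(data):
    a, t = p_agent(data), p_toy(data)
    return a / (a + t)  # uniform prior over the two hypotheses

# Condition 1: one experimenter always succeeds, the other always fails.
print(posterior_agent([(True, True), (False, False)]))  # ~0.73: blame the presser
# Condition 2: both experimenters sometimes succeed, sometimes fail.
print(posterior_agent([(True, False), (True, False)]))  # ~0.11: blame the toy
```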
Link: web.mit.edu/newsoffice/2011/assigning-fault-0624.html
PDF: web.mit.edu/~hyora/www/Hyo/Research_files/GweonSchulzCogSci2010.pdf
The study, by MIT cognitive scientists Laura Schulz and Hyowon Gweon, appears in the June 24 issue of Science.
Simple embodied cognition hacks
I've known that the mind can be affected by the body's actions, but I often forget this when sitting in my computer chair for long stretches, and when standing and interacting in social situations I've subconsciously cultivated a passive, non-confrontational, minimally interactive posture. Simple physical actions, though, can act as a mild nootropic in certain situations.
Article with citations: 10 Simple Postures that Boost Performance
Article summary:
1. Take a powerful pose to feel powerful
2. Tense muscles for willpower
3. Cross arms for persistence
4. Lie down for insight
5. Nap for cognitive performance, vigour and wakefulness
6. Hand gestures for persuasion
7. Gesture to self for comprehension and memory
8. Smile for happiness
9. Mimic to empathize
10. Imitate for comprehension and prediction
Does cognitive therapy encourage bias?
"Cognitive behavioral therapy" (CBT) is a catch-all term for a variety of therapeutic practices and theories. Among other things, it aims to teach patients to modify their own beliefs. The rationale seems to be this:
(1) Affect, behavior, and cognition are interrelated such that changes in one of the three will lead to changes in the other two.
(2) Affective problems, such as depression, can thus be addressed in a roundabout fashion: modifying the beliefs from which the undesired feelings stem.
So far, so good. And how does one modify destructive beliefs? CBT offers many techniques.
Alas, included among them seems to be motivated skepticism. For example, consider a depressed college student (the example below is quoted from Beck's textbook). She and her therapist decide that one of her bad beliefs is "I'm inadequate." They want to replace that bad belief with a more positive one, namely, "I'm adequate in most ways (but I'm only human, too)." Their method is to do a worksheet comparing evidence for and against the old, negative belief. Listen to their dialog:
[Therapist]: What evidence do you have that you're inadequate?
[Patient]: Well, I didn't understand a concept my economics professor presented in class today.
T: Okay, write that down on the right side, then put a big "BUT" next to it...Now, let's see if there could be another explanation for why you might not have understood the concept other than that you're inadequate.
P: Well, it was the first time she talked about it. And it wasn't in the readings.
Thus the bad belief is treated with suspicion. What's wrong with that? Well, see what they do about evidence against her inadequacy:
T: Okay, let's try the left side now. What evidence do you have from today that you are adequate at many things? I'll warn you, this can be hard if your screen is operating.
P: Well, I worked on my literature paper.
T: Good. Write that down. What else?
(pp. 179-180; ellipsis and emphasis both in the original)
When they encounter evidence for the patient's bad belief, they investigate further, looking for ways to avoid inferring that she is inadequate. However, when they find evidence against the bad belief, they just chalk it up.
This is not how one should approach evidence...assuming one wants correct beliefs.
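A toy calculation with made-up numbers shows how the asymmetry skews the tally: if each negative item gets a second look for "another explanation" while positive items are accepted at face value, the worksheet's bottom line diverges from an even-handed one.

```python
# Made-up example: 10 daily events, half genuinely "evidence of inadequacy",
# half not. Even-handed counting vs. the worksheet's one-sided scrutiny,
# where each negative item has a 70% chance of being explained away.

negative_events = 5   # e.g., didn't understand the lecture
positive_events = 5   # e.g., worked on the paper

even_handed_ratio = positive_events / (positive_events + negative_events)

explain_away = 0.7    # invented: fraction of negatives re-attributed elsewhere
surviving_negatives = negative_events * (1 - explain_away)
one_sided_ratio = positive_events / (positive_events + surviving_negatives)

print(f"even-handed: {even_handed_ratio:.0%} of evidence says 'adequate'")  # 50%
print(f"one-sided:   {one_sided_ratio:.0%} of evidence says 'adequate'")    # 77%
```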
So why does Beck advocate this approach? Here are some possible reasons.
A. If beliefs are keeping you depressed, maybe you should fight them even at the cost of a little correctness (and of the increased habituation to motivated cognition).
B. Depressed patients are already predisposed to find the downside of any given event. They don't need help doubting themselves. Therefore, therapists' encouraging them to seek alternative explanations for negative events doesn't skew their beliefs. On the contrary, it helps to bring the depressed patients' beliefs back into correspondence with reality.
C. Strictly speaking, this motivated cognition does not lead to false beliefs because beliefs of the form "I'm inadequate," along with its more helpful replacement, are not truth-apt. They can't be true or false. After all, what experiences do they induce believers to anticipate? (If this were the rationale, then what would the sense of the term "evidence" be in this context?)
What do you guys think? Is this common to other CBT authors as well? I've only read two other books in this vein (Albert Ellis and Robert A. Harper's A Guide to Rational Living and Jacqueline Persons' Cognitive Therapy in Practice: A Case Formulation Approach) and I can't recall either one explicitly doing this, but I may have missed it. I do remember that Ellis and Harper seemed to conflate instrumental and epistemic rationality.
Edit: Thanks a lot to Vaniver for the help on link formatting.