I have already written all of the posts in this sequence, although I may make edits to later ones in response to feedback on earlier ones, and it's not impossible that someone will ask me something that seems to indicate I should write an additional post.
This preparation sounds great. Thank you for taking such care with the writing, and with providing this introduction. The idea of thorough, regulated introspection is new to me, and I'm looking forward to hearing from somebody who's put a lot of thought into it.
A site where people (1) do deep original thinking, then (2) spend considerable time and effort to write accessibly about it, and (3) refine the ideas through civil discussion: all of these things are so rare that the combination of them on this site makes it the best philosophy/discussion forum I've ever been a part of.
The ABC's of Luminosity.
Surely you mean The RGB's of Luminosity. Ahem.
I like that you're including forward links in your sequence. (I still think LW ought to automatically include adjacent-post-by-date-order links, too.)
I actually have things that start with A, B, and C, and I didn't even have to contrive too hard.
An alief is an independent source of emotional reaction which can coexist with a contradictory belief. For example, the fear felt when a monster jumps out of the darkness in a scary movie is based on the alief that the monster is about to attack you, even though you believe that it cannot.
Searching for alief and belief together brought up this relevant PDF.
Thanks - just learning that concept has actually appreciably increased my (self) understanding.
In case it isn't obvious to people: the name is a pun. If there are "b"-liefs, there must be "a"-liefs. One way to think about an alief is as a kind of proto-belief.
I would assume that cesire is a modified version of desire, possibly a tendency to act to further a certain cause even if you desire something else.
At the time that I encountered rationalist fiction, I thought it was interesting but not especially relevant.
Then I skimmed the Sequences and realized that I was already working out a concept extremely similar to this one, under a different name but with the same methods and goals. This convinced me that at least some people in this subculture knew what they were talking about.
Encountering a more developed concept of luminosity that resembled my previous concept of "radical self-knowledge" also gave me a good place to link to when explaining the idea to the uninitiated, and better keywords to search with when looking for books and articles. (It's called heuristics and biases, not structural brain quirks...)
I have independently discovered and used similar techniques to increase happiness*. I also frequently draw comments for being unusually self-aware.
Alicorn, thank you for writing this sequence. I like not feeling like the lone dissenter, however effective the methods actually are.
* There was previously another statement here that turned out to be extremely premature. 6-10-12
This sequence preview looks definitely promising...
...and, to a noob (that is, a me in the grip of Mind Projection Fallacy) screams "WEIRD SELF-HELP CULT" in huge neon letters. Anyone else notice this?
To a first approximation, all non-trivial advice on messing with the workings of your own head sounds weird; and self-help has a bad reputation because most of the people who consume it are losers, not winners looking to win harder. Also, honestly, there are weirder, cultier things on this site; anti-deathism, for one.
The rest of the sequence looks like it will be excellent. I think evidential introspection is a wonderful topic for this site.
FWIW, this is more commonly known as "cognitive behavioural therapy", with focus on "schema therapy".
I just reread these and they're great! I didn't think much of them at the time, but I seem to have internalized them and actually fixed some problems in my life as a result.
Thanks!
Brilliant idea for a series! I spend a lot of time thinking about this: trying to understand my thoughts and, consequently, hack them.
It's really interesting how much variation there is in people's ability to comprehend the origin of their thoughts. It's also surprising how little control, or desire for control, some people have over their decisions. This certainly seems like something that can be learnt and changed over time; I've seen significant improvements myself over the past 12 months without many external environmental changes.
The main hurdle I run up against is confidence in my conclusions - introspection can't be scientific by definition. I find it really difficult to measure improvement over time. Definitely interested to see how you deal with this!
introspection can't be scientific by definition
What you observe via introspection is not accessible to third parties, yes.
But you use those observations to build models of yourself. Those models can be made explicit and communicated to others. And they make predictions about your future behavior, so they can be tested.
Rachel had nominated Leah to test the pack's range, and Leah had run all the way to Canada (but not near Denali, thankfully). There was no noticeable delay, static, or loss of fidelity to the telepathy.
This is just begging for more tests! ;)
This looks like an interesting subject! Introspection is a bit of a difficult research assistant, but in some cases, it's the best that we have.
A minor point, you write that
Luminosity, as I'll use the term, is self-awareness
and also that the term 'luminosity' is already in use in a related but different sense. Would it then not be clearer to simply call it 'self-awareness'? Or something else, say 'lucidity' (I'm sure there's something better), if you want to diverge from what's normally meant by self-awareness.
Anyway, looking forward to the rest of the sequence.
I think it doesn't hurt to have a term that calls up not only the notion of self-awareness, but also the attitude that Alicorn is creating about it. It will also help indicate the coherence of the sequence.
I love the standard that LessWrong.com sets for philosophy, and will be extremely pleased if this sequence can meet that standard on such an important topic!
Meta-cognition is the standard term for "luminosity". The Wikipedia entry might be an interesting read. I have done a lot of mind hacking, myself. :)
If you gain root, do release the source code for your patches. You might think you're just making some improvements, but... after a while, too many new improvements can become more like a new human operating system. You can become so different that people will not be able to understand you anymore.
Re-arranging your consciousness is serious business. Don't take it lightly. Aside from the social consequences, there are also system design pitfalls.
The following posts may be useful background material: Sorting Out Sticky Brains; Mental Crystallography; Generalizing From One Example
I took the word "luminosity" from "Knowledge and its Limits" by Timothy Williamson, although I'm using it in a different sense than he did. (He referred to "being in a position to know" rather than actually knowing, and in his definition, he doesn't quite restrict himself to mental states and events.) The original ordinary-language sense of "luminous" means "emitting light, especially self-generated light; easily comprehended; clear", which should put the titles into context.
Luminosity, as I'll use the term, is self-awareness. A luminous mental state is one that you have and know that you have. It could be an emotion, a belief or alief, a disposition, a quale, a memory - anything that might happen or be stored in your brain. What's going on in your head? What you come up with when you ponder that question - assuming, nontrivially, that you are accurate - is what's luminous to you. Perhaps surprisingly, it's hard for a lot of people to tell. Even if they can identify the occurrence of individual mental events, they have tremendous difficulty modeling their cognition over time, explaining why it unfolds as it does, or observing ways in which it's changed. With sufficient luminosity, you can inspect your own experiences, opinions, and stored thoughts. You can watch them interact, and discern patterns in how they do that. This lets you predict what you'll think - and in turn, what you'll do - in the future under various possible circumstances.
I've made it a project to increase my luminosity as much as possible over the past several years. While I am not (yet) perfectly luminous, I have already realized considerable improvements in subsidiary skills like managing my mood, hacking into some of the systems that cause akrasia and other non-endorsed behavior, and simply being less confused about why I do and feel the things I do and feel. I have some reason to believe that I am substantially more luminous than average, because I can ask people what seem to me to be perfectly easy questions about what they're thinking and find them unable to answer. Meanwhile, I'm not trusting my mere impression that I'm generally right when I come to conclusions about myself. My models of myself, after I stop tweaking and toying with them and decide they're probably about right, are borne out a majority of the time by my ongoing behavior. Typically, they'll also match what other people conclude about me, at least on some level.
In this sequence, I hope to share some of the techniques for improving luminosity that I've used. I'm optimistic that at least some of them will be useful to at least some people. However, I may be a walking, talking "results not typical". My prior attempts at improving luminosity in others consist of me asking individually-designed questions in real time, and that's gone fairly well; it remains to be seen if I can distill the basic idea into a format that's generally accessible.
I've divided up the sequence into eight posts, not including this one, which serves as introduction and index. (I'll update the titles in the list below with links as each post goes up.)
I have already written all of the posts in this sequence, although I may make edits to later ones in response to feedback on earlier ones, and it's not impossible that someone will ask me something that seems to indicate I should write an additional post. I will dole them out at a pace that responds to community feedback.