One example was the self-help blog of Phillip Eby (pjeby), where each new post seemed to bring new amazing insights, and after a while you became jaded.
Er, you do realize I stopped most of my blogging for more or less that reason, right?
Around that time, I started pushing for a (partly LW-inspired) greater focus on empirical improvement in my work, because there was just too much randomness in how long the effects of my then-current techniques would last. Some things were permanent or nearly-so, and others might only last a few days or weeks... and I had no reliable way to predict what the outcome of a particular instance of application would be.
It was a tough shift, because at the time I also had no way to know for sure that anything more reliable or predictable in fact existed, but unlike the more "faith-based" self-help folks, I couldn't just keep ignoring the anomalies in my results.
The good news is I got over that hump and developed more reliable methods. The bad news is that it didn't involve brilliant simple epiphanies, but lots and lots of little hard-won insights and the correlation of tons of practical knowledge.
(And one of those bits of practical knowledg...
Phillip, I apologize for using you as an example, but will still keep it in the post because it's such a nice example :-) It's very good to hear that you came to similar conclusions eventually, I didn't know that!
it's such a nice example :-)
Perhaps it would become an even better example, then, by adding a link to the relevant post, e.g. "and after a while you became jaded, until even he realized it was a loop". ;-)
Think of it as superstimulus to the cool-idea sensor.
Thought exercise: could the LW/CFAR-favoured model of epistemic rationality be ineffective, even though it sounds really good and makes sense? What would the world look like in this case? What would you expect if LW rationality didn't actually work, except to convince its fans that it did work? (For a value of "work" that is defined before examining the results.)
could the LW/CFAR-favoured model of epistemic rationality be ineffective, even though it sounds really good and makes sense?
Effective at what? I agree with Yvain that:
I think it may help me succeed in life a little, but I think the correlation between x-rationality and success is probably closer to 0.1 than to 1. Maybe [higher] in some businesses like finance, but people in finance tend to know this and use specially developed x-rationalist techniques on the job already without making it a lifestyle commitment.
Hard work, intelligence, social skill, attractiveness, risk-taking, need for sleep, height, and enormous amounts of noise go into life success as measured by something like income or occupational status. So unless there were a ludicrously large effect size of hanging around Less Wrong, differences in life success between readers and nonreaders would be overwhelmingly driven by selection effects. Now, in fact those selection effects put the LW population well above average (lots of college students, academics, software engineers, etc) but don't speak much to positive effects of their reading habits.
To get a good picture of that you would need a randomized experiment,...
Major elements to consider:
Mostly standard arguments, often with nonstandard examples and lively presentation, for a related cluster of philosophical views: physicalism, the appearance of free will as outgrowth of cognitive algorithm, his brand of metaethics, the Everett interpretation of quantum mechanics, the irrelevance of verbal disputes, etc.
A selective review of the psychology heuristics and biases literature, with entertaining examples and descriptions
A bunch of suggested heuristics, based on personal experience and thought, for debiasing, e.g. leaving a line of retreat to reduce resistance
Some thoughtful exposition of applications of intro probability and Bayes' theorem, e.g. conservation of expected evidence
Interesting reframings and insights into a number of philosophical problems using the Solomonoff Induction framework, and the "how could this intuition emerge from an algorithm?" approach
Debate about AI with Robin, a science fiction story, a bunch of meta posts, and assorted minor elements
..."So, there's a million words of these Sequences that you think I should read. What do I get out of reading them?" then what's the answer to that?
That's a difficult question, but a potentially valuable one to have answered. Here's a long list of thoughts I came up with, written not to Michael Vassar but to a regular supporter of SI:
Donations are maximally fungible and require no overhead or supervision. From the outside, you may not see how a $5000 donation to SI changes the world, but I sure as hell do. An extra $5000 means I can print 600 copies of a paperback of the first 17 chapters of HPMoR and ship one copy each to the top 600 most promising young math students (on observable indicators, like USAMO score) in the U.S. (after making contact with them whenever possible). An extra $5000 means I can produce nicely-formatted Kindle and PDF versions of The Sequences, 2006-2009 and Facing the Singularity. An extra $5000 means I can run a nationwide essay contest for the best high school essay on the importance of AI safety (to bring the topic to the minds of AI-interested high schoolers, and to find some good writers who care about AI safety). An extra $5000 means I can afford a bit more than a month of work from a new staff researcher (including salary, health coverage, and taxes).
Remember that a good volunteer is ha
I'm skeptical of the 'enormous amounts of noise' claim
Trivially, look at the wealth of Bill Gates vs Steve Jobs. Most of Peter Thiel's wealth relative to other past tech CEOs comes from one great hit at Facebook. Even entrepreneurs who have succeeded at past VC-backed startups are only moderately more likely to succeed (acquisition, IPO, large size) than new ones. Financiers vary hugely in lifetime career success based on market conditions on Wall Street when they finished school, on which product groups have ups and downs when, and which risky bets happen to blow up before or after they move on.
Within a given size of social circle and selective filter, happening to have the right friends with the right contacts (Jobs and Wozniak) at the right time is critical. Who else produces a similar startup at the same time and how good are they? Do key patents and lawsuits get decided in one's favor? What new scientific and technological innovations enhance or destroy the position of one's company?
At a smaller scale: when do you fall in love and get married? What geographical constraints does that place on you? Do you get hit by a car or infectious disease or cancer, and when? Do you get through noisy hiring processes in tight labor markets, e.g. tenure in academia, getting a first job on Wall Street? Do you click with the person deciding on your medical residency of choice?
We could quibble, but I'd leave it at that.
I sympathize with the statement, which you may or may not have implied, that that world would look a lot like our world. But maybe we should make the question more concrete. What benefits do people honestly expect from LW rationality? Are they actually getting those benefits?
I'm here because and while it's enjoyable - LW is marked as part of the Internet-as-TV time budget. That said, I feel more rational, I think because I'm paying attention to my thoughts. But e.g. I'm not actually richer and don't have a string of interesting new achievements under my belt. The outside view shows nothing.
If your answer is "it would look like the world is now" - then what would the world look like if it was effective and did work, for whatever value of "work"? (I'm thinking a value something like "what one would expect trying a new thing like this and wanting to get tangible self-improvement value out of it", though I'm open to other possible values I haven't thought of.)
Hard to say. My life would look completely different. I was honestly, for the most part, much happier before getting involved, but I'm certainly more effective now, to the point of not really occupying the same reference class in any useful sense.
When I was young I was known as "the shy one," and I was awkward around girls. So I started reading instructional books on dating. A few chapters in, each book said "The most important thing is that you put down this book right now and go practice the thing I just told you to do." But I just kept reading, because I was learning so much, and having all those epiphanies felt like getting stronger.
After two years of epiphany addiction and no sex, I finally took some liquid courage and went out and actually talked to women. And then I started to become stronger.
If CFAR and the JDM community can invent an applied rationality that reliably makes people more powerful, it won't be because they've written lots of epiphany-producing writing. It will be because they've discovered teachable rationality skills that can be practiced day after day.
people get a rush of power from neat-sounding realizations, and mistake that feeling for actual power
How do you tell the difference?
Yeah. It's realising that epiphany is an aesthetic experience, and requires results before it's the life-change it labels itself as. Epiphanies can in fact be just another way to fool yourself.
For what it's worth, I agree with the spirit of your comment, but am also a little tired of seeing endless variations of it. LW needs better contrarians, but being a good contrarian takes effort. Maybe you could write a discussion post that lays out the strongest form of your arguments? I volunteer to read and comment on drafts, if you wish.
Related:
"Understanding is the booby prize."
Said during a personal development course I've done.
And an outside view of what not getting stuck in epiphanies looks like:
In principle, it’s simple. You’re looking for people who are
Smart, and
Get things done.
A related note is that the neurophysiological effect of the epiphany wears off really quickly. I haven't studied which neurotransmitters exactly produce the original good feeling, but I remember reading (apologies for not having a source here) that the effect is pretty strong the first time, but fails to produce pretty much any neurological effect after just a few repeats. By repeats, I mean thinking about the concept or idea and perhaps writing about it.
In other words, say you get a strong epiphany and subsequent strong feeling that some technique, f...
Nothing works if people don't actually change their behavior, so the place to start, IMHO, is looking into who actually changes their behavior after encountering new information. Figuring out what causes that would take you very far. My vague impression is that it's closely related to distrust of authority. If one trusts authority, any change takes you farther away from a trusted safe state and thus carries a large hidden cost.
My vague impression is that it's closely related to distrust of authority. If one trusts authority, any change takes you farther away from a trusted safe state and thus carries a large hidden cost.
On the other hand, unless you have the enormously rare constellation of talent and circumstances to give you a realistic chance to rise to the very top, too little trust in authority leads to a state of frightened paralysis or downright self-destruction. What you need for success is the instinct to recognize when you should obey the powers-that-be with your heart and your mind, and when to ignore, defy, or subvert them.
The ability to conform to the official norms and trust the official dogma with full honesty when it's optimal to do so is just as important as the ability to ignore, defy, and subvert them in other cases. Otherwise your distrust of authority will lead you either to cower in fear of it or to provoke its wrath and be destroyed. A well-calibrated unconscious strategic instinct to switch between conformity and non-conformity is, in my opinion, one of the main things that sets apart greatly successful people from others.
It seems to me that the decision theory generally favors acting as if one has rare talent and circumstances, as opposed to the alternative, more likely hypothesis, which is probably the contrarian hypothesis of being a simulation in any event. Attempts to justify common sense, treated honestly, generally end up as justifications of novel contrarian hypotheses instead.
Also, one who tries to conform to official norms rather than to ubiquitous surrounding behavioral patterns will rapidly find oneself under attack, nominally for violating official norms. I think that the way to go is usually to conform but also to recognize that the standards to which one is conforming do not correspond to explicit beliefs at all, or even to implicit decision theories.
Treat social reality as a liquid in which one swims, not an intellectual authority. But don't attack a liquid without some very heavy ammo, and generally don't attack it even if one has such ammo; it's not an enemy, an agent, or a person.
It's demonstrative that my first reaction to reading this was a feeling of epiphany, and that if I could only internalize this lesson all my problems would be solved. The problem is very pervasive. Becoming aware of it initially actually just made things worse, so I'm commenting to point it out to make sure that no one else risks falling into that trap.
There's also the possibility of infinite regress here, funnily enough. Don't do that either.
I'm amused by the creativity of my cognitive glitches, sometimes.
An alternative hypothesis would be adaptation:
According to adaptation theory, individuals react to events, but quickly adapt back to baseline levels of subjective well-being. To test this idea, the authors use data from a 15-year longitudinal study of over 30 000 individuals to examine the effects of marital transitions on life satisfaction. On average, individuals reacted to events and then adapted back towards baseline levels. However, there were substantial individual differences in this tendency.
Perhaps some of the epiphanies really are transformat...
I don't think we have got the right explanation for our epiphany addiction here. We are "addicted" to epiphanies because that is what our community rewards its members for. Even if the sport is ostensibly about optimizing one's life, the actual sport is to come up with clever insights into how to optimize one's life. The incentive structure is all wrong. The problem ultimately comes down to us being rewarded more status for coming up with and understanding epiphanies than for those epiphanies having a positive impact on our lives.
I independently invented a similar concept, "epiphany junkies", but didn't get around to posting it yet. A couple of points that would've been in that post:
If you use Gmail, you can enable the "undo send" feature in settings. I use it a lot, with the longest possible timeout (30 seconds), and think the timeout should be even longer, like 5 minutes.
Yes, you have a point that success in other fields would be good sign. But your example is a careless one.
You know, Einstein also invented a fridge
According to this io9 article, he did that in his late 40s to early 50s, after his great physics work was over. He was born in 1879 and worked on the fridge with Szilard from 1926 or after. It made the two physicists a bit of money but was not very practically useful. It certainly wasn't something you could have used to predict his physics success in advance, or that he did on the side while occupied with f...
It seems to me that most of your comments on LW are about the same thing. This predictability makes them boring.
It's like -- oh, here is some discussion about a possible problem; I bet PM will soon come and write a reply saying "yes, your worst fears are all true, and it is actually much worse".
At least for me, the predictable pattern suggests that I should ignore such comments. There is no point in paying attention individually to comments that were generated by a pattern. I perceive them all as one comment, repeated on LW endlessly.
Epiphanies are not necessarily useless or wrong, but they cannot effect anything unless they are part of a system that is already in motion. You can no more apply an epiphany to an unfocused, inactive human than you can apply it to a rock.
Great point! One of the problems here is that people think that just knowing about something is going to give them the power, but this is not the case. Rationality is a skillset like bicycle riding or playing chess, and the only way to get good at it is by practicing a lot. You can read lots of books on chess and get great insights, but when it comes down to actually playing at the board, what matters is what you have internalized through practice.
I've had similar experiences to what the quote describes. I'm in a bit of a rush and have not actually read the link, but here are my thoughts:
Is the epiphany purely a way of thinking about things, or does it lead to some material change? In other words, is it actionable? For example, if I come up with a new way to frame work for extra motivation, I don't put much stock in it, because I know that my mental state is highly variable and it sometimes just won't work. I write it down, and think about it, and see how long it lasts, but I don't expect it to have ...
LW is itself contrarian, for the nth time. All it needs is to look outside itself.
Ignoring definitions of words for the moment, it seems to me that you consider "contrarian" comments worth writing, otherwise you wouldn't write them. All I'm saying is if they're worth writing, they're worth writing well.
Yes, I am very familiar with this kind of experience. I think the point about singular epiphanies of this sort is that they are always too brittle and inflexible to carry you on in any meaningful, long-term sort of way. Two further comments:
The realization of "epiphany addiction" is itself a sort of epiphany, in the same sense that this discussion is talking about. I'm not sure what the punchline of -that- should be, except maybe to say, there doesn't seem to ever be any such "magic bullets" in terms of personal understanding ... . Ye
Too much of the self-help and popular psychology literature is written like stories, which, while making it more readable and more likely to be read, tends to encourage readers to just keep on reading through it all. If you are reading for change, you need to read it like a textbook: for the information rather than the entertainment.
This is why most of the successful self-help gurus pack their books full of stories and insights, but leave the actual training for in-person workshops, or at least for higher-bandwidth or interactive media. Most of the challenges people will have in applying almost anything can't be listed in a book, without creating an unreadable (or at least unsellable) book.
While this is also the most financially beneficial way to do it, I have personally observed over and over that there are certain classes of mental mistake that you simply CANNOT reliably correct in non-interactive media, because the person making the mistake simply can't tell they're making the mistake unless you point out an example of it in their own behavior or thinking. Otherwise, the connection between the pattern of mistake and the instance of it remains opaque to them. People are much b...
I'm a sucker for rsdmotivation. I feel more confident, have more self-esteem, am happier, and exercise better when listening to it. But I can't deconstruct what's actually meaningful in it. It's mostly aphorisms and very basic insights about the world, over cinematic music. The cinematic music alone doesn't do it for me, and the aphorisms alone don't do it for me. Yet I feel that 'epiphany' every time. It sorta annoys me, but it works for now.
Comparably, it takes listening to Rise and Grind to get me up sometimes. It works very very well, but it's unclear to me how or ...
LW doesn't seem to have a discussion of the article Epiphany Addiction, by Chris at succeedsocially. First paragraph:
I like that article because it describes a dangerous failure mode of smart people. One example was the self-help blog of Phillip Eby (pjeby), where each new post seemed to bring new amazing insights, and after a while you became jaded. An even better, though controversial, example could be Eliezer's Sequences, if you view them as a series of epiphanies about AI research that didn't lead to much tangible progress. (Please don't make that statement the sole focus of discussion!)
The underlying problem seems to be that people get a rush of power from neat-sounding realizations, and mistake that feeling for actual power. I don't know any good remedy for that, but being aware of the problem could help.