My simple hack for increased alertness and improved cognitive functioning: very bright light
This is a simple idea that I came up with by myself. I was looking for a means to enter high functioning lots-of-beta-waves modes without the use of chemical stimulants. What I found was that very bright light works really, really well.
I got the brightest light bulbs I could get cheaply: 105 watts of halogen incandescents, billed as the equivalent of 130 watts of standard incandescent light. And I got an adaptor that lets me screw four of those into the same socket in the ceiling. The result is about as painful to look at as the sun. It makes my (small) room brighter than a clear summer's day at my latitude, and slightly brighter than a supermarket.
I guess it affects adenosine much like caffeine does because that's what it feels like. Yet unlike caffeine, it can be rapidly turned on and off, literally with the flip of a switch.
For waking up in the morning, I find bright light more effective than a 200mg caffeine tablet, although my caffeine tolerance is moderate for a scientist.
I have not compared the effects of very bright light to modafinil, which requires a prescription in my country.
When under this amount of light, I need to remind myself to go to bed, because I tire about three hours later than with common luminosity. Yet once I switch it off, I can usually sleep within a few minutes, as (I'm guessing) a flood of unblocked adenosine suddenly overwhelms me. I used to have those unproductive late hours where I was too awake to sleep but too tired to be smart. I don't have those anymore.
You've probably heard of light therapy, which uses light to help manage seasonal affective disorder. I don't have that issue, but I definitely notice that the light does improve my mood. (Maybe that's simply because I like to function well.) I'm pretty sure the expensive "light therapy bulbs" you can get are scams, because the color of the light doesn't actually make a difference. The amount of light does.
One nice side benefit is that it keeps me awake while meditating, so I don't need the upright posture that usually does that job. Without the need for an upright posture, I can go beyond two hours straight, which helps enter more profoundly altered states.
After about 10 months of almost daily use of this lighting, I have not noticed any decrease in effectiveness. I do notice I find normally-lit rooms comparatively gloomy, and have an increasingly hard time understanding why people tolerate that. Supermarkets and offices are brightly lit to make the rats move faster - why don't we do that in our homes and, while we're at it, amp it up even further? After all, our brains were made for the African savanna, which during the day is a lot brighter than most apartments today.
Since everyone can try this for a few bucks, I hope some of you will. If you do, please provide feedback on whether it works as well for you as it does for me. Any questions?
One thousand tips do not make a system
So, I've been thinking. We ought to have a system for rationality. What do I mean?
Well, consider a real-time strategy game like Starcraft II. One of the most important things to do in SC2 is macromanagement: making sure that your resources are all being used sensibly. Now, macromanagement could be learned as a big, long list of tips. Like this:
- Try to mine minerals.
- Recruit lots of soldiers.
- Recruit lots of workers.
- It's a good idea for a mineral site to have between 22 and 30 workers.
- Workers are recruited at a command center.
- Soldiers are recruited at a barracks.
- In order to build anything, you need workers.
- In order to build anything, you also need minerals.
- For that matter, in order to recruit more units, you need minerals.
- Workers mine minerals.
- Minerals should be used immediately; if you're storing them, you're wasting them.
Thoughts on designing policies for oneself
Note: This was originally written in relation to this rather scary comment of lukeprog's on value drift. I'm now less certain that operant conditioning is a significant cause of value drift (leaning towards near/far type explanations), but I decided to share my thoughts on the topic of policy design anyway.
Several years ago, I had a reddit problem. I'd check reddit instead of working on important stuff. The more I browsed the site, the shorter my attention span got. The shorter my attention span got, the harder it was for me to find things that were enjoyable to read. Instead of being rejuvenating, I found reddit to be addictive, unsatisfying, and frustrating. Every time I thought to myself that I really should stop, there was always just one more thing to click on.
So I installed LeechBlock and blocked reddit at all hours. That worked really well... for a while.
Occasionally I wanted to dig up something I remembered seeing on reddit. (This wasn't always bad--in some cases I was looking up something related to stuff I was working on.) I tried a few different policies for dealing with this. All of them basically amounted to inconveniencing myself in some way or another whenever I wanted to dig something up.
After a few weeks, I no longer felt the urge to check reddit compulsively. And after a few months, I hardly even remembered what it was like to be an addict.
However, my inconvenience barriers were still present, and they were, well, inconvenient. It really was pretty annoying to make an entry in my notebook describing what I was visiting for and start up a different browser just to check something. I figured I could always turn LeechBlock on again if necessary, so I removed my self-imposed barriers. And slid back into addiction.
After a while, I got sick of being addicted again and decided to do something about it (again). Interestingly, I forgot my earlier thought that I could just turn LeechBlock on again easily. Instead, thinking about LeechBlock made me feel hopeless because it seemed like it ultimately hadn't worked. But I did try it again, and the entire cycle then finished repeating itself: I got un-addicted, I removed LeechBlock, I got re-addicted.
This may seem like a surprising lack of self-awareness. All I can say is: Every second my brain gathers tons of sensory data and discards the vast majority of it. Narratives like the one you're reading right now don't get constructed on the fly automatically. Maybe if I had been following orthonormal's advice of keeping and monitoring a record of life changes attempted, I would've thought to try something different.
Stop learning, start thinking [LINK]
Stop Learning, by Jacob Barnett. This is an 18 minute video, and I think there's a lot to be said for getting the material in the order given.
However, if you'd rather have text, here it is in rot13. Wnpbo Oneargg pbzrf bss nf naablvat. Ybhq, ohzcgvbhf, naq ynhtuf ng zbfg bs uvf bja wbxrf. Ur'f nyfb n zngu cebqvtl, naq unf nhgvfz. Ur gnyxf nobhg ubj ur jnf qvntabfrq nf orvat hanoyr gb yrnea gb gnyx, ohg orpnhfr ur unq gvzr gb guvax, ur fgnegrq rkcybevat zngu. Vg'f abg fb onq gb snvy svatre-cnvagvat. Ur gnyxf nobhg Arjgba naq Rvafgrva nf univat orra oybpxrq bss sebz yrneavat sbe n juvyr (cynthr dhnenagvar naq cngrag bssvpr erfcrpgviryl), fb gung gurl unq gvzr gb guvax. Ur erpbzzraqf gnxvat gvzr gb guvax nobhg jung lbh pner nobhg.
Would anyone happen to remember the alternate history story where the plague doesn't come to England, so Newton has professorial duties and never discovers anything?
Argument by lexical overloading, or, Don't cut your wants with shoulds
I used the word "cut" in the title in the sense of the Prolog operator "cut" (!), which commits to the choices made so far and prevents backtracking to alternatives.
Fiction writers often complain, "I keep procrastinating from writing," and, "Nobody reads what I write." These complaints are usually the result of shoulds stopping them from thinking about their wants.
I've never heard anyone say, "I keep putting off playing baseball," or, "I keep putting off eating ice cream." People who keep putting off writing don't want to write, they want to have written. If you have to try to write more often than you have to try not to write, you've probably told yourself that you should write in order to attain some reward. There's nothing wrong with that, but writers who complain that they keep putting off writing are often writing things with little potential payoff, like fan-fiction. They don't stop and think how to improve the payoff that they want, because they get stuck on the should that they've cached in their heads.
I've repeatedly tried to help writers who complain that not enough people read what they write. I explain that if you want to be read by a lot of people, you need to write something that a lot of people want to read. This seems obvious to me, but I'm always immediately attacked by indignant writers saying that they want to write great fiction, and that one should write only to please oneself in order to write great fiction. Sometimes these are the same people who complained that they want more people to read what they write.
Why does their desire to write great fiction take complete precedence over their desire to have readers? Because they have cached that desire as a should. (They haven't cached a should for their goal to get more readers because that goal arose much later, after they had already learned to write well and discovered, to their horror, that just writing well doesn't bring you readers.) For a moral agent, shoulds trump wants, by definition.
I've explained before that I don't think there is any deep difference between wants and shoulds. The English language doesn't pretend there is; we say "I should do X" both to mean "I have a moral obligation to do X" and "I need to do X to satisfy my goals." The problem is that most people think there is a difference, and that shoulds are more important. They have a want, they figure out what they need to do to satisfy it, they think aloud to themselves that they should do it, and boom, they have lexically convinced themselves that they have a moral obligation to do it.
How To Have Things Correctly
I think people who are not made happier by having things either have the wrong things, or have them incorrectly. Here is how I get the most out of my stuff.
Money doesn't buy happiness. If you want to try throwing money at the problem anyway, you should buy experiences like vacations or services, rather than purchasing objects. If you have to buy objects, they should be absolute and not positional goods; positional goods just put you on a treadmill and you're never going to catch up.
Supposedly.
I think getting value out of spending money, owning objects, and having positional goods are all three of them skills, that people often don't have naturally but can develop. I'm going to focus mostly on the middle skill: how to have things correctly.
Taking "correlation does not imply causation" back from the internet
(An idea I had while responding to this quotes thread)
"Correlation does not imply causation" is bandied around inexpertly and inappropriately all over the internet. Lots of us hate this.
But get this: the phrase, and the most obvious follow-up phrases like "what does imply causation?" are not high-competition search terms. Up until about an hour ago, the domain name correlationdoesnotimplycausation.com was not taken. I have just bought it.
There is a correlation-does-not-imply-causation shaped space on the internet, and it's ours for the taking. I would like to fill this space with a small collection of relevant educational resources explaining what is meant by the term, why it's important, why it's often used inappropriately, and the circumstances under which one may legitimately infer causation.
At the moment the Wikipedia page is trying to do this, but it's not really optimised for the task. It also doesn't carry the undercurrent of "no, seriously, lots of smart people get this wrong; let's make sure you're not one of them", and I think it should.
The purpose of this post is two-fold:
Firstly, it lets me say "hey dudes, I've just had this idea. Does anyone have any suggestions (pragmatic/technical, content-related, pointing out why it's a terrible idea, etc.), or alternatively, would anyone like to help?"
Secondly, it raises the question of what other corners of the internet are ripe for the planting of sanity waterline-raising resources. Are there any other similar concepts that people commonly get wrong, but don't have much of a guiding explanatory web presence to them? Could we put together a simple web platform for carrying out this task in lots of different places? The LW readership seems ideally placed to collectively do this sort of work.
The Useful Idea of Truth
(This is the first post of a new Sequence, Highly Advanced Epistemology 101 for Beginners, setting up the Sequence Open Problems in Friendly AI. For experienced readers, this first post may seem somewhat elementary; but it serves as a basis for what follows. And though it may be conventional in standard philosophy, the world at large does not know it, and it is useful to know a compact explanation. Kudos to Alex Altair for helping in the production and editing of this post and Sequence!)
I remember this paper I wrote on existentialism. My teacher gave it back with an F. She’d underlined true and truth wherever it appeared in the essay, probably about twenty times, with a question mark beside each. She wanted to know what I meant by truth.
-- Danielle Egan
I understand what it means for a hypothesis to be elegant, or falsifiable, or compatible with the evidence. It sounds to me like calling a belief ‘true’ or ‘real’ or ‘actual’ is merely the difference between saying you believe something, and saying you really really believe something.
-- Dale Carrico
What then is truth? A movable host of metaphors, metonymies, and anthropomorphisms: in short, a sum of human relations which have been poetically and rhetorically intensified, transferred, and embellished, and which, after long usage, seem to a people to be fixed, canonical, and binding.
-- Friedrich Nietzsche
The Sally-Anne False-Belief task is an experiment used to tell whether a child understands the difference between belief and reality. It goes as follows:
- The child sees Sally hide a marble inside a covered basket, as Anne looks on.
- Sally leaves the room, and Anne takes the marble out of the basket and hides it inside a lidded box.
- Anne leaves the room, and Sally returns.
- The experimenter asks the child where Sally will look for her marble.
Children under the age of four say that Sally will look for her marble inside the box. Children over the age of four say that Sally will look for her marble inside the basket.
From First Principles
Related: Truly a Part of You, What Data Generated That Thought
Some Case Studies
The other day my friend was learning to solder and he asked an experienced hacker for advice. The hacker told him that because heat rises, you should apply the soldering iron underneath the work to maximize heat transfer. Seems reasonable, logically inescapable, even. When I heard of this, I thought through to why heat rises and when, and saw that it was not so. I don't remember the conversation, but the punchline is that hot things become less dense, and less dense things float, and that only works in a fluid; a solder joint isn't immersed in one, so "heat rises" doesn't apply. In the case of soldering, the primary mode of heat transfer is conduction through the liquid metal, so to maximize heat transfer, get the tip wet before you stick it in, and don't worry about position.
This is a case of surface reasoning failing because the heuristic (heat rises) was not truly a part of my friend or the random hacker. I want to focus on the actual 5-second skill of going back To First Principles that catches those failures.
Here's another; watch for the 5-second cues and responses: A few years ago, I was building a robot submarine for a school project. We were in the initial concept design phase, wondering what it should look like. My friend Peter said, "It should be wide, because stability is important". I noticed the heuristic "low and wide is stable" and thought to myself "Where does that come from? When is it valid?". In the case of catamarans or sports cars, wide is stable because it increases the lever arm between restoring force (gravity) and support point (wheel or hull), and low makes the tipping point harder to reach. Under water, there is no tipping point, and things are better modeled as hanging from their center of volume. In other words, underwater, the criterion for stability is vertical separation instead of horizontal separation. (More precisely, you can model the submarine as a damped pendulum, and notice that you want to tune the parameters for approximately critical damping.) We went back to First Principles and figured out what actually mattered, then went on to build an awesome robot.
Let's review what happened. We noticed a heuristic or bit of qualitative knowledge (wide is stable), and asked "Why? When? How much?", which led us to the quantitative answer, which told us much more precisely exactly what matters (critical damping) and what does not matter (width, maximizing restoring force, etc).
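The damped-pendulum step can be made concrete. Here's a minimal sketch in Python; the hull numbers (moment of inertia, mass, center-of-buoyancy offset) are hypothetical stand-ins, not values from the actual project:

```python
import math

def critical_damping(inertia, restoring):
    """Damping coefficient at which the damped pendulum
    I*theta'' + c*theta' + k*theta = 0 just stops oscillating:
    c_crit = 2*sqrt(I*k)."""
    return 2 * math.sqrt(inertia * restoring)

# Hypothetical numbers for a small submarine hull:
I = 4.0                       # kg*m^2, moment of inertia about the roll axis
m, g, d = 30.0, 9.81, 0.05    # mass, gravity, vertical CoB-CoM separation
k = m * g * d                 # restoring "spring" constant in the pendulum model
c_crit = critical_damping(I, k)
```

Notice that width never appears: in this model only the inertia and the vertical restoring term set the damping target, which is exactly what going back to First Principles revealed.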
A more Rationality-related example: I recently thought about Courage, and the fact that most people are too afraid of risk (beyond just utility concavity), and as a heuristic we should be failing more. Around the same time, I'd been hounding Michael Vassar (at minicamp) for advice. One piece that stuck with me was "use decision theory". Ok, Courage is about decisions; let's go.
"You should be failing more", they say. You notice the heuristic, and immediately ask yourself "Why? How much more? Prove it from first principles!" "Ok", your forked copy says. "We want to take all actions with positive expected utility. By the law of large numbers, in (non-black-swan) games we play a lot of, observed utility should approximate expected utility, which means you should be observing just as much fail as win on the edge of what you're willing to do. Courage is being well calibrated on risk; If your craziest plans are systematically succeeding, you are not well calibrated and you need to take more risks." That's approximately quantitative, and you can pull out the equations to verify if you like.
Notice all the subtle qualifications that you may not have guessed from the initial advice; (non-pascalian/lln applies, you can observe utility, your craziest plans, just as much fail as win (not just as many, not more)). (example application: one of the best matches for those conditions is social interaction) Those of you who actually busted out the equations and saw the math of it, notice how much more you understand than I am able to communicate with just words.
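If you'd rather simulate than bust out the equations, the law-of-large-numbers step is easy to check numerically. A toy sketch (the win probability and payoffs are made up for illustration): a marginal plan with a small positive edge shows an observed average near its expected utility, alongside plenty of individual failures.

```python
import random

def observed_utility(p_win, win, loss, trials, seed=0):
    """Average realized utility of repeating one risky plan many times."""
    rng = random.Random(seed)
    total = sum(win if rng.random() < p_win else loss for _ in range(trials))
    return total / trials

# A marginal plan: 55% chance of +1, 45% chance of -1 -> expected utility +0.10.
avg = observed_utility(0.55, 1.0, -1.0, trials=100_000)
# avg lands near +0.10, even though roughly 45% of individual attempts failed:
# if you never observe failures on your marginal plans, you aren't taking them.
```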
Ok, now I've named three, so we can play the generalization game without angering the gods.
On the Five-Second Level
Trigger: Notice an attempt to use some bit of knowledge or a heuristic. Something qualitative, something with unclear domain, something that affects what you are doing, something where you can't see the truth.
Action: Ask yourself: What problem does it try to solve (what's its interface, type signature, domain, etc)? What's the specific mechanism of its truth when it is true? In what situations does that hold? Is this one of those? If not, can we derive what the correct result would be in this case? Basically "prove it". Sometimes it will take 2 seconds, sometimes a day or two; if it looks like you can't immediately see it, come up with whatever quick approximation you can and update towards "I don't know what's going on here". Come back later for practice.
It doesn't have to be a formal proof that would convince even the most skeptical mathematician or outsmart even the most powerful demon, but be sure to see the truth.
Without this skill of going back to First Principles, I think you would not fully get the point of truly a part of you. Why is being able to regenerate your knowledge useful? What are the hidden qualifications on that? How does it work? (See what I'm doing here?) Once you see many examples of the kind of expanded and formidably precise knowledge you get from having performed a derivation, and the vague and confusing state of having only a theorem, you will notice the difference. What the difference is, in terms of a derivation From First Principles, is left as an exercise for the reader (i.e. I don't know). Even without that, though, having seen the difference is a huge step up.
From having seen the difference between derived and taught knowledge, I notice that one of the caveats of making knowledge Truly a Part of You is that just being able to get it From First Principles is not enough; Actually having done the proof tells you a lot more than simply what the correct theorem is. Do not take my word for it; go do some proofs; see the difference.
So far I've just described something that has been unusually valuable for me. Can it be taught? Will others gain as much? I don't know; I got this one more or less by intellectual lottery. It can probably be tested, though:
Testing the "Prove It" Habit
In school, we had this awesome teacher for thermodynamics and fluid dynamics. He was usually voted best in faculty. His teaching and testing style fit perfectly with my "learn first principles and derive on the fly" approach that I've just outlined above, so I did very well in his classes.
In the lectures and homework, we'd learn all the equations, where they came from (with derivations), how they are used, etc. He'd get us to practice and be good at straightforward application of them. Some of the questions required a bit of creativity.
On the exams, the questions were substantially easier, but they all required creativity and really understanding the first principles. "Curve Balls", we called them. Otherwise smart people found his tests very hard; I got all my marks from them. It's fair to say I did well because I had a very efficient and practiced From First Principles groove in my mind. (This was fair, because actually studying for the test was a reasonable substitute.)
So basically, I think a good discriminator would be to throw people difficult problems that can be solved with standard procedure and surface heuristics, and then some easier problems that require creative application of first principles, or don't quite work with standard heuristics (but seem to).
If your subjects have consistent scores between the two types, they are doing it From First Principles. If they get the standard problems right, but not the curve balls, they aren't.
Examples:
Straight: Bayesian cancer test. Curve: Here's the base rate and positive rate, how good is the test (likelihood ratio)?
Straight: Sunk cost on some bad investment. Curve: Something where switching costs, opportunity for experience make staying the correct thing.
Straight: Monty Hall. Curve: Ignorant Monty Hall.
Etc.
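For the "straight" version of the cancer-test example, here's a minimal sketch in odds form (the numbers are the classic illustrative ones, not from any real test): the likelihood ratio is sensitivity over false-positive rate, and posterior odds are prior odds times that ratio.

```python
def posterior_from_odds(base_rate, sensitivity, false_pos_rate):
    """Bayes' rule in odds form: posterior odds = prior odds * likelihood ratio."""
    lr = sensitivity / false_pos_rate       # how good the test is
    prior_odds = base_rate / (1 - base_rate)
    post_odds = prior_odds * lr
    return post_odds / (1 + post_odds)      # convert odds back to a probability

# Classic illustrative numbers: 1% base rate, 80% sensitivity, 9.6% false positives.
p = posterior_from_odds(0.01, 0.80, 0.096)
# p is about 0.078 -- a positive test still leaves the disease unlikely.
```

Note also that the base rate and overall positive rate alone leave the likelihood ratio underdetermined; you need sensitivity and the false-positive rate separately, which is the kind of thing surface pattern-matching misses.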
Exercises
Again, maybe this can't be taught, but here's some practice ideas just in case it can. I got substantial value from figuring these out From First Principles. Some may be correct, others incorrect, or correct in a limited range. The point is to use them to point you to a problem to solve; once you know the actual problem, ignore the heuristic and just go for truth:
Science says good theories make bold predictions.
Deriving From First Principles is a good habit.
Boats go where you point them, so just sail with the bow pointed to the island.
People who do bad things should feel guilty.
I don't have to feel responsible for people getting tortured in Syria.
If it's broken, fix it.
(post more in comments)
What are the best books on evolutionary psychology?
I'd like to distinguish three classes of reasons to read up on a discipline:
1) You are curious and want to begin with something 100-500 pages. I'd go for Pinker's How the Mind Works (1997).
2) You want to survey the whole field by reading something 500-1500 pages. I definitely recommend David Buss's The Handbook of Evolutionary Psychology (2004), which beats the usual SI recommendations on the field.
3) You want to know the state of the art of the field, so you really need something very recent, say from the last 2 or 3 years at most. This is me. Please help me if you know what I should read; 300-1500 pages seems a good range.
Just for comparison: in Cognitive Neuroscience, 3 would be The Cognitive Neurosciences IV (MIT Press, 2009).
Post your opinions on what 1, 2, and 3 should be for Evolutionary Psychology.
Oh, and if you like Evolutionary Cognitive Neuroscience (a field so new I don't know any of the 3) please post yours too...