Thoughts on designing policies for oneself

74 John_Maxwell_IV 28 November 2012 01:27AM

Note: This was originally written in relation to this rather scary comment of lukeprog's on value drift.  I'm now less certain that operant conditioning is a significant cause of value drift (leaning towards near/far type explanations), but I decided to share my thoughts on the topic of policy design anyway.


Several years ago, I had a reddit problem.  I'd check reddit instead of working on important stuff.  The more I browsed the site, the shorter my attention span got.  The shorter my attention span got, the harder it was for me to find things that were enjoyable to read.  Instead of being rejuvenating, I found reddit to be addictive, unsatisfying, and frustrating.  Every time I thought to myself that I really should stop, there was always just one more thing to click on.

So I installed LeechBlock and blocked reddit at all hours.  That worked really well... for a while.

Occasionally I wanted to dig up something I remembered seeing on reddit.  (This wasn't always bad--in some cases I was looking up something related to stuff I was working on.)  I tried a few different policies for dealing with this.  All of them basically amounted to inconveniencing myself in some way or another whenever I wanted to dig something up.

After a few weeks, I no longer felt the urge to check reddit compulsively.  And after a few months, I hardly even remembered what it was like to be an addict.

However, my inconvenience barriers were still present, and they were, well, inconvenient.  It really was pretty annoying to make an entry in my notebook describing what I was visiting for and start up a different browser just to check something.  I figured I could always turn LeechBlock on again if necessary, so I removed my self-imposed barriers.  And slid back into addiction.

After a while, I got sick of being addicted again and decided to do something about it (again).  Interestingly, I forgot my earlier thought that I could just turn LeechBlock on again easily.  Instead, thinking about LeechBlock made me feel hopeless because it seemed like it ultimately hadn't worked.  But I did try it again, and the entire cycle then finished repeating itself: I got un-addicted, I removed LeechBlock, I got re-addicted.

This may seem like a surprising lack of self-awareness.  All I can say is: Every second my brain gathers tons of sensory data and discards the vast majority of it.  Narratives like the one you're reading right now don't get constructed on the fly automatically.  Maybe if I had been following orthonormal's advice of keeping and monitoring a record of life changes attempted, I would've thought to try something different.


LW Women- Minimizing the Inferential Distance

58 [deleted] 25 November 2012 11:33PM

Standard Intro

The following section will be at the top of all posts in the LW Women series.

About two months ago, I put out a call for anonymous submissions by the women on LW, with the idea that I would compile them into some kind of post.  There is a LOT of material, so I am breaking them down into more manageable-sized themed posts. 

Seven women submitted, totaling about 18 pages. 

Crocker's Warning- Submitters were told not to hold back for politeness. You are allowed to disagree, but these are candid comments; if you consider candidness impolite, I suggest you not read this post.

To the submitters- If you would like to respond anonymously to a comment (for example if there is a comment questioning something in your post, and you want to clarify), you can PM your message and I will post it for you. If this happens a lot, I might create a LW_Women sockpuppet account for the submitters to share.

Standard Disclaimer- Women have many different viewpoints, and just because I am acting as an intermediary to allow for anonymous communication does NOT mean that I agree with everything that will be posted in this series. (It would be rather impossible to, since there are some posts arguing opposite sides!)

Please do NOT break anonymity, because it lowers the anonymity of the rest of the submitters.


Checklist of Rationality Habits

117 AnnaSalamon 07 November 2012 09:19PM
As you may know, the Center for Applied Rationality has run several workshops, each teaching content similar to that in the core sequences, but made more practical and broken down into fine-grained habits.

Below is the checklist of rationality habits we have been using in the minicamps' opening session.  It was co-written by Eliezer, myself, and a number of others at CFAR.  As mentioned below, the goal is not to assess how "rational" you are, but, rather, to develop a personal shopping list of habits to consider developing.  We generated it by asking ourselves, not what rationality content it's useful to understand, but what rationality-related actions (or thinking habits) it's useful to actually do.

I hope you find it useful; I certainly have.  Comments and suggestions are most welcome; it remains a work in progress. (It's also available as a pdf.) 

From First Principles

48 [deleted] 27 September 2012 07:04PM

Related: Truly a Part of You, What Data Generated That Thought

Some Case Studies

The other day my friend was learning to solder and he asked an experienced hacker for advice. The hacker told him that because heat rises, you should apply the soldering iron underneath the work to maximize heat transfer. Seems reasonable, logically inescapable, even. When I heard of this, I thought through to why heat rises and when, and saw that it was not so. I don't remember the conversation, but the punchline is that hot things become less dense, and less dense things float, but only in a surrounding fluid, and a solder joint isn't immersed in one. In the case of soldering, the primary mode of heat transfer is conduction through the liquid metal, so to maximize heat transfer, wet the tip with solder before you apply it, and don't worry about position.

This is a case of surface reasoning failing because the heuristic (heat rises) was not truly a part of my friend or the random hacker. I want to focus on the actual 5-second skill of going back to First Principles that catches those failures.

Here's another; watch for the 5-second cues and responses: A few years ago, I was building a robot submarine for a school project. We were in the initial concept design phase, wondering what it should look like. My friend Peter said, "It should be wide, because stability is important". I noticed the heuristic "low and wide is stable" and thought to myself "Where does that come from? When is it valid?". In the case of catamarans or sports cars, wide is stable because it increases the lever arm between restoring force (gravity) and support point (wheel or hull), and low makes the tipping point harder to reach. Under water, there is no tipping point, and things are better modeled as hanging from their center of volume. In other words, underwater, the stability criterion is vertical separation, instead of horizontal separation. (More precisely, you can model the submarine as a damped pendulum, and notice that you want to tune the parameters for approximately critical damping). We went back to First Principles and figured out what actually mattered, then went on to build an awesome robot.
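To make the pendulum analogy concrete, here is a sketch in my own notation (the model and symbols are my illustration, not from the original project): treat the submerged hull as a damped pendulum in roll, with the restoring moment set by the vertical separation between center of buoyancy and center of mass.

```latex
% Small-angle roll of a roughly neutrally buoyant body (illustrative):
%   I - moment of inertia about the roll axis
%   c - linear drag coefficient
%   d - vertical separation of center of buoyancy and center of mass
\[ I\,\ddot{\theta} + c\,\dot{\theta} + m g d\,\theta = 0 \]
% Critical damping, the tuning target mentioned in the text:
\[ c_{\text{crit}} = 2\sqrt{I\, m g d} \]
```

Note that width enters only through $I$ and $c$, while the restoring term depends on the vertical separation $d$, which is exactly the sense in which "wide is stable" fails underwater.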

Let's review what happened. We noticed a heuristic or bit of qualitative knowledge (wide is stable), and asked "Why? When? How much?", which led us to the quantitative answer, which told us much more precisely exactly what matters (critical damping) and what does not matter (width, maximizing restoring force, etc).

A more Rationality-related example: I recently thought about Courage, and the fact that most people are too afraid of risk (beyond just utility concavity), and as a heuristic we should be failing more. Around the same time, I'd been hounding Michael Vassar (at minicamp) for advice. One piece that stuck with me was "use decision theory". Ok, Courage is about decisions; let's go.

"You should be failing more", they say. You notice the heuristic, and immediately ask yourself "Why? How much more? Prove it from first principles!" "Ok", your forked copy says. "We want to take all actions with positive expected utility. By the law of large numbers, in (non-black-swan) games we play a lot of, observed utility should approximate expected utility, which means you should be observing just as much fail as win on the edge of what you're willing to do. Courage is being well calibrated on risk; If your craziest plans are systematically succeeding, you are not well calibrated and you need to take more risks." That's approximately quantitative, and you can pull out the equations to verify if you like.

Notice all the subtle qualifications that you may not have guessed from the initial advice: non-Pascalian stakes, the law of large numbers applies, you can observe utility, your craziest plans, just as much fail as win (not just as many, and not more). (Example application: one of the best matches for those conditions is social interaction.) Those of you who actually busted out the equations and saw the math of it, notice how much more you understand than I am able to communicate with just words.
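The calibration claim above lends itself to a quick simulation (a sketch of my own, not from the post; the payoff structure and the "edge" threshold are invented for illustration): an agent that takes every positive-expected-utility gamble should observe roughly as many losses as wins among its marginal gambles.

```python
import random

random.seed(0)

def simulate(n=100_000, edge=0.05):
    """Take every positive-EV gamble; tally wins/losses among 'edge' gambles."""
    edge_wins = edge_losses = 0
    for _ in range(n):
        p = random.random()   # chance this gamble pays +1 (else it pays -1)
        ev = 2 * p - 1        # expected utility of taking the gamble
        if ev <= 0:
            continue          # a calibrated agent declines negative-EV gambles
        if ev < edge:         # "the edge of what you're willing to do"
            if random.random() < p:
                edge_wins += 1
            else:
                edge_losses += 1
    return edge_wins, edge_losses

wins, losses = simulate()
# Among edge gambles, wins and losses come out close to even; systematic
# success at the margin would mean the agent is not taking enough risk.
```

If your own "edge" gambles succeed far more often than they fail, the simulation suggests the same conclusion as the prose: your threshold is set too conservatively.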

Ok, now I've named three, so we can play the generalization game without angering the gods.

On the Five-Second Level

Trigger: Notice an attempt to use some bit of knowledge or a heuristic. Something qualitative, something with unclear domain, something that affects what you are doing, something where you can't see the truth.

Action: Ask yourself: What problem does it try to solve (what's its interface, type signature, domain, etc)? What's the specific mechanism of its truth when it is true? In what situations does that hold? Is this one of those? If not, can we derive what the correct result would be in this case? Basically "prove it". Sometimes it will take 2 seconds, sometimes a day or two; if it looks like you can't immediately see it, come up with whatever quick approximation you can and update towards "I don't know what's going on here". Come back later for practice.

It doesn't have to be a formal proof that would convince even the most skeptical mathematician or outsmart even the most powerful demon, but be sure to see the truth.

Without this skill of going back to First Principles, I think you would not fully get the point of truly a part of you. Why is being able to regenerate your knowledge useful? What are the hidden qualifications on that? How does it work? (See what I'm doing here?) Once you see many examples of the kind of expanded and formidably precise knowledge you get from having performed a derivation, and the vague and confusing state of having only a theorem, you will notice the difference. What the difference is, in terms of a derivation From First Principles, is left as an exercise for the reader (ie. I don't know). Even without that, though, having seen the difference is a huge step up.

From having seen the difference between derived and taught knowledge, I notice that one of the caveats of making knowledge Truly a Part of You is that just being able to get it From First Principles is not enough; actually having done the proof tells you a lot more than simply what the correct theorem is. Do not take my word for it; go do some proofs; see the difference.

So far I've just described something that has been unusually valuable for me. Can it be taught? Will others gain as much? I don't know; I got this one more or less by intellectual lottery. It can probably be tested, though:

Testing the "Prove It" Habit

In school, we had this awesome teacher for thermodynamics and fluid dynamics. He was usually voted best in faculty. His teaching and testing style fit perfectly with my "learn first principles and derive on the fly" approach that I've just outlined above, so I did very well in his classes.

In the lectures and homework, we'd learn all the equations, where they came from (with derivations), how they are used, etc. He'd get us to practice and be good at straightforward application of them. Some of the questions required a bit of creativity.

On the exams, the questions were substantially easier, but they all required creativity and really understanding the first principles. "Curve Balls", we called them. Otherwise smart people found his tests very hard; I got all my marks from them. It's fair to say I did well because I had a very efficient and practiced From First Principles groove in my mind. (This was fair, because actually studying for the test was a reasonable substitute.)

So basically, I think a good discriminator would be to throw people difficult problems that can be solved with standard procedure and surface heuristics, and then some easier problems that require creative application of first principles, or don't quite work with standard heuristics (but seem to).

If your subjects have consistent scores between the two types, they are doing it From First Principles. If they get the standard problems right, but not the curve balls, they aren't.

Examples:

Straight: Bayesian cancer test. Curve: Here's the base rate and positive rate, how good is the test (likelihood ratio)?

Straight: Sunk cost on some bad investment. Curve: Something where switching costs, opportunity for experience make staying the correct thing.

Straight: Monty Hall. Curve: Ignorant Monty Hall.

Etc.
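For the first pair, here is a worked sketch in code (the numbers are my own illustration, not from the post): the straight question computes a posterior from base rate, sensitivity, and false-positive rate, while the curve ball asks you to characterize the test itself, which the likelihood ratio does.

```python
# Illustrative numbers (not from the post): 1% base rate, 80% sensitivity,
# 9.6% false-positive rate -- a classic mammography-style setup.
base_rate = 0.01
sensitivity = 0.80          # P(positive | disease)
false_positive = 0.096      # P(positive | no disease)

# Straight version: posterior probability of disease given a positive test.
positive_rate = sensitivity * base_rate + false_positive * (1 - base_rate)
posterior = sensitivity * base_rate / positive_rate

# Curve ball: "how good is the test?"  The likelihood ratio answers that
# independently of the base rate.
likelihood_ratio = sensitivity / false_positive

# Sanity check: prior odds times likelihood ratio gives the posterior odds.
prior_odds = base_rate / (1 - base_rate)
posterior_odds = prior_odds * likelihood_ratio
assert abs(posterior - posterior_odds / (1 + posterior_odds)) < 1e-12
```

The curve-ball lesson is that the likelihood ratio is a property of the test alone, while the posterior mixes in the base rate; someone running on surface heuristics tends to conflate the two.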

Exercises

Again, maybe this can't be taught, but here's some practice ideas just in case it can. I got substantial value from figuring these out From First Principles. Some may be correct, others incorrect, or correct in a limited range. The point is to use them to point you to a problem to solve; once you know the actual problem, ignore the heuristic and just go for truth:

Science says good theories make bold predictions.

Deriving From First Principles is a good habit.

Boats go where you point them, so just sail with the bow pointed to the island.

People who do bad things should feel guilty.

I don't have to feel responsible for people getting tortured in Syria.

If it's broken, fix it.

(post more in comments)

High School Lecture - Report

19 Xece 23 September 2012 02:06AM

This post is a follow-up report to this.

 

On Friday's lecture, I was able to briefly cover several topics as an introduction. They centred around rationality (what it is), truth (what it is and why we should pursue it), and Newcomb's Paradox.

The turnout was as expected (6 out of a total 7 group members, with 1 having other obligations that day). Throughout the talk I would ask for some proposed definitions before giving them. Unfortunately, when I asked what "truth" is, I got mysterious answers such as "truth is the meaning of life" and "truth is the pursuit of truth". When asked what they meant by their answers, they either rephrased what they said with the same vagueness or were unable to give an answer. One member, however, did say that "Truth is what is real", only to have other members ask what he meant by "real". It offered a rather nice opportunity for a map-and-territory tangent before giving some version of "The Simple Truth".

I used the definitions given in 'What Do We Mean By "Rationality"?' to describe epistemic and instrumental rationality, and gave several examples of what rationality is not (Mr. Spock, mere logic/reason, etc.). As a practice problem, I introduced Newcomb's Paradox. There was ample debate with an even split between one-boxers and two-boxers. Due to time constraints, we weren't able to come to a conclusion (although the one-boxing side was making a stronger argument). By the end of lunch period, everyone seemed to have a good grasp that rationality is simply making the best decision to achieve one's goals, whatever they may be.

Overall, I'd say it was successful. My next turn is on October 3rd, and apart from a little review, I'm going to go over the 5-second level and the use of words. Saying what we mean is something we as a group need to work on.

Eliezer's Sequences and Mainstream Academia

99 lukeprog 15 September 2012 12:32AM

Due in part to Eliezer's writing style (e.g. not many citations), and in part to Eliezer's scholarship preferences (e.g. his preference to figure out much of philosophy on his own), Eliezer's Sequences don't accurately reflect the close agreement between the content of The Sequences and work previously done in mainstream academia.

I predict several effects from this:

  1. Some readers will mistakenly think that common Less Wrong views are more parochial than they really are.
  2. Some readers will mistakenly think Eliezer's Sequences are more original than they really are.
  3. If readers want to know more about the topic of a given article, it will be more difficult for them to find the related works in academia than if those works had been cited in Eliezer's article.

I'd like to counteract these effects by connecting the Sequences to the professional literature. (Note: I sort of doubt it would have been a good idea for Eliezer to spend his time tracking down more references and so on, but I realized a few weeks ago that it wouldn't take me much effort to list some of those references.)

I don't mean to minimize the awesomeness of the Sequences. There is much original content in them (edit: probably most of their content is original), they are engagingly written, and they often have a more transformative effect on readers than the corresponding academic literature.

I'll break my list of references into sections based on how likely I think it is that a reader will have missed the agreement between Eliezer's articles and mainstream academic work.

(This is only a preliminary list of connections.)


High School Lectures

8 Xece 15 September 2012 06:05AM

Just recently at my high school, a group of classmates and I started a science club. A major component of this is listening and giving peer lectures on topics of physics, math, computer science, etc. I picked a topic a bit off to the side: philosophy and decision making. Naturally, this includes rationality. My plan is to start with something based off the sequences, specifically "How to Actually Change Your Mind" and "A Human's Guide to Words".

I was hoping the Less Wrong community could give me some suggestions, tips, or even alternative ways to approach this. There is no end goal; we just want to learn more and think better. All our members are in the top 5% of their grade academically. Most of us are seniors and have finished high school math, taking AP Calculus this year. We have covered basic statistics and Bayes' Theorem, but have only applied it to the Disease Problem.

Any help or ideas are appreciated.

 

Update: Thank you for all these suggestions! They are incredibly helpful for me. I will attempt to make a recording of the lecture period if possible. I will make another discussion post sometime next weekend (the first lecture is next Friday) to report how it went.

 

Update 2: Report here.