How I changed my exercise habits
In June 2013, I didn’t do any exercise beyond biking the 15 minutes to work and back. Now, I have a robust habit of hitting the gym every day, doing cardio and strength training. Here are the techniques I used to get from not having the habit to having it, some of them common wisdom and some of them my own ideas. Consider this post a case study/anecdata in what worked for me. Note: I wrote these ideas down around August 2013 but didn’t post them, so my memory was fresh at the time of writing.
1. Have a specific goal. Ideally this goal should be reasonably achievable and something you can see progress toward over medium timescales. I initially started exercising because I wanted more upper body strength to be better at climbing. My goal was to “become able to do at least one pull-up, or more if possible”.
Why it works: if you have a specific goal instead of a vague feeling that you ought to do something or that it’s what a virtuous person would do, it’s harder to make excuses. Skipping work with an excuse will let you continue to think of yourself as virtuous, but it won’t help with your goal. For this to work, your goal needs to be something you actually want, rather than a stand-in for “I want to be virtuous.” If you can’t think of a consequence of your intended habit that you actually want, the habit may not be worth your time.
2. Have a no-excuses minimum. This is probably the best technique I’ve discovered. Every day, with no excuses, I went to the gym and did fifty pull-downs on one of the machines. After that was done, I could do as much or as little else as I wanted. Some days I would do equivalent amounts of three other exercises; some days I would do an extra five reps and that’s it.
Why it works: this one has a host of benefits.
* It provides a sense of freedom: once I’m done with my minimum, I have a lot of choice about what and how much to do. That way it feels less like something I’m being forced into.
* If I’m feeling especially tired or feel like I deserve a day off, instead of skipping a day and breaking the habit I tell myself I’ll just do the minimum instead. Often once I get there I end up doing more than the minimum anyway, because the real thing I wanted to skip was the inconvenience of biking to the gym.
3. If you raise the minimum, do it slowly. I have sometimes raised the bar on what’s the minimum amount of exercise I have to do, but never to as much or more than I was already doing routinely. If you start suddenly forcing yourself to do more than you were already doing, the change will be much harder and less likely to stick than gradually ratcheting up your commitment.
4. Don’t fall into a guilt trap. Avoid associating guilt with doing only the minimum, or even with missing a day.
Why it works: feeling guilty will make thinking about the habit unpleasant, and you’ll downplay how much you care about it to avoid the cognitive dissonance. In particular, if you only do the minimum, tell yourself “I did everything I committed to do.” Then when you do more than the minimum, feel good about it! You went above and beyond. This way, doing what you committed to will sometimes include positive reinforcement, but never negative reinforcement.
5. Use Timeless Decision Theory and consistency pressure. Credit for this one goes to this post by user Zvi. When I contemplate skipping a day at the gym, I remember that I’ll be facing the same choice under nearly the same conditions many times in the future. If I skip my workout today, what reason do I have to believe that I won’t skip it tomorrow?
Why it works: Even when the benefits of one day’s worth of exercise don’t seem like enough motivation, I know my entire habit that I’ve worked to cultivate is at stake. I know that the more days I go to the gym the more I will see myself as a person who goes to the gym, and the more it will become my default action.
6. Evaluate your excuses. If I have what I think is a reasonable excuse, I consider how often I’ll skip the gym if I let myself skip it whenever I have that good of an excuse. If letting the excuse hold would make me use it often, I ignore it.
Why it works: I based this technique on this LW post.
7. Tell people about it. The first thing I did when I made my resolution to start hitting the gym was to tell a friend whose opinion I cared about. I also made a comment on LW saying I would make a post about my attempt at forming a habit, whether it succeeded or failed. (I wrote the post and then forgot to post it for over a year, but so it goes.)
Why it works: Telling people about your commitment invests your reputation in it. If you risk being embarrassed if you fail, you have an extra motivation to succeed.
I expect these techniques can be generalized to many desirable habits: eating healthily; spending time on social interaction; writing, coding, or working on a long-term project; getting outside for fresh air; etc.
Identity and quining in UDT
Outline: I describe a flaw in UDT that has to do with the way the agent defines itself (locates itself in the universe). This flaw manifests in a failure to solve a certain class of decision problems. I suggest several related decision theories that solve the problem, some of which avoid quining and are thus suitable for agents that cannot access their own source code.
EDIT: The decision problem I call here the "anti-Newcomb problem" already appeared here. Some previous solution proposals are here. A different but related problem appeared here.
Updateless decision theory, the way it is usually defined, postulates that the agent has to use quining in order to formalize its identity, i.e. to determine which portions of the universe are considered to be affected by its decisions. This leaves open the question of which decision theory agents that don't have access to their own source code (as humans intuitively appear to be) should use. I am pretty sure this question has already been posed somewhere on LessWrong, but I can't find the reference: help? It also turns out that there is a class of decision problems for which this formalization of identity fails to produce the winning answer.
When one is programming an AI, it doesn't seem optimal for the AI to locate itself in the universe based solely on its own source code. After all, you build the AI, you know where it is (e.g. running inside a robot), why should you allow the AI to consider itself to be something else, just because this something else happens to have the same source code (more realistically, happens to have a source code correlated in the sense of logical uncertainty)?
Consider the following decision problem, which I call the "UDT anti-Newcomb problem". Omega is putting money into boxes by the usual algorithm, with one exception: it isn't simulating the player at all. Instead, it simulates what a UDT agent would do in the player's place. Thus, a UDT agent would consider the problem to be identical to the usual Newcomb problem and one-box, receiving $1,000,000. On the other hand, a CDT agent (say) would two-box and receive $1,001,000 (!). Moreover, this problem reveals that UDT is not reflectively consistent: a UDT agent facing this problem would choose to self-modify given the choice. This is not an argument in favor of CDT, but it is a sign that something is wrong with UDT, the way it's usually done.
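The payoff structure can be checked with a toy model (a sketch only; the dollar constants come from the standard Newcomb setup, and the function names are my own):

```python
# Toy model of the "UDT anti-Newcomb problem": Omega fills the opaque
# box based on what a *UDT* agent would do in the player's place,
# regardless of who is actually playing.

def udt_choice():
    # A UDT agent can't distinguish this problem from ordinary Newcomb
    # (it is the one being simulated either way), so it one-boxes.
    return "one-box"

def payoff(player_choice):
    # Omega's prediction depends on the UDT agent, not on the player.
    opaque = 1_000_000 if udt_choice() == "one-box" else 0
    transparent = 1_000
    if player_choice == "one-box":
        return opaque
    return opaque + transparent

print(payoff("one-box"))   # what a UDT player receives: 1000000
print(payoff("two-box"))   # what a CDT player receives: 1001000
```

The asymmetry is visible directly: the opaque box's contents are fixed by `udt_choice()`, so a two-boxer pockets the extra $1,000 for free.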
The essence of the problem is that a UDT agent is using too little information to define its identity: only its source code. Instead, it should use information about its origin. Indeed, if the origin is an AI programmer or a version of the agent before the latest self-modification, it appears rational for that precursor to code the origin information into the successor agent. In fact, if we consider the anti-Newcomb problem with Omega's simulation using the correct decision theory XDT (whatever it is), we expect an XDT agent to two-box and leave with $1000. This might seem surprising, but consider the problem from the precursor's point of view. The precursor knows Omega is filling the boxes based on XDT, whatever the decision theory of the successor is going to be. If the precursor knows XDT two-boxes, there is no reason to construct a successor that one-boxes, so constructing an XDT successor might be perfectly rational! Moreover, a UDT agent playing the XDT anti-Newcomb problem will also two-box (correctly).
To formalize the idea, consider a program P, called the precursor, which outputs a new program A, called the successor. In addition, we have a program U, called the universe, which outputs a number u, called the utility.
Usual UDT suggests the following algorithm:

A := argmax_{f : X → Y} E[u | A = f]   (1)

Here, X is the input space, Y is the output space, u is the utility output by the universe, and the expectation value is over logical uncertainty. The successor A appears inside its own definition via quining.
The simplest way to tweak equation (1) in order to take the precursor into account is

A := argmax_{f : X → Y} E[u | P() = f]   (2)

i.e. the condition is now that the precursor P outputs a program implementing the mapping f. This seems nice since quining is avoided altogether. However, it is unsatisfactory. Consider the anti-Newcomb problem with Omega's simulation involving equation (2), and suppose the successor uses equation (2) as well. On the surface, if Omega's simulation doesn't involve P,¹ the agent will two-box and get its $1000 as it should. However, the computing power allocated for evaluating the logical expectation value in (2) might be sufficient to suspect that P's output might be an agent reasoning based on (2). This creates a logical correlation between the successor's choice and the result of Omega's simulation. For certain choices of parameters, this logical correlation leads to one-boxing.
The simplest way to solve the problem is letting the successor imagine that the precursor P produces a lookup table. Consider the following equation:

A := argmax_{f : X → Y} E[u | P() = T_f]   (3)

Here, T_f is a program which computes f using a lookup table: all of the values f(x) are hardcoded.
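As a toy illustration of the maximization in (2) and (3): enumerate every input-output mapping (equivalently, every lookup table) and pick the one with the highest expected utility. The two-element input space and the scoring rule below are illustrative stand-ins, not part of the post's formalism; a real implementation would evaluate a logical-uncertainty expectation conditioned on the precursor outputting that table.

```python
from itertools import product

X = ["x1", "x2"]   # input space
Y = [0, 1]         # output space

def expected_utility(f):
    # Stand-in universe model: reward mapping x1 -> 1 and x2 -> 0.
    return (f["x1"] == 1) + (f["x2"] == 0)

# Enumerate all |Y|^|X| lookup tables and maximize over them.
tables = [dict(zip(X, outputs)) for outputs in product(Y, repeat=len(X))]
best = max(tables, key=expected_utility)
print(best)  # {'x1': 1, 'x2': 0}
```

Note that the astronomical-size objection in the next paragraph is visible here already: the loop is over |Y|^|X| tables, which explodes for any realistic input space.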
For large input spaces, lookup tables are of astronomical size, and either maximizing over them or imagining them to run on the agent's hardware doesn't make sense. This is a problem with the original equation (1) as well. One way out is replacing the arbitrary functions with programs computing such functions. Thus, (3) is replaced by

A := argmax_q E[u | P() = q]   (4)

where q is understood to range over programs receiving input in X and producing output in Y. However, (4) looks like it can go into an infinite loop: what if the optimal q is described by equation (4) itself? To avoid this, we can introduce an explicit time limit t on the computation. The successor will then spend some portion t₁ of t performing the following maximization:

A := argmax_q E[u | P() = D_{t₁} q]   (4')

Here, D_{t₁} q is a program that does nothing for time t₁ and runs q for the remaining time t − t₁. Thus, the successor invests t₁ in maximization and t − t₁ in evaluating the resulting policy q on the input it received.
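The two-phase scheme in (4') can be sketched as follows (a toy sketch; the candidate set, the scoring function, and the time constants are illustrative assumptions, not the post's formalism):

```python
import time

def successor(x, candidates, score, t1=0.01):
    # Phase 1 (the role of the D_{t1} delay): spend roughly t1 seconds
    # maximizing over candidate policies, ignoring the actual input x.
    deadline = time.monotonic() + t1
    best, best_score = None, float("-inf")
    for q in candidates:
        if time.monotonic() > deadline:
            break
        s = score(q)
        if s > best_score:
            best, best_score = q, s
    # Phase 2: run the chosen policy on the input actually received.
    return best(x)

# Illustrative stand-ins: policies are constant functions, scored by
# the constant they return, so the search should settle on c = 4.
candidates = [lambda x, c=c: c for c in range(5)]
print(successor("ignored", candidates, score=lambda q: q(None)))
```

The deadline check is what prevents the regress worried about above: the search cannot recursively invoke itself forever, because the budget t₁ is spent before the chosen policy ever sees the input.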
In practical terms, (4') seems inefficient, since it completely ignores the actual input for a period of the computation. This problem exists in original UDT as well. A naive way to avoid it is giving up on optimizing the entire input-output mapping and focusing on the input which was actually received. This allows the following non-quining decision theory:

A(x₀) := argmax_{y ∈ Y} E[u | P() ∈ Q_{x₀,y}]   (5)

Here, Q_{x₀,y} is the set of programs which begin with a conditional statement that produces output y and terminates execution if the received input was x₀. Of course, ignoring counterfactual inputs means failing a large class of decision problems. A possible win-win solution is reintroducing quining²:
A(x₀) := argmax_{y ∈ Y} E[u | P() = C_{x₀,y}(A)]   (6)

Here, C_{x₀,y} is an operator which appends a conditional as above to the beginning of a program. Superficially, we still only consider a single input-output pair. However, instances of the successor receiving different inputs now take each other into account (as existing in "counterfactual" universes). It is often claimed that the use of logical uncertainty in UDT allows agents in different universes to reach a Pareto optimal outcome using acausal trade. If this is the case, then agents which have the same utility function should cooperate acausally with ease. Of course, this argument should also make the use of full input-output mappings redundant in usual UDT.
In case the precursor is an actual AI programmer (rather than another AI), it is unrealistic for her to code a formal model of herself into the AI. In a followup post, I'm planning to explain how to do without it (namely, how to define a generic precursor using a combination of Solomonoff induction and a formal specification of the AI's hardware).
1 If Omega's simulation involves the precursor P, this becomes the usual Newcomb problem, and one-boxing is the correct strategy.
2 Sorry, agents which can't access their own source code. You will have to make do with one of (3), (4') or (5).
How to become a PC?
"Cryonics has a 95% chance of failure, by my estimation; it would be downright /embarrassing/ to die on the day before real immortality is discovered. Thus, I want to improve my general health and longevity."
That thought has gotten me through three weeks of gradually increasing exercise and diet improvement (I'm eating an apple right now) - but my enthusiasm is starting to flag. So I'm looking for new thoughts that will help me keep going, and keep improving. A few possibilities that I've thought of:
Pride: "If I'm so smart, then I should be able to do /better/ than those other people who don't even know about Bayesian updates, let alone the existence of akrasia..."
Sloth: "If I stop now, it's going to be /so much/ harder and more painful to start up again, instead of just keeping on keeping on..."
Desire: "I already like hiking and camping - if I keep this up, I'll be able to carry enough weight to finally take that long trip I've occasionally considered..."
Curiosity: "I'm as geeky a nerd as you can find. I wonder how far I can hack my own body?"
Pride again: "I already keep a hiker's first-aid kit in my pocket, and make other preparations for events that happen rarely. How stupid do I have to be not to put at least that much effort into making my everyday life easier?"
Does anyone have any experience in such self-motivation? Does this set of mental tricks seem like a sufficiently viable approach? Are there any other approaches that seem worth a shot?
Open Thread: How much strategic thinking have you done recently?
I'm tired of people never, ever, ever, EVER stopping for 2 hours to 1) think about what their goals are, 2) check whether their current path leads to those goals, 3) correct course, and 4) create a system to verify, in the future, whether the goals are being achieved. I'm really tired of that. Really.
... so we may want to remind and encourage each other to do so, and exchange tips!
- Have you thought about your life goals recently?
- Do you know what your long-term and medium-term goals are?
- If you're facing big problems or annoyances, have you thought of ways of solving them?
- Do you have a system you use regularly that pushes you in the right direction?
See also: Humans are not automatically strategic, levels of action.
Effective Rationality Training Online
Article Prerequisite: Self-Improvement or Shiny Distraction: Why Less Wrong is anti-Instrumental Rationality
Introduction
The goal of this post is to explore the idea of rationality training; feedback and ideas are greatly appreciated.
Less Wrong’s stated mission is to help people become more rational, and it has made progress toward that goal. Members read and discuss useful ideas on the internet, get instant feedback because of the voting system, and schedule meetups with other members. Less Wrong also helps attract more people to rationality.
Less Wrong helps with sharing ideas, but it fails to help people put elements of epistemic and instrumental rationality into practice. This is a serious problem, but it would be hard to fix without altering the core functionality of Less Wrong.
Having separate websites for reading and discussing ideas and then actually using those ideas would improve the real world performance of the Less Wrong community while maintaining the idea discussion, “marketing”, and other benefits of the Less Wrong website.
How to create a useful website for self improvement
1. Knowledge Management
When reading blogs, people only see recent posts, and those posts are not significantly revised. A wiki would allow for the creation of a large body of organized knowledge that is frequently revised. Each wiki post would have a description, the benefits of the topic described, resources to learn the topic, user-submitted resources, and reviews of each resource. Posts would be organized hierarchically and voted on for usefulness, so that readers can efficiently find what they are looking for. Users could share self-improvement plans to help others improve effectiveness in general, or in a specific topic, as quickly as possible.
2. Effective Learning
Resources to learn topics should be arranged or written for effective skill acquisition, and there may be different resource categories like exercises for deliberate practice or active recall questions for spaced repetition.
3. Quality Contributors
Contributors would, at the very least, need to know how to write articles that support the skill-acquisition process agreed upon by the community. Requiring writing and research skills would produce higher quality work. I am not sure whether being a rationalist would improve the quality of articles.
Problems
1. Difficult requirements
The number of prerequisites necessary to contribute to and use the wiki would greatly lower the number of people who would be able to benefit from it. It's a trade-off between effectiveness and popularity. What elements should be included to maximize the effectiveness of the website?
2. Interest
There has to be enough interest in the website, or else a different project should be started instead. How many people in the Less Wrong community, and the world at large, would be interested in self improvement and rationality?
3. Increasing the effectiveness of non-altruistic people
How much of the target audience wants to improve the world? If most do not, then the wiki would essentially be a net negative on the world. What should the criteria be to view and contribute to the wiki? Perhaps only Less Wrong members should be able to view and edit the wiki, and contributors must read a quick start guide and pass a quick test before being allowed to post.
Optimizing for attractiveness
I want to spend a substantial fraction of my time optimizing myself in the direction of being more attractive to females, and I'd really appreciate your suggestions on how to do so.
Why
It should be pretty self-explanatory, but in case you're wondering: relationships are a big part of personal happiness, and where I am now, I feel more inclined toward increasing the number and variety of short- or medium-term relationships than toward just picking a girl who wants to be my wife and running with it. But at the moment women aren't exactly chasing me down the street, so I want to make my company a more pleasant experience than it currently is.
Mind-killing
I sincerely think this post should provoke none of the above. I'm not asking for ways to trick women into liking me, nor to debate gender differences in preferences, etc. Please try really hard to keep mind-killing subjects out of your comments. I'm 'just' asking for ways to change myself into a more sexually attractive human being.
Caveat(s)
I'm aware of the standing dichotomy: attraction can be created vs. attraction can only be amplified. In either case there should be at least something that can be done.
I'm also aware that some people strongly dislike posts full of personal details, so I will try to keep them at minimum, while at the same time trying to provide the necessary description of my situation.
I would like
Please aim for advice on stable improvements, in aspects that are known to be sexually attractive to straight women in the 20-40 age range.
For example, I know that height and facial symmetry are known to be universally attractive, but I cannot really change those, and shoe lifts or make-up are such short-term solutions that they border on 'tricking women' (yes, I know that women use those tricks too; I simply would like to invest my time better).
My situation
This is the shortest possible description: I'm a straight male in my thirties, heavily overweight, living in Italy in a 20k people town, with a job paying me about $20k a year.
If you think you need more details ask for them in the comments or PM me.
What I'm already doing/planning to do
The first obvious choice is getting fit, although I've been trying different diets for about two years with no results, so I'd really appreciate pointers in that direction. I've also heard about training programs that tell you to concentrate on the shoulders, because a shoulder-to-waist ratio of 1.5 or more is apparently especially attractive.
I've also been told multiple times, by multiple sources, that women value confidence, competence and leadership. I understand the confidence part as being able to express your interest without embarrassment (but still in a socially graceful manner), but I would really like pointers on which areas of my life I could work on to become more competent or a leader. In what domains do women like competence/leadership?
My only hobbies at the moment are the game of Go and dabbling in math/logic/AI, which, as fascinating as they are, are seldom considered very attractive.
What I'm not sure about
Is fashion important? I understand that I need to dress well for my build, but I would like to know if a Versace button-down shirt is more attractive than a plain-brand one.
False beliefs
Do you think I am doing the right thing? Or am I wrong in my search for attractiveness? Should I concentrate on something totally unrelated? Does physical appearance matter, or should I concentrate more on character? Am I completely off track?
If you think I'm grossly mistaken, in the name of Omega let me know!
Downvote
If you think this post doesn't belong in a community devoted to rationality and self-improvement, feel free to downvote, but at least try to suggest a better way to phrase the problem, or point me to another community where I can ask the same question.
Thank you very much!
My simple hack for increased alertness and improved cognitive functioning: very bright light
This is a simple idea that I came up with by myself. I was looking for a means to enter high functioning lots-of-beta-waves modes without the use of chemical stimulants. What I found was that very bright light works really, really well.
I got the brightest light bulbs I could get cheaply: 105 watts of incandescents with halogen gas, billed as the equivalent of 130 watts of incandescent light. And I got an adaptor that lets me screw four of those into the same socket in the ceiling. The result is about as painful to look at as the sun. It makes my (small) room brighter than a clear summer's day at my latitude, and slightly brighter than a supermarket.
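As a rough sanity check on those brightness numbers (a back-of-the-envelope sketch; the lumen output per bulb and the room size are my assumed values, not the author's measurements):

```python
# Rough average illuminance from four ~105 W halogen incandescents
# in a small room. Assumed values, for illustration only.
lumens_per_bulb = 1900    # plausible for a 105 W halogen incandescent
bulbs = 4
floor_area_m2 = 10        # a small room

# Ignoring wall losses: average illuminance = total flux / area.
lux = bulbs * lumens_per_bulb / floor_area_m2
print(round(lux))         # ~760 lux

# For comparison: supermarket aisles are typically around 750 lux,
# while outdoor daylight ranges from ~10,000 lux (overcast) to
# ~100,000 lux (direct sun).
```

So "slightly brighter than a supermarket" is consistent with this estimate, though outdoor daylight is still one to two orders of magnitude brighter.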
I guess it affects adenosine much like caffeine does because that's what it feels like. Yet unlike caffeine, it can be rapidly turned on and off, literally with the flip of a switch.
For waking up in the morning, I find bright light more effective than a 200mg caffeine tablet, although my caffeine tolerance is moderate for a scientist.
I have not compared the effects of very bright light to modafinil, which requires a prescription in my country.
When under this amount of light, I need to remind myself to go to bed, because I tire about three hours later than with common luminosity. Yet once I switch it off, I can usually sleep within a few minutes, as (I'm guessing) a flood of unblocked adenosine suddenly overwhelms me. I used to have those unproductive late hours where I was too awake to sleep but too tired to be smart. I don't have those anymore.
You've probably heard of light therapy, which uses light to help manage seasonal affective disorder. I don't have that issue, but I definitely notice that the light does improve my mood. (Maybe that's simply because I like to function well.) I'm pretty sure the expensive "light therapy bulbs" you can get are scams, because the color of the light doesn't actually make a difference. The amount of light does.
One nice side benefit is that it keeps me awake while meditating, so I don't need the upright posture that usually does that job. Without the need for an upright posture, I can go beyond two hours straight, which helps enter more profoundly altered states.
After about 10 months of almost daily use of this lighting, I have not noticed any decrease in effectiveness. I do notice I find normally-lit rooms comparatively gloomy, and have an increasingly hard time understanding why people tolerate that. Supermarkets and offices are brightly lit to make the rats move faster - why don't we do that at our homes and while we're at it, amp it up even further? After all, our brains were made for the African savanna, which during the day is a lot brighter than most apartments today.
Since everyone can try this for a few bucks, I hope some of you will. If you do, please provide feedback on whether it works as well for you as it does for me. Any questions?
How confident should we be?
What should a rationalist do about confidence? Should he lean harder towards
- relentlessly psyching himself up to feel like he can do anything, or
- having true beliefs about his abilities in all areas, coldly predicting his likelihood of success in a given domain?
I don't want to falsely construe these as dichotomous. The real answer will probably dissolve 'confidence' into smaller parts and indicate which parts go where. So which parts of 'confidence' correctly belong in our models of the world (which must never be corrupted) or our motivational systems (which we may cut apart and put together however helps us achieve our goals)? Note that this follows the distinction between epistemic and instrumental rationality.
Eliezer offers a decision criterion in The Sin of Underconfidence:
Does this way of thinking make me stronger, or weaker? Really truly?
It makes us stronger to know when to lose hope already, and it makes us stronger to have the mental fortitude to kick our asses into shape so we can do the impossible. Lukeprog prescribes boosting optimism "by watching inspirational movies, reading inspirational biographies, and listening to motivational speakers." That probably makes you stronger too.
But I don't know what to do about saying 'I can do it' when the odds are against me. What do you do when you probably won't succeed, but believing that Heaven's army is at your back would increase your chances?
My default answer has always been to maximize confidence, but I acted this way long before I discovered rationality, and I've probably generated confidence for bad reasons as often as I have for good reasons. I'd like to have an answer that prescribes the right action all of the time. I want to know when confidence steers me wrong, and when to stop increasing my confidence. I want the real answer, not the historically-generated heuristic.
I can't help but feel like I'm missing something basic here. What do you think?
[LINK] Daniel Pink talks about Motivation
A little over a week ago, my workplace watched this video as part of a "self-improvement" seminar.
I hadn't seen this linked anywhere on LW yet, and thought it might be relevant, given lukeprog's article on motivation.
Software for Critical Thinking, Prof. Geoff Cumming
Prof. Geoff Cumming has done some interesting work. Of particular relevance to the LW community, he has studied software for enhancing critical thinking.
My past research: I worked on Computer tools for enhancing critical thinking, with Tim van Gelder. We studied argument mapping, and Tim’s wonderful Reason!Able software for critical thinking. This has proved very effective in university and school classrooms as the basis for effective enhancement of critical thinking. In an ARC-funded project we evaluated the software and Tim’s related educational materials. We found evidence that a one semester critical thinking course, based on Reason!Able, gives a very substantial increase—considerably greater than reported in previous evaluations of critical thinking courses—in performance on standardised tests.
Tim’s software has been further developed by his company Austhink Software, and is now available commercially as Rationale and bCisive: both are fabulous! http://www.austhink.org/ http://bcisive.austhink.com/