
The barriers to the task

-7 Elo 18 August 2016 07:22AM

Original post: http://bearlamp.com.au/the-barriers-to-the-task/


For about two months now I have been putting in effort to run in the mornings.  To make this happen, I had to take away all the barriers between me and actually doing it.  There were plenty of them, and I failed to leave my house plenty of times.  Some examples:

Making sure I don't need the right clothes - I leave my house shirtless and barefoot, and grab my key on the way out.

Pre-commitment to run - I take my shirt off when getting into bed the night before, so I don't even have to consider the action in the morning when I roll out of bed.

Being busy in the morning - I no longer plan any appointments before 11am.  Depending on the sunrise (I don't use alarms), I wake up, spend some time reading, then roll out of bed, go to the toilet, and leave the house.  In Sydney we have just passed the depths of winter and it's beginning to get light earlier and earlier in the morning.  That's easy now, but it was harder when getting up at 7 meant getting up in the dark.

There were days when I would wake up at 8am, stay in bed until 9am, then realise that if I left for a run (which takes around an hour - 10am), came back to have a shower (which takes 20 minutes - 10:20), then travelled to my first meeting (which can take 30 minutes - 10:50), anything going wrong would make me late for an 11am appointment.  And if I have a 10am meeting, I have to skip my run entirely to get there on time.

Going to bed at a reasonable hour - I am still getting used to deciding not to work myself ragged.  I decided to accept that sleep is important, and to trust my body to sleep as long as it needs.  Keeping healthy sleep habits sometimes even earns me bonus time.  But if I go to sleep after midnight I might not get up until later, which means my running time gets squeezed out by other habits.

Deciding where to run - Google Maps, look for local parks, plan a route with the fewest roads and least traffic.  I did this once and then it was done.  It was also exciting to measure the route and be able to run further and further each day/week/month.


What's in your way?

If you are not doing something that you think is good and right (or healthy, or otherwise desirable), there are likely things in your way.  If you just found out about an action that is good, well and right, and there is nothing stopping you from doing it - great.  You are lucky this time - Just.Do.It.

If you are one of the rest of us, who know that:

  • daily exercise is good for you
  • the right amount of sleep is good for you
  • eating certain foods is better than eating others
  • certain social habits are better than others
  • certain hobbies are more fulfilling (to our needs or goals) than others

and have known it for a while but still find yourself not taking the actions you want - then it's time to start asking what is in your way.  You might find the answer on someone else's list, but that's looking for a needle in a haystack.

You are much better off doing this (System 2 exercise):

  1. Take 15 minutes with pencil and paper.
  2. At the top write, "I want to ______________".
  3. If you know that's true, you might not need this step; if you are not sure, write out why it might or might not be true.
  4. Write down the barriers that are in the way of you doing the thing.  Think:
    • "can I do this right now?" (this might not always be an action you can take while sitting around thinking about it - e.g. eating different foods)
    • "why can't I just do this at every opportunity that arises?"
    • "how do I increase the frequency of opportunities?"
  5. Write out the things you are doing instead of that thing.
    These are barriers in your way as well.
  6. For each point, consider what you are going to do about it.

Questions:

  • What actions have you tried to take on?
  • What barriers have you encountered in doing so?
  • How did you solve that barrier?
  • What are you struggling to take on in the future?

Meta: this borrows from the Immunity to Change process, which is best read about in the book "Right Weight, Right Mind".  It also borrows from CFAR-style techniques like resolve cycles (also known as focused grit), Hamming questions, and Murphy-jitsu.

Meta: this took one hour to write.

Cross-posted to LessWrong: http://lesswrong.com/lw/nuq

Should we enable public binding precommitments?

0 capybaralet 31 July 2016 07:47PM

The ability to make arbitrary public binding precommitments seems like a powerful tool for solving coordination problems.

We'd like to be able to commit to cooperating with anyone who will cooperate with us, as in the open-source prisoner's dilemma (although this simple case is still an open problem, AFAIK).  But we should be able to do this piecemeal.
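
A toy illustration (my own sketch, not from the post) of the simplest open-source prisoner's dilemma strategy: an agent that reads its opponent's source code and cooperates only if that source is identical to its own (a "clique bot").  The function names here are hypothetical, and more general approaches (e.g. proof-based agents) exist in the literature; this only shows what a commitment conditioned on another party's published program looks like.

```python
# Minimal "clique bot" sketch for the open-source prisoner's dilemma:
# cooperate iff the opponent's published source code is identical to mine.
# Names are illustrative only, not a standard library or API.
import inspect


def clique_bot(opponent) -> str:
    """Return 'C' (cooperate) iff the opponent runs exactly this source code."""
    my_src = inspect.getsource(clique_bot)
    their_src = inspect.getsource(opponent)
    return "C" if my_src == their_src else "D"


def defect_bot(opponent) -> str:
    """Always defect, regardless of the opponent."""
    return "D"


if __name__ == "__main__":
    print(clique_bot(clique_bot))  # 'C': identical source, so cooperate
    print(clique_bot(defect_bot))  # 'D': different source, so defect
```

This only handles the exact-copy case; cooperating piecemeal with agents whose source is merely provably cooperative is the hard, still-open part referred to above.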

It seems like we are moving in this direction, with things like Ethereum that enable smart contracts.  Technology should enable us to enforce more real-world precommitments, since we'll be able to more easily monitor and make public our private data.

Optimistically, I think this could allow us to solve coordination issues robustly enough to have a very low probability of any individual actor making an unsafe AI.  This would require a lot of people to make the right kind of precommitments.

I'm guessing there are a lot of potential downsides and ways it could go wrong, which y'all might want to point out.

Why You Should Be Public About Your Good Deeds

11 Gleb_Tsipursky 30 December 2015 04:06AM

(This will be mainly of interest to Effective Altruists, and is cross-posted on the Giving What We Can blog, the Intentional Insights blog, and the EA Forum)

 

When I first started donating, I did so anonymously. My default is to be humble and avoid showing off. I didn’t want others around me to think that I had a swelled head and held too high an opinion of myself. I also didn’t want them to judge my giving decisions, as some may have judged them negatively. I also had cached patterns associating public sharing of good deeds with the feelings I get from commercials: self-promotion and sleaziness.

I wish I had known back then that I could have done much more good by publicizing my donations and other good deeds, such as signing the Giving What We Can Pledge to donate 10% of my income to effective charities, or being public about my donations to CFAR in this LW forum post.

Why did I change my mind about being public? Let me share a bit of my background to give you the appropriate context.

As long as I can remember, I have been interested in analyzing how and why individuals and groups evaluated their environment and made their decisions to reach their goals – rational thinking. This topic became the focus of my research as a professor at Ohio State in the history of science, studying the intersection of psychology, cognitive neuroscience, behavioral economics, and other fields.

While most of my colleagues focused on research, I grew more passionate about sharing my knowledge with others, focusing my efforts on high-quality, innovative teaching. I perceived my work as cognitive altruism, sharing my knowledge about rational thinking, and students expressed much appreciation for my focus on helping them make better decisions in their lives. Separately, I engaged in anonymous donations to causes such as poverty alleviation.

Yet over time, I realized that by teaching only in the classroom, I would have a very limited impact, since my students were only a small minority of the population I could potentially reach. I began to consult academic literature on how to spread my knowledge broadly. Through reading classics in the field of social influence such as Influence: The Psychology of Persuasion and Made To Stick, I learned a great many strategies to multiply the impact of my cognitive altruism work, as well as my charitable giving.

One of the most important lessons was the value of being public about my activities. Both Influence: The Psychology of Persuasion and subsequent research showed that our peers deeply impact our thoughts, feelings, and behaviors. We tend to evaluate ourselves based on what our peers think of us, and try to model behaviors that will cause others to have positive opinions about us. This applies not only to in-person meetings, but also to online communities.

A related phenomenon, social proof, illustrates how we evaluate appropriate behavior based on how we see others behaving. However, research also shows that people who exhibit more beneficial behaviors tend to avoid expressing themselves to those with less beneficial behaviors, resulting in overall social harm.

Learning about the importance of being public, especially for people engaged in socially beneficial habits, and especially in online communities that reach far more people than in-person ones, led to a deep transformation in my civic engagement. While it was not easy to overcome my shyness, I realized I had to do it if I wanted to optimize my positive impact on the world, both in cognitive altruism and in effective giving.

I shared this journey of learning and transformation with my wife, Agnes Vishnevkin, an MBA and non-profit professional. Together, we decided to co-found Intentional Insights, a nonprofit dedicated to spreading rational thinking and effective giving to a broad audience using research-based strategies for maximizing social impact. Uniting with others committed to this mission, we write articles and blog posts, make videos, author books, program apps, and collaborate with other organizations to share these ideas widely.

I also rely on research to make other decisions, such as my decision to take the Giving What We Can pledge. The strategy of precommitment is key here: we make a decision when we have the time to consider its long-term consequences, and we specifically wish to constrain the options of our future selves. That way, we can plan within a narrowed range of options and make the best possible use of the resources available to us.

Thus, I can plan to live on 90% of my income over my lifetime, and plan to decrease some of my spending in the long term so that I can give to charities that I believe are most effective for making the kind of impact I want to see in the world.

Knowing about the importance of publicizing good deeds and commitments, I recognize that I can do much more good by sharing my decision to take the pledge with others. All of us have friends, the large majority of us have social media channels, and we all have the power to be public about our good deeds. You can also consider fundraising for effective charities and being an advocate for effective altruism in your community.

According to the scholarly literature, by being public about our good deeds we can bring about much good in the world. Even though it may not feel as tangible as direct donations, sharing with others about our good deeds and supporting others doing so may in the end allow us to do even more good.

Fixing akrasia: damnation to acausal hell

2 joaolkf 03 October 2013 10:34PM

DISCLAIMER: This topic is related to a potentially harmful memetic hazard that has been rightly banned from Less Wrong. If you don't know what it is, it is more likely than not that you will be fine, but be advised. If you do know, do not mention it in the comments.


 

Abstract: The fact that humans cannot precommit very well might be one of our defences against acausal trades. If transhumanists figure out how to beat akrasia by some sort of drug or brain tweaks, that might make them much better at precommitment, and thus more vulnerable. That means solving akrasia might be dangerous, at least until we solve blackmail. If the danger is bad enough, even small steps should be considered carefully.



Strong precommitment and building detailed simulations of other agents are two relevant capabilities humans currently don't have. These capabilities have some unusual consequences for games. Most relevant games only arise when there is a chance of monitoring, commitment and multiple interactions. Hence being in a relevant game often implies cohabiting causally connected space-time regions with other agents. Nevertheless, being able to build detailed simulations of agents allows one to vastly increase the subjective probability a particular agent will assign to his next observational moment being under one's control, iff that agent has access to the relevant areas of the logical game-theoretic space. This doesn't seem desirable from that agent's perspective: it is extremely asymmetrical, and it allows more advanced agents to enslave less advanced ones even if they don't cohabit causally connected regions of the universe. Being acausally reachable by a powerful agent who can simulate 3^^^3 copies of you, but against which you cannot do much, is extremely undesirable.

However, and more generally, regions of the block universe can only be in a game with non-cohabiting regions if both are agents and if they can strongly precommit. Any acausal trade depends on precommitment; this is the only way an agreement can go across space-time, since it is made in the space of game-theoretic possibilities, as I am calling it. In the case I am discussing, a powerful agent would only have reason to even consider acausal trading with an agent if that agent can precommit. Otherwise, there is no way of ensuring acausal cooperation. If the other agent cannot understand, beforehand, that due to the peculiarities of the set of possible strategies it is better to always precommit to the strategies with higher payoff when considering all other strategies, then there is no trade to be made. It would be like trying to threaten a spider with a calm verbal sentence. If the other agent cannot precommit, there is no reason for the powerful agent to punish him for anything: he wouldn't be able to cooperate anyway, he wouldn't understand the game, and, more importantly for my argument, he wouldn't be able to follow through on his precommitment; it would break down eventually, especially since the evidence for it is so abstract and complex. The powerful agent might want to simulate the minor agent suffering anyway, but that would amount to pure sadism. Acausal trades can only reach regions of the universe capable of strong precommitment.

Moreover, an agent also needs reasonable epistemic access to the regions of logical space (certain areas of game theory, or TDT if you will) that indicate both the possibility of acausal trades and some estimate of the type-distribution of superintelligences willing to trade with him (most likely, future ones that the agent can help create). Forever deterring the advance of knowledge in that area seems unfeasible, or at best complicated and undesirable for other reasons.

It is clear that we (humans) don't want to be in an enslavable position. I believe we are not. One of the things excluding us from this position is our complete inability to precommit. This is a psychological constraint, a neurochemical constraint. We do not even have the ability to hold stable long-term goals; strong precommitment is neurochemically impossible. However, it seems we can change this with human enhancement: we could develop drugs that cure akrasia, or overcome the breakdown of will with some amazing psychological technique discovered by CFAR. It seems that, however desirable on other grounds, getting rid of akrasia presents severe risks. Even if we only slightly decrease akrasia, this would increase the probability that individuals with access to the relevant regions of logical space could precommit and become slaves. They might then proceed to cure akrasia for the rest of humanity.

Therefore, we should avoid trying to fundamentally fix akrasia for now, until we have a better understanding of these matters and perhaps solve the blackmail problem, or maybe until after FAI. My point here is merely that no one should endorse technologies (or psychological techniques) that propose to fundamentally fix a problem that would otherwise seem desirable to fix. It might look like a clear optimization process, but it could actually open the gates of acausal hell and damn humanity to eternal slavery.

 

(Thanks to cousin_it for the abstract. All mistakes are my responsibility.)

(EDIT: Added an explanation to back up the premise that acausal trade entails precommitment.)

Circular Preferences Don't Lead To Getting Money Pumped

-3 Mestroyer 11 September 2012 03:42AM

Edit: for reasons given in the comments, I don't think the question of what circular preferences actually do is well defined, so this is an answer to a wrong question.

 

If I like Y more than X, at an exchange rate of 0.9Y for 1X, and I like Z more than Y, at an exchange rate of 0.9Z for 1Y, and I like X more than Z, at an exchange rate of 0.9X for 1Z, you might think that given 1X and the ability to trade X for Y at an exchange rate of 0.95Y for 1X, and Y for Z at an exchange rate of 0.95Z for 1Y, and Z for X at an exchange rate of 0.95X for 1Z, I would trade in a circle until I had nothing left.

But actually, if I knew that I had circular preferences, and I knew that if I had 0.95Y I would trade it for (0.95^2)Z, which I would trade for (0.95^3)X, then I'd really be trading 1X for (0.95^3)X, which I'm obviously not going to do.
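
A quick numeric check (my own sketch, not part of the original post) of the arithmetic above: at a 0.95 exchange rate per trade, the full X -> Y -> Z -> X cycle returns 0.95^3 of the original X, so an agent that looks ahead refuses the first trade.

```python
# Sketch: one full lap of the supposed "money pump" at a 0.95 rate per trade.
RATE = 0.95

x = 1.0
y = RATE * x        # trade 1X for 0.95Y
z = RATE * y        # trade Y for (0.95^2)Z
x_back = RATE * z   # trade Z back for (0.95^3)X

print(x_back)       # 0.857375
print(x_back < x)   # True: the cycle strictly loses X, so don't start it
```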

Similarly, if the exchange rates are all 1:1, but each trade costs 1 penny, and I care about 1 penny much much less than any of 1X, 1Y, or 1Z, and I trade my X for Y, I know I'm actually going to end up with X - 3 cents, so I won't make the trade.

Unless I can set a Schelling fence, in which case I will end up trading once.

So if instead of being given X, I have a 1/3 chance of each of X, Y, and Z, I would hope I wouldn't set a Schelling fence, because then my 1/3 chance of each thing becomes a 1/3 chance of each thing minus the trading penalty. So maybe I'd want to be bad at precommitments, or would I precommit not to precommit?

How can humans make precommitments?

6 Incorrect 15 September 2011 01:19AM

How can you precommit to something where the commitment is carried out only after you know your commitment strategy has failed?

It would seem to make it impossible to commit to blackmail when carrying out the blackmail threat has negative utility. How can you possibly convince your rational future self to carry out a commitment they know will not work?

You could attempt to adopt a strategy of always following your commitments. From your current perspective this is useful, but once you have learned that your strategy has failed, what's to prevent you from just disregarding it?

If a commitment strategy will fail, you don't want to make the commitment; but if you will not follow the commitment even when the strategy fails, then you never made the commitment in the first place.

For example, in a nuclear war, why would you ever retaliate? Once you know your strategy of nuclear deterrence has failed, shooting back will only cause more civilian casualties.

I'm not saying commitments aren't useful, I'm just not sure how you can make them. How do you prevent your future self from reasoning their way out of them?

I apologize if reading this makes it harder for any of you to make precommitments. I'm hoping someone has a better solution than simply tricking your future self.