
Why I'm working on satisficing intelligence augmentation (and not blogging about it much)

3 whpearson 05 February 2017 05:40PM
  • Paradigm shifting seems necessary for general intelligence.
  • It seems likely that to be able to perform paradigm shifts you need to change your internal language. This seems incompatible with having a fixed goal within a language.
  • So a maximising AGI seems to combine incompatible requirements.
  • Having the system satisfice some abstract notion of good and bad seems compatible with paradigm shifting, and with humans.
  • In order to push the system to improve, you need some method that can change what counts as good as the system develops.
  • It makes sense to have individual humans control what is good, and also have them teach the systems the meanings of words and how to behave.
  • In essence, make the system be part of the human: it would not have its own verbalised goals apart from those given by the human (and since the human would have taught the system the meaning of the words, there is less chance of misinterpretation).
  • It also makes sense to have lots of such systems, in case something goes wrong with any individual one.
  • Because of paradigm shifting, I do not expect any one augmented intelligence to dominate.

This means I've got to care about a whole bunch of other problems that the AI singleton people don't have to worry about.

I realize I should unpack all of these, but blogging is not the answer. To know what sort of satisficing system will work in the real world with real people, we need to experiment (and maybe paradigm shift ourselves a few times). Only then can we figure out how it will evolve over time.

Having a proof of concept will also focus people's attention, more than writing a bunch of words.

If you want to work with me, know someone who might want to, or want to point out some flaws in my reasoning such that there is a simpler way forward within the kind of world I think it is, I am contactable at wil (one l) . my surname @gmail.com. But I think I'm done with LW for now. Good luck with the revamp.

[Link] The humility argument for honesty

4 Benquo 05 February 2017 05:26PM

[Link] Kahneman's checklist to avoid cognitive biases and make better decisions

3 sleepingthinker 05 February 2017 11:13AM

[Link] Against willpower as a scientific or otherwise useful concept (Nautilus Magazine)

0 Kaj_Sotala 04 February 2017 10:11PM

Weekly LW Meetups

0 FrankAdamek 03 February 2017 04:53PM

Why is the surprisingly popular answer correct?

19 Stuart_Armstrong 03 February 2017 04:24PM

In Nature, there's been a recent publication arguing that the best way of gauging the truth of a question is to get people to report their views on the truth of the matter, and their estimate of the proportion of people who would agree with them.

Then, it's claimed, the surprisingly popular answer is likely to be the correct one.

In this post, I'll attempt to sketch a justification as to why this is the case, as far as I understand it.

First, an example of the system working well:

 

Capital City

Canberra is the capital of Australia, but many people think the actual capital is Sydney. Suppose only a minority knows that fact, and people are polled on the question:

Is Canberra the capital of Australia?

Then those who think that Sydney is the capital will think the question is trivially false, and will generally not see any reason why anyone would believe it true. They will answer "no" and predict a high proportion of people answering "no".

The minority who know the true capital of Australia will answer "yes". But most of them will likely know a lot of people who are mistaken, so they won't predict a high proportion of people answering "yes". Even if they do, there are few of them, so the population's average estimate of the proportion answering "yes" will still be low.

Thus "yes", the correct answer, will be surprisingly popular.

A quick sanity check: if we asked instead "Is Alice Springs the capital of Australia?", then those who believe Sydney is the capital will still answer "no" and claim that most people would do the same. Those who believe the capital is in Canberra will answer similarly. And there will be no large cache of people believing in Alice Springs being the capital, so "yes" will not be surprisingly popular.

What is important here is that adding true information to the population tends to move the proportion of people believing the truth more than it moves people's estimate of that proportion.
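To make the mechanism concrete, here is a minimal simulation of the Canberra example. This is not from the Nature paper; the population split and the respondents' estimates are numbers I made up for illustration.

```python
import random

# Toy poll: 30% know the capital is Canberra; 70% believe it is Sydney.
N = 100_000
knowers = int(N * 0.3)

yes_votes = 0
predicted_yes = []  # each respondent's estimate of the fraction answering "yes"

for i in range(N):
    if i < knowers:
        yes_votes += 1
        # Knowers realise most people are mistaken, so they predict
        # only a modest "yes" share (assumed ~40% here).
        predicted_yes.append(random.gauss(0.4, 0.05))
    else:
        # Sydney believers answer "no" and expect almost everyone to agree.
        predicted_yes.append(random.gauss(0.1, 0.05))

actual_yes = yes_votes / N                   # ~0.30
mean_predicted_yes = sum(predicted_yes) / N  # ~0.19

# "Yes" is surprisingly popular: its actual share exceeds the predicted share.
print(f"actual: {actual_yes:.2f}, predicted: {mean_predicted_yes:.2f}")
```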

 

No differential information:

Let's see how that setup could fail. First, it could fail in a trivial fashion: the Australian Parliament and the Queen secretly conspire to move the capital to Melbourne. As long as they aren't included in the sample, nobody knows about the change. In fact, nobody can distinguish a world in which that was vetoed from one where it passed. So the proportion of people who know the truth - that being those few deluded souls who already thought the capital was in Melbourne, for some reason - is no higher in the world where it's true than in the one where it's false.

So the population opinion has to be truth-tracking, not in the sense that the majority opinion is correct, but in the sense that more people believe X is true, relatively, in a world where X is true versus a world where X is false.


Systematic bias in population proportion:

A second failure mode could happen when people are systematically biased in their estimate of the general opinion. Suppose, for instance, that the following headline went viral:

"Miss Australia mocked for claims she got a doctorate in the nation's capital, Canberra."

And suppose that those who believed the capital was in Sydney thought "stupid beauty contest winner, she thought the capital was in Canberra!". And suppose those who knew the true capital thought "stupid beauty contest winner, she claimed to have a doctorate!". So the actual proportion holding the belief doesn't change much at all.

But then suppose everyone reasons "now, I'm smart, so I won't update on this headline, but some other people, who are idiots, will start to think the capital is in Canberra." Then they will update their estimate of the population proportion. And Canberra may no longer be surprisingly popular, just expectedly popular.

 

Purely subjective opinions

How would this method work on a purely subjective opinion, such as:

Is Picasso superior to Van Gogh?

Well, there are two ways of looking at this. The first is to claim this is a purely subjective opinion, and as such people's beliefs are not truth tracking, and so the answers don't give any information. Indeed, if everyone accepts that the question is purely subjective, then there is no such thing as private (or public) information that is relevant to this question at all. Even if there were a prior on this question, no-one can update on any information.

But now suppose that there is a judgement that is widely shared, that, I don't know, blue paintings are objectively superior to paintings that use less blue. Then suddenly answers to that question become informative again! Except now, the question that is really being answered is:

Does Picasso use more blue than Van Gogh?

Or, more generally:

According to widely shared aesthetic criteria, is Picasso superior to Van Gogh?

The same applies to moral questions like "is killing wrong?". In practice, that is likely to reduce to:

According to widely shared moral criteria, is killing wrong?

 

Planning 101: Debiasing and Research

12 lifelonglearner 03 February 2017 03:01PM

Planning 101: Techniques and Research

<Cross-posted from my blog>

[Epistemic status: Relatively strong. There are numerous studies showing that predictions often become miscalibrated. Overconfidence in itself appears fairly robust, appearing in different situations. The actual mechanism behind the planning fallacy is less certain, though there is evidence for the inside/outside view model. The debiasing techniques are supported, but more data on their effectiveness could be good.]

Humans are often quite overconfident, and perhaps for good reason. Back on the savanna, and even in some places today, bluffing can be an effective strategy for winning at life. Overconfidence can scare off enemies and avoid direct conflict.

When it comes to making plans, however, overconfidence can really screw us over. You can convince everyone (including yourself) that you’ll finish that report in three days, but it might still really take you a week. Overconfidence can’t intimidate advancing deadlines.

I’m talking, of course, about the planning fallacy, our tendency to make unrealistic predictions and plans that just don’t work out.

Being a true pessimist ain’t easy.

Students are a prime example of victims of the planning fallacy:

First, students were asked to predict when they were 99% sure they’d finish a project. When the researchers followed up with them later, though, only about 45% of the students (less than half) had actually finished by their own predicted times [Buehler, Griffin, Ross, 1995].

Even more striking, students working on their psychology honors theses were asked to predict when they’d finish, “assuming everything went as poorly as it possibly could.” Yet, only about 30% of students finished by their own worst-case estimates [Buehler, Griffin, Ross, 1995].

Similar overconfidence was also found in Japanese and Canadian cultures, giving evidence that this is a human (and not US-culture-based) phenomenon. Students continued to make optimistic predictions, even when they knew the task had taken them longer last time [Buehler and Griffin, 2003, Buehler et al., 2003].

As a student myself, though, I don’t mean to just pick on us.

The planning fallacy affects projects across all sectors.

An overview of public transportation projects found that most of them were, on average, 20–45% above the estimated cost. In fact, research has shown that these poor predictions haven’t improved at all in the past 30 years [Flyvbjerg 2006].

And there’s no shortage of anecdotes, from the Scottish Parliament Building, which cost 10 times more than expected, or the Denver International Airport, which took over a year longer and cost several billion more.

When it comes to planning, we suffer from a major disparity between our expectations and reality. This article outlines the research behind why we screw up our predictions and gives three suggested techniques to suck less at planning.

 

The Mechanism:

So what’s going on in our heads when we make these predictions for planning?

On one level, we just don’t expect things to go wrong. Studies have found that we’re biased towards not looking at pessimistic scenarios [Newby-Clark et al., 2000]. We often just assume the best-case scenario when making plans.

Part of the reason may also be due to a memory bias. It seems that we might underestimate how long things take us, even in our memory [Roy, Christenfeld, and McKenzie 2005].

But by far the dominant theory in the field is the idea of an inside view and an outside view [Kahneman and Lovallo 1993]. The inside view is the information you have about your specific project (inside your head). The outside view is what someone else looking at your project (outside of the situation) might say.

Obviously you want to take the Outside View.

 

We seem to use inside view thinking when we make plans, and this leads to our optimistic predictions. Instead of thinking about all the things that might go wrong, we’re focused on how we can help our project go right.

Still, it’s the outside view that can give us better predictions. And it turns out we don’t even need to do any heavy-lifting in statistics to get better predictions. Just asking other people (from the outside) to predict your own performance, or even just walking through your task from a third-person point of view can improve your predictions [Buehler et al., 2010].

Basically, the difference in our predictions seems to depend on whether we’re looking at the problem in our heads (a first-person view) or outside our heads (a third-person view). Whether we’re the “actor” or the “observer” in our minds seems to be a key factor in our planning [Pronin and Ross 2006].


Debiasing Techniques:

I’ll be covering three ways to improve predictions: MurphyjitsuReference Class Forecasting (RCF), and Back-planning. In actuality, they’re all pretty much the same thing; all three techniques focus, on some level, on trying to get more of an outside view. So feel free to choose the one you think works best for you (or do all three).

For each technique, I’ll give an overview and cover the steps first and then end with the research that supports it. They might seem deceptively obvious, but do try to keep in mind that obvious advice can still be helpful!

(Remembering to breathe, for example, is obvious, but you should still do it anyway. If you don't want to suffocate.)

 

Murphyjitsu:

“Avoid Obvious Failures”


Almost as good as giving procrastination an ass-kicking.

The name Murphyjitsu comes from the infamous Murphy’s Law: “Anything that can go wrong, will go wrong.” The technique itself is from the Center for Applied Rationality (CFAR), and is designed for “bulletproofing your strategies and plans”.

Here are the basic steps:

  1. Figure out your goal. This is the thing you want to make plans to do.
  2. Write down which specific things you need to get done to make the thing happen. (Make a list.)
  3. Now imagine it’s one week (or month) later, and yet you somehow didn’t manage to get started on your goal. (The visualization part here is important.) Are you surprised?
  4. Why? (What went wrong that got in your way?)
  5. Now imagine you take steps to remove the obstacle from Step 4.
  6. Return to Step 3. Are you still surprised that you’d fail? If so, your plan is probably good enough. (Don’t fool yourself!)
  7. If failure still seems likely, go through Steps 3–6 a few more times until you “problem proof” your plan.

Murphyjitsu is based on a strategy called a “premortem” or “prospective hindsight”, which basically means imagining the project has already failed and “looking backwards” to see what went wrong [Klein 2007].

It turns out that putting ourselves in the future and looking back can help identify more risks, or see where things can go wrong. Prospective hindsight has been shown to increase our predictive power so we can make adjustments to our plans — before they fail [Mitchell et al., 1989, Veinott et al., 2010].

This seems to work well, even if we’re only using our intuitions. While that might seem a little weird at first (“aren’t our intuitions pretty arbitrary?”), research has shown that our intuitions can be a good source of information in situations where experience is helpful [Klein 1999; Kahneman 2011]*.

While a premortem is usually done on an organizational level, Murphyjitsu works for individuals. Still, it’s a useful way to “failure-proof” your plans before you start them that taps into the same internal mechanisms.

Here’s what Murphyjitsu looks like in action:

“First, let’s say I decide to exercise every day. That’ll be my goal (Step 1). But I should also be more specific than that, so it’s easier to tell what “exercising” means. So I decide that I want to go running on odd days for 30 minutes and do strength training on even days for 20 minutes. And I want to do them in the evenings (Step 2).

Now, let’s imagine that it’s now one week later, and I didn’t go exercising at all! What went wrong? (Step 3) The first thing that comes to mind is that I forgot to remind myself, and it just slipped out of my mind (Step 4). Well, what if I set some phone / email reminders? Is that good enough? (Step 5)

Once again, let’s imagine it’s one week later and I made a reminder. But let’s say I still didn’t got exercising. How surprising is this? (Back to Step 3) Hmm, I can see myself getting sore and/or putting other priorities before it…(Step 4). So maybe I’ll also set aside the same time every day, so I can’t easily weasel out (Step 5).

How do I feel now? (Back to Step 3) Well, if once again I imagine it’s one week later and I once again failed, I’d be pretty surprised. My plan has two levels of fail-safes and I do want to exercise anyway. Looks like it’s good! (Done)”


Reference Class Forecasting:

“Get Accurate Estimates”


Predicting the future…using the past!

Reference class forecasting (RCF) is all about using the outside view. Our inside views tend to be very optimistic: we will see all the ways that things can go right, but none of the ways things can go wrong. By looking at past history — other people who have tried the same or a similar thing as us — we can get a better idea of how long things will really take.

Here are the basic steps:

  1. Figure out what you want to do.
  2. Check your records for how long it took you last time.
  3. That’s your new prediction.
  4. If you don’t have past information, look for about how long it takes, on average, to do our thing. (This usually looks like Googling “average time to do X”.)**
  5. That’s your new prediction!

Technically, the actual process for reference class forecasting works a little differently. It involves a statistical distribution and some additional calculations, but for most everyday purposes, the above algorithm should work well enough.
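As a sketch, the everyday version above is just a lookup over your own records. The function and numbers here are mine, not from any paper; it assumes durations are logged in hours.

```python
from statistics import median

def reference_class_forecast(past_durations, fallback_average=None):
    """Predict a task's duration from your own past records, falling back
    to a population average (e.g. from a web search) if you have no data."""
    if past_durations:
        # The median is robust to the one time everything went wrong.
        return median(past_durations)
    return fallback_average

print(reference_class_forecast([2.5, 3.0, 3.5]))           # -> 3.0 hours
print(reference_class_forecast([], fallback_average=4.0))  # -> 4.0 hours
```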

In both cases, we’re trying to take an outside view, which we know improves our estimates [Buehler et al., 1994].

When you Google the average time or look at your own data, you’re forming a “reference class”, a group of related actions that can give you info about how long similar projects tend to take. Hence, the name “reference class forecasting”.

Basically, RCF works by looking only at results. This means that we can avoid any potential biases that might have cropped up if we were to think it through. We’re shortcutting right to the data. The rest of it is basic statistics; most people are close to average. So if we have an idea of what the average looks like, we can be sure we’ll be pretty close to average as well [Flyvbjerg 2006; Flyvbjerg 2008].

The main difference in our above algorithm from the standard one is that this one focuses on your own experiences, so the estimate you get tends to be more accurate than an average we’d get from an entire population.

For example, if it usually takes me about 3 hours to finish homework (I use Toggl to track my time), then I’ll predict that it will take me 3 hours today, too.

It’s obvious that RCF is incredibly simple. It literally just tells you that how long something will take you this time will be very close to how long it took you last time. But that doesn’t mean it’s ineffective! Often, the past is a good benchmark of future performance, and it’s far better than any naive prediction your brain might spit out.

RCF + Murphyjitsu Example:

For me, I’ve found that using a mixture of Reference Class Forecasting and Murphyjitsu to be helpful for reducing overconfidence in my plans.

When starting projects, I will often ask myself, “What were the reasons that I failed last time?” I then make a list of the first three or four “failure-modes” that I can recall. I now make plans to preemptively avoid those past errors.

(This can also be helpful in reverse — asking yourself, “How did I solve a similar difficult problem last time?” when facing a hard problem.)

Here’s an example:

“Say I’m writing a long post (like this one) and I want to know what might go wrong. I’ve done several of these sorts of primers before, so I have a “reference class” of data to draw from. So what were the major reasons I fell behind on those posts?

<Cue thinking>

Hmm, it looks like I would either forget about the project, get distracted, or lose motivation. Sometimes I’d want to do something else instead, or I wouldn’t be very focused.

Okay, great. Now what are some ways that I might be able to “patch” those problems?

Well, I can definitely start by making a priority list of my action items, so I know which things I want to finish first. I can also do short 5-minute planning sessions to make sure I’m actually writing. And I can do some more introspection to try and see what’s up with my motivation.”

 

Back-planning:

“Calibrate Your Intuitions with Reality”

Back-planning involves, as you might expect, planning from the end. Instead of thinking about where we start and how to move forward, we imagine we’re already at our goal and go backwards.

Time-travelling inside your internal universe.

Here are the steps:

  1. Figure out the task you want to get done.
  2. Imagine you’re at the end of your task.
  3. Now move backwards, step-by-step. What is the step right before you finish?
  4. Repeat Step 3 until you get to where you are now.
  5. Write down how long you think the task will now take you.
  6. You now have a detailed plan as well as a better prediction!

The experimental evidence for back-planning basically suggests that people will predict longer times to start and finish projects.

There are a few interesting hypotheses about why back-planning seems to improve predictions. The general gist of these theories is that back-planning is a weird, counterintuitive way to think about things, which means it disrupts a lot of the mental processes that can lead to overconfidence [Wiese et al., 2016].

This means that back-planning can make it harder to fall into the groove of the easy “best-case” planning we default to. Instead, we need to actually look at where things might go wrong. Which is, of course, what we want.

In my own experience, I’ve found that going through a quick back-planning session can help my intuitions “warm up” to my prediction more. As in, I’ll get an estimate from RCF, but it still feels “off”. Walking through the plan via back-planning can help all the parts of me understand that it really will probably take longer.

Here’s the back-planning example:

“Right now, I want to host a talk at my school. I know that’s the end goal (Step 1). So the end goal is me actually finishing the talk and taking questions (Step 2). What happens right before that? (Step 3). Well, people would need to actually be in the room. And I would have needed a room.

Is that all? (Step 3). Also, for people to show up, I would have needed publicity. Probably also something on social media. I’d need to publicize at least a week in advance, or else it won’t be common knowledge.

And what about the actual talk? I would have needed slides, maybe memorize my talk. Also, I’d need to figure out what my talk is actually going to be on.

Huh, thinking it through like this, I’d need something like 3 weeks to get it done. One week for the actual slides, one week for publicity (at least), and one week for everything else that might go wrong.

That feels more ‘right’ than my initial estimate of ‘I can do this by next week.’”

 

Experimental Ideas:

Murphyjitsu, Reference Class Forecasting, and Back-planning are the three debiasing techniques that I’m fairly confident work well. This section is far more anecdotal. They’re ideas that I think are useful and interesting, but I don’t have much formal backing for them.

Decouple Predictions From Wishes:

In my own experience, I often find it hard to separate when I want to finish a task versus when I actually think I will finish a task. This is a simple distinction to keep in mind when making predictions, and I think it can help decrease optimism. The most important number, after all, is when I actually think I will finish—it’s what’ll most likely actually happen.

There’s some evidence suggesting that “wishful thinking” could actually be responsible for some poor estimates, but it’s far from definitive [Buehler et al., 1997; Krizan and Windschitl, 2009].

Incentivize Correct Predictions:

Lately, I’ve been using a 4-column chart for my work. I write down the task in Column 1 and how long I think it will take me in Column 2. Then I go and do the task. After I’m done, I write down how long it actually took me in Column 3. Column 4 is the absolute value of Column 2 minus Column 3, or my “calibration score”.

The idea is to minimize my score every day. It’s simple and it’s helped me get a better sense for how long things really take.
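Here is a minimal sketch of that chart as code; the tasks and minutes are invented.

```python
# Columns: task, predicted minutes, actual minutes; Column 4 is |pred - actual|.
tasks = [
    ("write report outline", 30, 50),
    ("answer email backlog", 20, 25),
    ("prep slides",          60, 95),
]

total = 0
for task, predicted, actual in tasks:
    score = abs(predicted - actual)  # the "calibration score"
    total += score
    print(f"{task:22s} predicted={predicted:3d} actual={actual:3d} score={score:3d}")

print(f"daily total to minimise: {total}")
```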

Plan For Failure:

In my schedules, I specifically write in “distraction time”. If you aren’t doing this, you may want to consider it. Most of us (me included) have wandering attentions, and I know I’ll lose at least some time to silly things every day.

Double Your Estimate:

I get it. The three debiasing techniques I outlined above can sometimes take too long. In a pinch, you can probably approximate good predictions by just doubling your naive prediction.

Most people tend to be less than 2X overconfident, but I think (pessimistically) sticking to doubling is probably still better than something like 1.5X.

 

Working in Groups:

Obviously, because groups are made of individuals, we’d expect them to be susceptible to the same overconfidence biases I covered earlier. Though some research has shown that groups are less susceptible to bias, more studies have shown that group predictions can be far more optimistic than individual predictions [Wright and Wells, 1985; Buehler et al., 2010]. “Groupthink” is a term used to describe the observed failings of decision making in groups [Janis, 1982].

Groupthink (and hopefully also overconfidence) can be countered by either assigning a “Devil’s Advocate” or engaging in “dialectical inquiry” [Lunenburg 2012]:

We give out more than cookies over here

A Devil’s Advocate is a person who is actively trying to find fault with the group’s plans, looking for holes in reasoning or other objections. It’s suggested that the role rotates, and it’s associated with other positives like improved communication skills.

A dialectical inquiry is where multiple teams try to create the best plan, and then present them. Discussion then happens, and then the group selects the best parts of each plan. It’s a little like building something awesome out of lots of pieces, like a giant robot.

This is absolutely how dialectical inquiry works in practice.

For both strategies, research has shown that they lead to “higher-quality recommendations and assumptions” (compared to not doing them), although it can also reduce group satisfaction and acceptance of the final decision [Schweiger et al. 1986].

(Pretty obvious though; who’d want to keep chatting with someone hell-bent on poking holes in your plan?)

 

Conclusion:

If you’re interested in learning (even) more about the planning fallacy, I’d highly recommend the paper The Planning Fallacy: Cognitive, Motivational, and Social Origins by Roger Buehler, Dale Griffin, and Johanna Peetz. Most of the material in this guide was taken from their paper. Do go check it out! It’s free!

Remember that everyone is overconfident (you and me included!), and that failing to plan is the norm. There are scary unknown unknowns out there that we just don’t know about!

Good luck and happy planning!

 

Footnotes:

* Just don’t go and start buying lottery tickets with your gut. We’re talking about fairly “normal” things like catching a ball, where your intuitions give you accurate predictions about where the ball will land. (Instead of, say, calculating the actual projectile motion equation in your head.)

** In a pinch, you can just use your memory, but studies have shown that our memory tends to be biased too. So as often as possible, try to use actual measurements and numbers from past experience.


Works Cited:

Buehler, Roger, Dale Griffin, and Johanna Peetz. "The Planning Fallacy: Cognitive, Motivational, and Social Origins." Advances in Experimental Social Psychology 43 (2010): 1-62. Social Science Research Network.

Buehler, Roger, Dale Griffin, and Michael Ross. "Exploring the Planning Fallacy: Why People Underestimate their Task Completion Times." Journal of Personality and Social Psychology 67.3 (1994): 366.

Buehler, Roger, Dale Griffin, and Heather MacDonald. "The Role of Motivated Reasoning in Optimistic Time Predictions." Personality and Social Psychology Bulletin 23.3 (1997): 238-247.

Buehler, Roger, Dale Griffin, and Michael Ross. "It's About Time: Optimistic Predictions in Work and Love." European Review of Social Psychology 6 (1995): 1-32.

Buehler, Roger, et al. "Perspectives on Prediction: Does Third-Person Imagery Improve Task Completion Estimates?" Organizational Behavior and Human Decision Processes 117.1 (2012): 138-149.

Buehler, Roger, Dale Griffin, and Michael Ross. "Inside the Planning Fallacy: The Causes and Consequences of Optimistic Time Predictions." Heuristics and Biases: The Psychology of Intuitive Judgment (2002): 250-270.

Buehler, Roger, and Dale Griffin. "Planning, Personality, and Prediction: The Role of Future Focus in Optimistic Time Predictions." Organizational Behavior and Human Decision Processes 92 (2003): 80-90.

Flyvbjerg, Bent. "From Nobel Prize to Project Management: Getting Risks Right." Project Management Journal 37.3 (2006): 5-15. Social Science Research Network.

Flyvbjerg, Bent. "Curbing Optimism Bias and Strategic Misrepresentation in Planning: Reference Class Forecasting in Practice." European Planning Studies 16.1 (2008): 3-21.

Janis, Irving Lester. Groupthink: Psychological Studies of Policy Decisions and Fiascoes. 1982.

Johnson, Dominic D. P., and James H. Fowler. "The Evolution of Overconfidence." Nature 477.7364 (2011): 317-320.

Kahneman, Daniel. Thinking, Fast and Slow. Macmillan, 2011.

Kahneman, Daniel, and Dan Lovallo. "Timid Choices and Bold Forecasts: A Cognitive Perspective on Risk Taking." Management Science 39.1 (1993): 17-31.

Klein, Gary. Sources of Power: How People Make Decisions. MIT Press, 1999.

Klein, Gary. "Performing a Project Premortem." Harvard Business Review 85.9 (2007): 18-19.

Krizan, Zlatan, and Paul D. Windschitl. "Wishful Thinking About the Future: Does Desire Impact Optimism?" Social and Personality Psychology Compass 3.3 (2009): 227-243.

Lunenburg, F. "Devil's Advocacy and Dialectical Inquiry: Antidotes to Groupthink." International Journal of Scholarly Academic Intellectual Diversity 14 (2012): 1-9.

Mitchell, Deborah J., J. Edward Russo, and Nancy Pennington. "Back to the Future: Temporal Perspective in the Explanation of Events." Journal of Behavioral Decision Making 2.1 (1989): 25-38.

Newby-Clark, Ian R., et al. "People Focus on Optimistic Scenarios and Disregard Pessimistic Scenarios While Predicting Task Completion Times." Journal of Experimental Psychology: Applied 6.3 (2000): 171.

Pronin, Emily, and Lee Ross. "Temporal Differences in Trait Self-Ascription: When the Self Is Seen as an Other." Journal of Personality and Social Psychology 90.2 (2006): 197.

Roy, Michael M., Nicholas J. S. Christenfeld, and Craig R. M. McKenzie. "Underestimating the Duration of Future Events: Memory Incorrectly Used or Memory Bias?" Psychological Bulletin 131.5 (2005): 738.

Schweiger, David M., William R. Sandberg, and James W. Ragan. "Group Approaches for Improving Strategic Decision Making: A Comparative Analysis of Dialectical Inquiry, Devil's Advocacy, and Consensus." Academy of Management Journal 29.1 (1986): 51-71.

Veinott, Beth, Gary Klein, and Sterling Wiggins. "Evaluating the Effectiveness of the Premortem Technique on Plan Confidence." Proceedings of the 7th International ISCRAM Conference (May 2010).

Wiese, Jessica, Roger Buehler, and Dale Griffin. "Backward Planning: Effects of Planning Direction on Predictions of Task Completion Time." Judgment and Decision Making 11.2 (2016): 147.

Wright, Edward F., and Gary L. Wells. "Does Group Discussion Attenuate the Dispositional Bias?" Journal of Applied Social Psychology 15.6 (1985): 531-546.

Civil resistance and the 3.5% rule

8 morganism 02 February 2017 06:53PM

Interesting, haven't seen anything data-driven like this before...

 

Civil resistance and the 3.5% rule.

https://rationalinsurgent.com/2013/11/04/my-talk-at-tedxboulder-civil-resistance-and-the-3-5-rule/

"no campaigns failed once they’d achieved the active and sustained participation of just 3.5% of the population—and lots of them succeeded with far less than that."

"Then I analyzed the data, and the results blew me away. From 1900 to 2006, nonviolent campaigns worldwide were twice as likely to succeed outright as violent insurgencies. And there’s more. This trend has been increasing over time—in the last fifty years civil resistance has become increasingly frequent and effective, whereas violent insurgencies have become increasingly rare and unsuccessful."

 

Data viz:

http://www.navcodata.org/

 

 

Interesting strategic viewpoint

http://politicalviolenceataglance.org/2016/11/15/how-can-we-know-when-popular-movements-are-winning-look-to-these-four-trends/

1. Size and diversity of participation.

2. Nonviolent discipline.

3. Flexible & innovative techniques: switching between concentrated methods like demonstrations and dispersed methods like strikes and stay-aways.

4. Loyalty shifts. If erstwhile elite supporters begin to abandon the opponent, remain silent when they would typically defend him, refuse to follow orders to repress dissidents, or drag their feet in carrying out day-to-day orders, the incumbent is losing his grip.

 

(observations from article above)

"The average nonviolent campaign takes about 3 years to run its course (that’s more than three times shorter than the average violent campaign, by the way)."

"The average nonviolent campaign is about eleven times larger as a proportion of the overall population as the average violent campaign.

"Nonviolent resistance campaigns are ten times more likely to usher in democratic institutions than violent ones."

 

 

 

original overview and links article:

https://www.theguardian.com/commentisfree/2017/feb/01/worried-american-democracy-study-activist-techniques

 

and a training site that has some exercises in group cohesion and communication tech, from the Guardian.

https://www.trainingforchange.org/tools

 

edit: The article that got me looking into this: how to strike in a gig economy, with international reach

 

http://www.transnational-strike.info/2017/02/01/how-do-we-strike-when-our-boss-is-a-machine-a-software-or-a-chain-struggles-in-the-gig-economy/

Death - an essay

1 dglukhov 02 February 2017 05:25PM

This essay may not hold a position by the end. See the original meaning of writing essays if you're confused.

A cursory search for discussion articles on death, though not necessarily optimized to exploit the best results, yielded several results that I wasn't satisfied with, particularly because nothing was definitive: nothing convinced me one way or the other. Why?

Testimonials of how awful the death of a loved one was to a person don't satisfy me, since I get emotional evidence, not necessarily empirical evidence. There were cultures that revered honorable deaths; I think of the Vikings, who searched for the opportunity to die if it meant dying well, and I'm sure there were many other complex emotional testimonies one could have gleaned from such figures, and still might. Historical stories about the systematic killings of members of certain nationalities, religious groups, and other affiliations strike me as the result of politics at its most grisly, where death is the ultimate punishment. And yet I can't help but think of what a martyr must have been thinking as their doom drew close. Or what people do when death is an inevitability that they cannot control, and have to cope with the idea of dying. One might claim that there is an almost universal understanding of death, yet research suggests that the fear of death in children is a learned phenomenon, that understanding the dread of death is a developmental milestone. (Note that these are not definitive sources on such subjects, and further discussion can improve or mitigate the effects of this potential evidence.)

Some might find death a liberation from lives of pain, whether attributed to individual circumstances, or because they convinced themselves their life is hell, or for other reasons. I will occasionally see a promoter of death, talking about lowering overpopulation, elder influence, and stagnation as a result of not having a timed lifespan to operate under. I'll see people arguing death is a meaningless concept, where time is an illusion, or where there are infinitely many copies of you existing in the multiverse, making immortality a moot concept and goal. Otherwise, some may claim that the fear of death is an evolutionary by-product, where individual organisms that feared death had better overall selection than those that did not (another cursory search for sources on this subject yielded myriad soft paywalls; additional verification would be highly appreciated).

And of course, who can forget the group of people interested in cryonic preservation, in the hopes of being saved by a new technological age. I imagine in such a group, death should not even be an option, either because avoiding it would do the most good, or because it is the ultimate solution for the more ego-centric utilitarians. One could argue here about the cost-effectiveness of such ambitions, but that brings up a whole other kettle of fish that could simply be left without debate if more basic assumptions about death are argued about instead.

Personally, I've had only one near-death experience (though mild compared to others). It involved nearly drowning in the ocean as I was getting pulled away from shore by the tide. I don't think I've ever worked so hard in my entire life; my life depended on me being able to swim back. Of course I understand the urge to live. Ironically enough, I've had suicidal thoughts as well, though attempts at such were not very creative, and ultimately scrapped for fear of putting my family in a bind. I can't really say anything on the nature of my personal stance, other than the fact that I'd like to accomplish more things before I kick the bucket, if I ever want to kick the bucket.

I am aware that this is a broad topic, and I suppose I'd prefer the topic stayed fresh in the discussion realm. Consider this an act of curiosity, exploration. I'm not so eager to declare any stances on the subject; vast subjects rarely get my eager conclusions. I hope very much that I'm not the only one, and that discussion will alleviate some of this apprehension.

Is Evidential Decision Theory presumptuous?

3 Tobias_Baumann 02 February 2017 01:41PM

I recently had a conversation with a staunch defender of EDT who maintained that EDT gives the right answer in the Smoker’s Lesion and even Evidential Blackmail. I came up with the following, even more counterintuitive, thought experiment:


--


By doing research, you've found out that there is either

(A) only one universe or

(B) a multiverse.

You also found out that the cosmological theory has a slight influence (via different physics) on how your brain works. If (A) holds, you will likely decide to give away all your money to random strangers on the street; if there is a multiverse, you will most likely not do that. Of course, causality flows in one direction only, i.e. your decision does not determine how many universes there are.

 

Suppose you have a very strong preference for (A) (e.g. because a multiverse would contain infinite suffering) so that it is more important to you than your money.

 

Do you give away all your money or not?

 

--


This is structurally equivalent to the Smoker's Lesion, but what's causing your action is the cosmological theory, not a lesion or a gene. CDT, TDT, and UDT would not give away the money because there is no causal (or acausal) influence on the number of universes. EDT would reason that giving the money away is evidence for (A) and therefore choose to do so.
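A toy calculation makes the disagreement explicit. All probabilities and utilities below are invented; the point is only that EDT conditions on the action while CDT uses the unconditioned prior over worlds.

```python
# World (A): one universe (strongly preferred). World (B): a multiverse.
PRIORS = {"A": 0.5, "B": 0.5}
P_GIVE = {"A": 0.9, "B": 0.1}  # P(you give away your money | world)
U_WORLD = {"A": 100.0, "B": 0.0}
U_MONEY = 1.0                  # utility of keeping your money

def edt_value(action):
    """EDT scores an action by the conditional expectation E[U | action]."""
    likelihood = {w: P_GIVE[w] if action == "give" else 1 - P_GIVE[w]
                  for w in PRIORS}
    joint = {w: PRIORS[w] * likelihood[w] for w in PRIORS}
    total = sum(joint.values())
    posterior = {w: joint[w] / total for w in PRIORS}
    money = U_MONEY if action == "keep" else 0.0
    return sum(posterior[w] * U_WORLD[w] for w in PRIORS) + money

def cdt_value(action):
    """CDT: the action has no causal effect on cosmology, so use the prior."""
    money = U_MONEY if action == "keep" else 0.0
    return sum(PRIORS[w] * U_WORLD[w] for w in PRIORS) + money

for action in ("give", "keep"):
    print(f"{action}: EDT={edt_value(action):.1f}  CDT={cdt_value(action):.1f}")
# give: EDT=90.0  CDT=50.0
# keep: EDT=11.0  CDT=51.0  -> EDT gives the money away; CDT keeps it.
```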


Apart from the usual “managing the news” point, this highlights another flaw in EDT: its presumptuousness. The EDT agent thinks that her decision spawns or destroys the entire multiverse, or at least reasons as if it did. In other words, EDT acts as if it affects astronomical stakes with a single thought.


I find this highly counterintuitive.

 

What makes it even worse is that this is not even a contrived thought experiment. Our brains are in fact shaped by physics, and it is plausible that different physical theories or constants both make an agent decide differently and make the world better or worse according to one’s values. So, EDT agents might actually reason in this way in the real world.

[Link] Why won't some people listen to reason?

2 Bound_up 02 February 2017 02:50AM

[Link] Yes, politics can make us stupid. But there’s an important exception to that rule.

3 Bound_up 02 February 2017 01:34AM

A question about the rules

4 phl43 01 February 2017 10:55PM

I'm new to Less Wrong and I have a question about the rules. I posted a link to the latest post on my blog, in which I argue in a polemical way against the claim that Trump's election caused a wave of hate crimes in the US. Someone complained about the tone of my post, which is fair enough (although I tend not to take very seriously criticism about tone that isn't accompanied by any substantive criticism), but I noticed that my link was taken down.

The same person also said that he or she thought LW tried to avoid politics, so I'm wondering if that's why the link was taken down. I don't really mind that my link was taken down, although I think part of the criticism was unfair. (The person in question complained that I hadn't provided any evidence that people had made the claim I was attacking, which is true, though only because I don't see how anyone could seriously deny it unless they have been living on another planet these past few months; in any case, I edited the post to address the criticism.) But I would like to know what I'm permitted to post, for future reference.

Like I said, I'm new here, so I apologize if I violated the rules and I'm not asking you to change them for me (obviously), but I would like to know what they are. (I didn't find anything that says we can't share links about politics, though it's true that when I browse past discussions, which I should probably have done in the first place, there doesn't seem to be any.) Is it forbidden to post anything that is related to politics, even if it makes a serious effort at evidence-based analysis, as I think it's fair to say my post does? I plan to post plenty of things on my blog that have nothing to do with politics, such as the post I just shared about moral relativism, but I just want to make sure I don't run afoul of the rules again.

[Link] What a Portuguese chronicler may teach us about moral relativism

1 phl43 01 February 2017 10:49PM

Humans as a truth channel

0 Stuart_Armstrong 01 February 2017 04:53PM

Crossposted at Intelligence Agents Forum.

Defining truth and accuracy is tricky, so when I've proposed designs for things like Oracles, I've either used a very specific and formal question, or an indirect criterion of truth.

Here I'll try to get a more direct system, so that an AI will tell the human the truth about a question in a way the human understands.

continue reading »

Hacking humans

3 Stuart_Armstrong 01 February 2017 04:08PM

Crossposted at the Intelligent Agents Forum.

It should be noted that the colloquial "AI hacking a human" can mean three different things:

  1. The AI convinces/tricks/forces the human to do a specific action.
  2. The AI changes the values of the human to prefer certain outcomes.
  3. The AI completely overwhelms human independence, transforming them into a weak subagent of the AI.

Different levels of hacking make different systems vulnerable, and different levels of interaction make different types of hacking more or less likely.

Group Rationality Diary, February 2017

1 Viliam 01 February 2017 12:11PM

This is the public group rationality diary for February, 2017. It's a place to record and chat about it if you have done, or are actively doing, things like:

  • Established a useful new habit

  • Obtained new evidence that made you change your mind about some belief

  • Decided to behave in a different way in some set of situations

  • Optimized some part of a common routine or cached behavior

  • Consciously changed your emotions or affect with respect to something

  • Consciously pursued new valuable information about something that could make a big difference in your life

  • Learned something new about your beliefs, behavior, or life that surprised you

  • Tried doing any of the above and failed

Or anything else interesting which you want to share, so that other people can think about it, and perhaps be inspired to take action themselves. Try to include enough details so that everyone can use each other's experiences to learn about what tends to work out, and what doesn't tend to work out.

(Note: Seems like we didn't have Group Rationality Diary for a few months, so feel free to write about things that happened in the previous months, too.)

February 2017 Media Thread

4 ArisKatsaris 01 February 2017 08:31AM

This is the monthly thread for posting media of various types that you've found that you enjoy. Post what you're reading, listening to, watching, and your opinion of it. Post recommendations to blogs. Post whatever media you feel like discussing! To see previous recommendations, check out the older threads.

Rules:

  • Please avoid downvoting recommendations just because you don't personally like the recommended material; remember that liking is a two-place word. If you can point out a specific flaw in a person's recommendation, consider posting a comment to that effect.
  • If you want to post something that (you know) has been recommended before, but have another recommendation to add, please link to the original, so that the reader has both recommendations.
  • Please post only under one of the already created subthreads, and never directly under the parent media thread.
  • Use the "Other Media" thread if you believe the piece of media you want to discuss doesn't fit under any of the established categories.
  • Use the "Meta" thread if you want to discuss about the monthly media thread itself (e.g. to propose adding/removing/splitting/merging subthreads, or to discuss the type of content properly belonging to each subthread) or for any other question or issue you may have about the thread or the rules.

[Link] The "I Already Get It" Slide

12 jsalvatier 01 February 2017 03:11AM

[Link] A great articulation of why people find it hard to adopt a naturalistic worldview

1 Stabilizer 31 January 2017 09:08PM

[Link] Putanumonit - A "statistical significance" story in finance

1 Jacobian 31 January 2017 05:54PM

[Link] "What Happens When Doctors Only Take Cash"? Everybody, Especially Patients, Wins

5 morganism 30 January 2017 11:57PM

[Link] A century of research following gifted kids.

3 morganism 30 January 2017 11:44PM

How often do you check this forum?

11 JenniferRM 30 January 2017 04:56PM

I'm interested in hearing from everyone who reads this.

Who is checking LW's Discussion area and how often?

1. When you check, how much voting or commenting do you do compared to reading?

2. Do you bother clicking through to links?

3. Do you check using a desktop or a smartphone? Do you just visit the website in a browser, or use an RSS something-or-other?

4. Also, do you know of other places that have more schellingness for the topics you think this place is centered on? (Or used to be centered on?) (Or should be centered on?)

I would ask this in the current open thread except that structurally it seems like it needs to be more prominent than that in order to do its job.

If you have very very little time to respond or even think about the questions, I'd appreciate it if you just respond with "Ping" rather than click away.

[Link] Prediction Calibration - Doing It Right

6 SquirrelInHell 30 January 2017 10:05AM

Open thread, Jan. 30 - Feb. 05, 2017

2 MrMind 30 January 2017 08:31AM

If it's worth saying, but not worth its own post, then it goes here.


Notes for future OT posters:

1. Please add the 'open_thread' tag.

2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)

3. Open Threads should start on Monday, and end on Sunday.

4. Unflag the two options "Notify me of new top level comments on this article" and "

Error and Terror: Are We Worrying about the Wrong Risks?

5 philosophytorres 30 January 2017 01:07AM

I would be happy to get feedback on this article, originally posted by the IEET:

 

When people worry about the dark side of emerging technologies, most think of terrorists or lone psychopaths with a death wish for humanity. Some future Ted Kaczynski might acquire a master's degree in microbiology, purchase some laboratory equipment intended for biohackers, and synthesize a pathogen that spreads quickly, is incurable, and kills 90 percent of those it infects.

Alternatively, Benjamin Wittes and Gabriella Blum imagine a scenario in which a business competitor releases “a drone attack spider, purchased from a bankrupt military contractor, to take you out. … Upon spotting you with its sensors, before you have time to weigh your options, the spider—if it is, indeed, an attack spider—shoots an infinitesimally thin needle … containing a lethal dose of a synthetically produced poison.” Once this occurs, the spider exits the house and promptly self-destructs, leaving no trace behind it.

This is a rather terrifying picture of the future that, however fantastical it may sound, is not implausible given current techno-developmental trends. The fact is that emerging technologies like synthetic biology and nanotechnology are becoming exponentially more powerful as well as more accessible to small groups and even single individuals. At the extreme, we could be headed toward a world in which a large portion of society, or perhaps everyone, has access to a “doomsday button” that could annihilate our species if pressed.

This is an unsettling thought given that there are hundreds of thousands of terrorists—according to one estimate—and roughly 4 percent of the population are sociopaths—meaning that there are approximately 296 million sociopaths in our midst today. The danger posed by such agents could become existential in the foreseeable future.

But what if deranged nutcases with nefarious intentions aren’t the most significant threat to humanity? An issue that rarely comes up in such conversations is the potentially greater danger posed by well-intentioned people with access to advanced technologies. In his erudite and alarming book Our Final Hour, Sir Martin Rees distinguishes between two types of agent-related risks: terror and error. The difference between these has nothing to do with the consequences—a catastrophe caused by error could be no less devastating than one caused by terror. Rather, what matters are the intentions behind the finger that pushes a doomsday button, causing spaceship Earth to explode.

There are reasons for thinking that error could actually constitute a greater threat than terror. First, let’s assume that science and technology become democratized such that most people on the planet have access to a doomsday button of some sort. Let’s say that the global population at this time is 10 billion people.

Second, note that the number of individuals who could pose an error threat will vastly exceed the number of individuals who would pose a terror threat. (In other words, the former is a superset of the latter.) On the one hand, every terrorist hell-bent on destroying the world could end up pushing the doomsday button by accident. Perhaps while attempting to create a designer pathogen that kills everyone not vaccinated against it, a terrorist inadvertently creates a virus that escapes the laboratory and is 100 percent lethal. The result is a global pandemic that snuffs out the human species.

On the other hand, any good-intentioned hobbyist with a biohacking laboratory could also accidentally create a new kind of lethal germ. History reveals numerous leaks from highly regulated laboratories—the 2009 swine flu epidemic that killed 12,000 between 2009 and 2010 was likely caused by a laboratory mistake in the late 1970s—so it’s not implausible to imagine someone in a largely unregulated environment mistakenly releasing a pathogenic bug.

In a world where nearly everyone has access to a doomsday button, exactly how long could it last? We can, in fact, quantify the danger here. Let’s begin by imagining a world in which all 10 billion people have (for the sake of argument) a doomsday button on their smartphone. This button could be pushed at any moment if one opens up the Doomsday App. Further imagine that of the 10 billion people who live in this world, not a single one has any desire to destroy it. Everyone wants the world to continue and humanity to flourish.

Now, how likely is this world to survive the century if each individual has a tiny chance of pressing the button? Crunching a few numbers, it turns out that doom would be all but guaranteed if each person had a negligible 0.000001 percent chance of error. The reason is that even though the likelihood of any one person causing total annihilation by accident is incredibly small, this probability adds up across the population. With 10 billion people, one should expect an existential catastrophe even if everyone is very, very, very careful not to press the button.

Consider an alternative scenario: imagine a world of 10 billion morally good people in which only 500 have the Doomsday App on their smartphone. This constitutes a mere 0.000005 percent of the total population. Imagine further that each of these individuals has an incredibly small 1 percent chance of pushing the button each decade. How long should civilization as a whole, with its 10 billion denizens, expect to survive? Crunching a few numbers again reveals that the probability of annihilation in the next 10 years would be a whopping 99 percent—that is, more or less certain.
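Both numbers are easy to verify; here is a sketch reproducing the arithmetic above.

```python
# Scenario 1: 10 billion people, each with a 0.000001% (1e-8) chance of
# accidentally pressing the button over the century.
n, p = 10_000_000_000, 1e-8
print((1 - p) ** n)      # ~3.7e-44: survival is essentially impossible

# Scenario 2: only 500 people have the app, each with a 1% chance per decade.
n, p = 500, 0.01
print(1 - (1 - p) ** n)  # ~0.993: the "whopping 99 percent" chance of doom
```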

The staggering danger of this situation stems from the two trends mentioned above: the growing power and accessibility of technology. A world in which fanatics want to blow everything up would be extremely dangerous if “weapons of total destruction” were to become widespread. But even if future people are perfectly compassionate—perhaps because of moral bioenhancements or what Steven Pinker calls the “moral Flynn effect”—the fact of human fallibility will make survival for centuries or decades highly uncertain. As Rees puts this point:

If there were millions of independent fingers on the button of a Doomsday machine, then one person’s act of irrationality, or even one person’s error, could do us all in. … Disastrous accidents (for instance, the unintended creation or release of a noxious fast-spreading pathogen, or a devastating software error) are possible even in well-regulated institutions. As the threats become graver, and the possible perpetrators more numerous, disruption may become so pervasive that society corrodes and regresses. There is a longer-term risk even to humanity itself.

As scholars have noted, “an elementary consequence of probability theory [is] that even very improbable outcomes are very likely to happen, if we wait long enough.” The exact same goes for improbable events that could be caused by a sufficiently large number of individuals—not across time, but across space.

Could this situation be avoided? Maybe. For example, perhaps engineers could design future technologies with safety mechanisms that prevent accidents from causing widespread harm—although this may turn out to be more difficult than it seems. Or, as Ray Kurzweil suggests, we could build a high-tech nano-immune system to detect and destroy self-replicating nanobots released into the biosphere (a doomsday scenario known as “grey goo”).

Another possibility, advocated by Ingmar Persson and Julian Savulescu, entails making society just a little less “liberal” by trading personal privacy for global security. While many people may, at first glance, be resistant to this proposal—after all, privacy seems like a moral right of all humans—if the alternative is annihilation, then this trade-off might be worth the sacrifice. Or perhaps we could adopt the notion of sousveillance, whereby citizens themselves monitor society through the use of wearable cameras and other apparatuses. In other words, the surveillees (those being watched) could use advanced technologies to surveil the surveillers (those doing the watching)—a kind of “inverse panopticon” to protect people from the misuse and abuse of state power.

While terror gets the majority of attention from scholars and the media, we should all be thinking more about the existential dangers inherent in the society-wide distribution of offensive capabilities involving advanced technologies. There’s a frighteningly good chance that future civilization will be more susceptible to error than terror.

(Parts of this are excerpted from my forthcoming book Morality, Foresight, and Human Flourishing: An Introduction to Existential Risks.)

[Link] Why election models didn't predict Trump's victory — A primer on how polls and election models work

0 phl43 28 January 2017 07:51PM

AI Safety reading group

8 SoerenE 28 January 2017 12:07PM

I am hosting a weekly AI Safety reading group, and perhaps someone here would be interested in joining.

Here is what the reading group has covered so far:
http://airca.dk/reading_group.htm

Next week, on Wednesday the 1st of February 19:45 UTC, we will discuss "How Feasible is the Rapid Development of Artificial Superintelligence?" by Kaj Sotala. I publish some slides before each meeting and present the article, so you can also join if you have not read it.

To join, add me on Skype ("soeren.elverlin"). General coordination happens on a Facebook group, at 

You can see the time in your local timezone here:

Facets of Problems v0.1

3 whpearson 28 January 2017 12:00PM

Problems can be decomposed into parts that are shared among different problems; I'll call these parts facets (for want of a better word; if there is existing terminology I'm ignorant of, let me know). Each facet fundamentally affects how you approach a problem, by changing the class of problem being solved. Facets can be seen as parts of paradigms extended into everyday life.

For example, when trying to find a path given only a map, you may use something like the A* algorithm. But if you also have an oracle that tells you the optimal path runs through certain points, you can use that information to decompose the problem into finding the shortest paths between those points. Having that oracle is a facet of the problem. Another facet might be knowing of an automated doorway on a shortcut that is open and closed at different times. You no longer have a fixed map, so the A* algorithm alone is not appropriate: you'll have to represent the doorway probabilistically, or try to figure out the pattern of its opening so you can predict exactly when it is open.
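
To make the oracle example concrete, here is a toy sketch (my own illustration, not from the post; the graph format and function names are assumptions). It uses Dijkstra's algorithm, which is A* with a zero heuristic, to keep the code short; the oracle's waypoints let you chain several small searches instead of running one big one:

```python
import heapq

def dijkstra(graph, start, goal):
    """Cheapest path cost in a weighted graph: {node: [(neighbor, edge_cost), ...]}."""
    frontier = [(0, start)]   # priority queue of (cost-so-far, node)
    best = {start: 0}         # cheapest known cost to reach each node
    while frontier:
        cost, node = heapq.heappop(frontier)
        if node == goal:
            return cost
        if cost > best.get(node, float("inf")):
            continue          # stale queue entry; a cheaper route was already found
        for neighbor, step in graph.get(node, []):
            new_cost = cost + step
            if new_cost < best.get(neighbor, float("inf")):
                best[neighbor] = new_cost
                heapq.heappush(frontier, (new_cost, neighbor))
    return float("inf")       # goal unreachable

def cost_via_waypoints(graph, start, goal, waypoints):
    """Exploit the oracle facet: solve small searches between consecutive stops."""
    stops = [start, *waypoints, goal]
    return sum(dijkstra(graph, a, b) for a, b in zip(stops, stops[1:]))
```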

There are a number of distinct ways that facets can impact your problem solving. This can be because:

  • a facet suggests new sources of information to solve a problem (Epistemic)
  • a facet constrains the problem in a hard-to-discover way that makes it easier to solve (Constraints)
  • a facet makes the problem harder to solve, but makes it more likely that a good solution will be found if the facet holds (Inconveniences)
  • a facet means you have to manipulate your other facets (Meta)

A problem can have many facets, and they interact in non-trivial ways. Having the wrong facets can be very bad, since they form the system's inductive bias.

I think facets can impact different things:

  1. How we approach the world ourselves (Are we making use of all the facets that we can? How do the facets we are exploiting interfere? Do we have damaging facets?).
  2. How we design systems that interact with the world. Enumerating the facets of a problem is the first step to trying to solve it.

Epistemic status: Someone has probably thought of this stuff before. Hoping to find it. If they haven't, and people find it useful, I'll do a second version.


[Link] Decision Theory and the Irrelevance of Impossible Outcomes

2 wallowinmaya 28 January 2017 10:16AM

Emergency learning

9 Stuart_Armstrong 28 January 2017 10:05AM

Crossposted at the Intelligent Agent Foundation Forum.

Suppose that we knew superintelligent AI would be developed within six months: what would I do?

Well, drinking coffee by the barrel at MIRI's emergency research retreat, I'd... still probably spend a month looking at things from the meta level and clarifying old ideas. But, assuming that didn't reveal any new approaches, I'd try to get something like this working.


[Link] Split Brain Does Not Lead to Split Consciousness

6 ChristianKl 28 January 2017 08:58AM

[Link] Performance Trends in AI

9 sarahconstantin 28 January 2017 08:36AM

[Link] Nothing is Forbidden, but Some Things are Good

0 gworley 27 January 2017 11:53PM

[Link] Did slavery make the US an economic superpower and would the industrial revolution have happened without it?

4 phl43 27 January 2017 09:01PM

[Link] The Maze of Moral Relativism

2 Stabilizer 27 January 2017 07:29PM

New LW Meetup: Bogota

1 FrankAdamek 27 January 2017 06:55PM

This summary was posted to LW Main on January 27th. The following week's summary is here.

New meetups (or meetups with a hiatus of more than a year) are happening in:

Irregularly scheduled Less Wrong meetups are taking place in:

The following meetups take place in cities with regular scheduling, but involve a change in time or location, special meeting content, or simply a helpful reminder about the meetup:

Locations with regularly scheduled meetups: Ann Arbor, Austin, Baltimore, Berlin, Boston, Brussels, Buffalo, Canberra, Columbus, Denver, Kraków, London, Madison WI, Melbourne, Moscow, Netherlands, New Hampshire, New York, Philadelphia, Prague, Research Triangle NC, San Francisco Bay Area, Seattle, St. Petersburg, Sydney, Tel Aviv, Toronto, Vienna, Washington DC, and West Los Angeles. There's also a 24/7 online study hall for coworking LWers and a Slack channel for daily discussion and online meetups on Sunday night US time.


2017: An Actual Plan to Actually Improve

17 helldalgo 27 January 2017 06:42PM

[Epistemic status: mostly confident, but being this intentional is experimental]

This year, I'm focusing on two traits: resilience and conscientiousness.  I think these (or the fact that I lack them) are my biggest barriers to success.  Also: identifying them as goals for 2017 doesn't mean I'll stop developing them in 2018.  A year is just a nice, established amount of time in which progress can actually be made.  This plan is a more intentional version of techniques I've used to improve myself over the last few years.  I have outside verification that I'm more responsible, high-functioning, and resilient than I was several years ago.  I have managed to reduce my SSRI dose, and I have finished more important tasks this year than last year.  

Inspiring blog posts and articles can only do so much for personal development.  The most valuable writing in that genre tends to outline actual steps that (the author believes) generate positive results.  Unfortunately, finding those steps is a fairly personal process.  The song that gives me twenty minutes of motivation and the drug that helps me overcome anxiety might do the opposite for you.  Even though I'm including detailed steps in this plan, you should keep that in mind.  I hope that this post can give you a template for troubleshooting and discovering your own bottlenecks.

I.  

First, I want to talk about my criteria for success.  Without illustrating the end result, or figuring out how to measure it, I could finish out the year with a false belief that I'd made progress.  If you plan something without success criteria, you run the same risk. I also believe that most of the criteria should be observable by a third party, i.e. hard to fake. 

  1. I respond to disruptions in my plans with distress and anger.  While I've gotten better at calming down, the distress still happens. I would like to have emotional control such that I observe first, and then feel my feelings.  Disruptions should incite curiosity, and a calm evaluation of whether to correct course.  The observable bit is whether or not my husband and friends report that I seem less upset when they disrupt me.  This process is already taking place; I've been practicing this skill for a long time and I expect to continue seeing progress.  (resilience)
  2. If an important task takes very little time, doesn't require a lot of effort, and doesn't disrupt a more important process, I will do it immediately. The observable part is simple, here: are the dishes getting done? Did the trash go out on Wednesday?  (conscientiousness)
  3. I will do (2) without "taking damage."  I will use visualization of the end result to make my initial discomfort less significant.  (resilience) 
  4. I will use various things like audiobooks, music, and playfulness to make what can be made pleasant, pleasant.  (resilience and conscientiousness)
  5. My instinct when encountering hard problems will be to dissolve them into smaller pieces and identify the success criteria, immediately, before I start trying to generate solutions. I can verify that I'm doing this by doing hard problems in front of people, and occasionally asking them to describe my process as it appears.  
  6. I will focus on the satisfaction of doing hard things, and practice sitting in discomfort regularly (cold tolerance, calming myself around angry people, the pursuit of fitness, meditation).  It's hard to identify an external sign that this is accomplished.  I expect aversion-to-starting to become less common, and my spouse can probably identify that.  (conscientiousness)
  7. I will keep a daily journal of what I've accomplished, and carry a notebook to make reflective writing easy and convenient.  This will help keep me honest about my past self.  (conscientiousness) 
  8. By the end of the year, I will find myself and my close friends/family satisfied with my growth.  I will have a record of finishing several important tasks, will be more physically fit than I am now, and will look forward to learning difficult things.
One benefit of some of these is that practice and success are the same. I can experience the satisfaction of any piece of my practice done well; it will count as being partly successful.

II.

I've taken the last few years to identify these known bottlenecks and reinforcing actions.  Doing one tends to make another easier, and neglecting them keeps harder things unattainable.  These are the most important habits to establish early.  

  1. Meditation for 10 minutes a day directly improves my resilience and lowers my anxiety.
  2. Medication shouldn't be skipped (an SSRI, DHEA, and methylphenidate). If I decide to go off of it, I should properly taper rather than quitting cold turkey.  DHEA counteracts the negatives of my hormonal birth control and (seems to!) make me more positively aggressive and confident.
  3. Fitness (in the form of dance, martial arts, and lifting) keeps my back from hurting, gives me satisfaction, and has a number of associated cognitive benefits.  Dancing and martial arts also function as socialization, in a way that leads to group intimacy faster than most of my other hobbies.  Being fit and attractive helps me maintain a high libido.  
  4. I need between 7 and 9 hours of sleep.  I've tried getting around it.  I can't.  Getting enough sleep is a well-documented process, so I'm not going to outline my process here.
  5. Water.  Obviously.
  6. Since overcoming most of my social anxiety, I've discovered that frequent, high-value socialization is critical to avoid depression.  I try to regularly engage in activities that bootstrap intimacy, like the dressing room before performances, solving a hard problem with someone, and going to conventions.  I need several days a week to include long conversations with people I like.  
Unknown bottlenecks can be found by identifying a negative result and tracing the chain of events backwards until you find a common denominator. Sometimes they can also be spotted by people who interact with you a lot.

III.  

My personal "toolkit" is a list of things that give me temporary motivation or rapidly deescalate negative emotions.  

  1. Kratom (<7g) does wonders for my anxieties about starting a task.  I try not to take it too often, since I don't want to develop tolerance, but I like to keep some on hand for this.
  2. Nicotine plus caffeine/l-theanine capsules give me an hour of motivation without jitters. Tolerance to this also builds rapidly, so I don't do it often.
  3. A 30-second mindfulness meditation can usually calm my first emotional response to a distressing event.
  4. Various posts on mindingourway.com can help reconnect me to my values when I'm feeling particularly demotivated.  
  5. Reorganizing furniture makes me feel less "stuck" when I get restless.  Ditto for doing a difficult thing in a different place.
  6. Google Calendar, a number of notebooks, and a whiteboard keep me from forgetting important tasks.
  7. Josh Waitzkin's book, The Art of Learning, remotivates me to achieve mastery in various hobbies.
  8. External prompting from other people can make me start a task I've been avoiding. Sometimes I have people aggressively yell at me.
  9. The LW study hall (Complice.co) helps keep me focused. I also do "pomos" over video with other people who don't like Complice.
IV.

This outline is the culmination of a few years of troubleshooting, getting feedback, and looking for invented narratives or dishonesty in my approach.  Personal development doesn't happen quickly for me, and I expect it doesn't for most people.  You should expect significant improvements to be a matter of years, not months, unless you're improving the basics like sleep or fitness.  For those, you see massive initial gains that eventually level off.  

If you have any criticisms or see any red flags in my approach, let me know in the comments.

 

[Link] new study finds performance enhancing drugs for chess

4 morganism 27 January 2017 12:04AM
