Jokes Thread

25 JosephY 24 July 2014 12:31AM

This is a thread for rationality-related or LW-related jokes and humor. Please post jokes (new or old) in the comments.

------------------------------------

Q: Why are Chromebooks good Bayesians?

A: Because they frequently update!

------------------------------------

A super-intelligent AI walks out of a box...

------------------------------------

Q: Why did the psychopathic utilitarian push a fat man in front of a trolley?

A: Just for fun.

Recent updates to gwern.net (2013-2014)

26 gwern 08 July 2014 01:44AM

Previous: 2011, 2012-2013

“It cannot be gotten for gold, neither shall silver be weighed for the price thereof. / It cannot be valued with the gold of Ophir, with the precious onyx, nor the sapphire. / The gold and the crystal cannot equal it: and the exchange of it shall not be for vessels of fine gold. / No mention shall be made of coral, or of pearls: for the price of wisdom is above rubies.”

Another 477 days have passed, so what have I been up to? In roughly topical & chronological order, here are some major additions to gwern.net:

Statistics:

QS:

Black-markets:

Bitcoin:

Tech:

Literature/fiction:

Misc:

Site:

  • I began A/B testing my site design to try to improve readability:

    • no difference between 4 fonts
    • no difference between lineheights
    • no difference between the null hypothesis & the null hypothesis
    • a pure black/white foreground/background performed better than mixes of off-colors
    • font size 100-120%: default of 100% was best
    • blockquote formatting: Readability-style bad, zebra-stripes good
    • header capitalization: best result was to upcase title & all section headers
    • tested font size & number size & table of contents background: status quo of all was best
    • BeeLine Reader: no color variant performed better than no-highlighting
  • anonymous feedback analysis (feedback turned out to be useful)
  • deleted Flattr, trying out Gittip for donations; Gittip turns out to work much better
  • I began a newsletter/mailing-list; the back-issues are online:
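A/B comparisons like the ones listed above typically reduce to a difference in proportions between two variants (e.g. what fraction of visitors read to the end under each design). A minimal sketch of the kind of two-proportion z-test that could back such a comparison; the variant counts below are invented purely for illustration:

```python
from math import erf, sqrt

def two_proportion_ztest(success_a, n_a, success_b, n_b):
    """Two-sided z-test for a difference in proportions between two variants."""
    p_a, p_b = success_a / n_a, success_b / n_b
    pooled = (success_a + success_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Two-sided p-value: 2 * P(Z > |z|), using the standard normal CDF via erf.
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Hypothetical counts: readers who finished the page under two designs.
z, p = two_proportion_ztest(480, 1000, 450, 1000)
```

With many variants tested over a long period (fonts, line heights, colors), correcting for multiple comparisons matters too, or some "no difference" results will come up significant by chance.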

[moderator action] Eugine_Nier is now banned for mass downvote harassment

107 Kaj_Sotala 03 July 2014 12:04PM

As previously discussed, on June 6th I received a message from jackk, a Trike Admin. He reported that the user Jiro had asked Trike to carry out an investigation into the retributive downvoting that Jiro had been subjected to. The investigation revealed that the user Eugine_Nier had downvoted over half of Jiro's comments, amounting to hundreds of downvotes.

I asked for the community's guidance on dealing with the issue, and while the matter was being discussed, I also reviewed previous discussions about mass downvoting and looked for other people who mentioned being the victims of it. I asked Jack to compile reports on several other users who mentioned having been mass-downvoted, and it turned out that Eugine was also overwhelmingly the biggest downvoter of users David_Gerard, daenarys, falenas108, ialdabaoth, shminux, and Tenoke. While this discussion was going on, it also emerged that user Ander had been targeted by Eugine.

I sent two messages to Eugine, requesting an explanation. I received a response today. Eugine admitted his guilt, expressing the opinion that LW's karma system was failing to carry out its purpose of keeping out weak material and that he was engaged in a "weeding" of users who he did not think displayed sufficient rationality.

Needless to say, it is not the place of individual users to unilaterally decide that someone else should be "weeded" out of the community. The Less Wrong content deletion policy contains this clause:

Harassment of individual users.

If we determine that you're e.g. following a particular user around and leaving insulting comments to them, we reserve the right to delete those comments. (This has happened extremely rarely.)

Although the wording does not explicitly mention downvoting, harassment by downvoting is still harassment. Several users have indicated that they have experienced considerable emotional anguish from the harassment, and have in some cases been discouraged from using Less Wrong at all. This is not a desirable state of affairs, to say the least.

I was originally given my moderator powers on a rather ad-hoc basis, with someone awarding mod privileges to the ten users with the highest karma at the time. The original purpose for that appointment was just to delete spam. Nonetheless, since retributive downvoting has been a clear problem for the community, I asked the community for guidance on dealing with the issue. The rough consensus of the responses seemed to authorize me to deal with the problem as I deemed appropriate.

The fact that Eugine remained quiet about his guilt until directly confronted with the evidence, despite several public discussions of the issue, is indicative of him realizing that he was breaking prevailing social norms. Eugine's actions have worsened the atmosphere of this site, and that atmosphere will remain troubled for as long as he is allowed to remain here.

Therefore, I now announce that Eugine_Nier is permanently banned from posting on LessWrong. This decision is final and will not be changed in response to possible follow-up objections.

Unfortunately, it looks like while a ban prevents posting, it does not actually block a user from casting votes. I have asked jackk to look into the matter and find a way to actually stop the downvoting. Jack indicated earlier on that it would be technically straightforward to apply a negative karma modifier to Eugine's account, and wiping out Eugine's karma balance would prevent him from casting future downvotes. Whatever the easiest solution is, it will be applied as soon as possible.

EDIT 24 July 2014: Banned users are now prohibited from voting.

Downvote stalkers: Driving members away from the LessWrong community?

39 Ander 02 July 2014 12:40AM

Last month I saw this post: http://lesswrong.com/lw/kbc/meta_the_decline_of_discussion_now_with_charts/ addressing whether the discussion on LessWrong was in decline.  As a relatively new user who had only just started to post comments, my reaction was: “I hope that LessWrong isn’t in decline, because the sequences are amazing, and I really like this community.  I should try to write a couple articles myself and post them!  Maybe I could do an analysis/summary of certain sequences posts, and discuss how they had helped me to change my mind”.   I started working on writing an article.

Then I logged into LessWrong and saw that my Karma value was roughly half of what it had been the day before.   Previously I hadn’t really cared much about Karma, aside from whatever micro-utilons of happiness it provided to see that the number slowly grew because people generally liked my comments.   Or at least, I thought I didn’t really care, until my lizard brain reflexes reacted to what it perceived as an assault on my person.


Had I posted something terrible and unpopular that had been massively downvoted during the several days since my previous login?  No, in fact my ‘past 30 days’ Karma was still positive.  Rather, it appeared that everything I had ever posted to LessWrong now had a -1 on it instead of a 0. Of course, my loss probably pales in comparison to that of other, more prolific posters who I have seen report this behavior.

So what controversial subject must I have commented on in order to trigger this assault?  Well, let’s see: in the past week I had asked if anyone had any opinions on good software engineer interview questions I could ask a candidate.  I posted in http://lesswrong.com/lw/kex/happiness_and_children/ that I was happy to not have children, and finally, in what appears to me to be by far the most promising candidate: http://lesswrong.com/r/discussion/lw/keu/separating_the_roles_of_theory_and_direct/ I replied to a comment about global warming data, stating that I routinely saw headlines about data supporting global warming.


Here is our scenario: A new user, attempting to participate on a message board that values empiricism and rationality, posted that the evidence supports climate change being real.  (Wow, really rocking the boat here!)  Then, apparently in an effort to ‘win’ this discussion by silencing opposition, someone went and downvoted every comment this user had ever made on the site.  Apparently they would like to see LessWrong be a bastion of empiricism and rationality and *climate change denial* instead? And the way to achieve this is not to have a fair and rational discussion of the existing empirical data, but rather to simply Karmassassinate anyone who would oppose them?


Here is my hypothesis: The continuing problem of karma downvote stalkers is contributing to the decline of discussion on the site.  I definitely feel much less motivated to try and contribute anything now, and I have been told by multiple other people at LessWrong meetings things such as “I used to post a lot on LessWrong, but then I posted X, and got mass downvoted, so now I only comment on Yvain’s blog”.  These anecdotes are, of course, only very weak evidence for my claim.  I wish I could provide more, but I will have to defer to any readers who can supply stronger evidence.


Perhaps this post will simply trigger more retribution, or maybe it will trigger an outpouring of support, or perhaps it will just be dismissed by people saying I should’ve posted it to the weekly discussion thread instead.  Whatever the outcome, rather than meekly leaving LessWrong and letting my 'stalker' win, I decided to open a discussion about the issue.  Thank you!

Separating the roles of theory and direct empirical evidence in belief formation: the examples of minimum wage and anthropogenic global warming

24 VipulNaik 25 June 2014 09:47PM

I recently asked two questions on Quora with similar question structures, and the similarities and differences between the responses were interesting.

Question #1: Anthropogenic global warming, the greenhouse effect, and the historical weather record

I asked the question here. Question statement:

If you believe in Anthropogenic Global Warming (AGW), to what extent is your belief informed by the theory of the greenhouse effect, and to what extent is it informed by the historical temperature record?

In response to some comments, I added the following question details:

Due to length limitations, the main question is a bit simplistically framed. But what I'm really asking for is the relative importance of theoretical mechanisms and direct empirical evidence. Theoretical mechanisms are of course also empirically validated, but the empirical validation could occur in different settings.

For instance, the greenhouse effect is a mechanism, and one may get estimates of the strength of the greenhouse effect based on an understanding of the underlying physics or by doing laboratory experiments or simulations.

Direct empirical evidence is evidence that is as close to the situation we are trying to predict as possible. In this case, it would involve looking at the historical records of temperature and carbon dioxide concentrations, and perhaps some other confounding variables whose role needs to be controlled for (such as solar activity).

Saying that your belief is largely grounded in direct empirical evidence is basically saying that just looking at the time series of temperature, carbon dioxide concentrations and the other variables can allow one to say with fairly high confidence (starting from very weak priors) that increased carbon dioxide concentrations, due to human activity, are responsible for temperature increases. In other words, if you ran a regression and tried to do the usual tricks to infer causality, carbon dioxide would come out as the culprit.
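As a rough illustration of what "running a regression" on such time series might look like, here is a sketch with synthetic, invented data (not real climate measurements); ordinary least squares like this only estimates association, and the "usual tricks" for inferring causality would require considerably more care:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for the series discussed: CO2 concentration,
# solar activity (a potential confound), and temperature anomaly.
n = 100
co2 = np.linspace(315, 400, n) + rng.normal(0, 2, n)
solar = rng.normal(0, 1, n)
temp = 0.01 * co2 + 0.05 * solar + rng.normal(0, 0.1, n)

# Ordinary least squares: regress temperature on CO2 and solar activity.
X = np.column_stack([np.ones(n), co2, solar])
coefs, *_ = np.linalg.lstsq(X, temp, rcond=None)
intercept, beta_co2, beta_solar = coefs
```

Even a clean-looking coefficient on CO2 here only says the series co-move after controlling for the listed confounds; starting "from very weak priors," as the question puts it, one would still need to rule out omitted variables and reverse causation.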

Saying that your belief is largely grounded in theory is basically saying that the science of the greenhouse effect is sufficiently convincing that the historical temperature and weather record isn't an important factor in influencing your belief: if it had come out differently, you'd probably just have thought the data was noisy or wrong and wouldn't update away from believing in the AGW thesis.

I also posted to Facebook here asking my friends about the pushback to my use of the term "belief" in my question.

Question #2: Effect of increase in the minimum wage on unemployment

I asked the question here. Question statement:

If you believe that raising the minimum wage is likely to increase unemployment, to what extent is your belief informed by the theory of supply and demand and to what extent is it informed by direct empirical evidence?

I added the following question details:

By "direct empirical evidence" I am referring to empirical evidence that directly pertains to the relation between minimum wage raises and employment level changes, not empirical evidence that supports the theory of supply and demand in general (because transferring that to the minimum wage context would require one to believe the transferability of the theory).

Also, when I say "believe that raising the minimum wage is likely to increase unemployment" I am talking about minimum wage increases of the sort often considered in legislative measures, and by "likely" I just mean that it's something that should always be seriously considered whenever a proposal to raise the minimum wage is made. The belief would be consistent with believing that in some cases minimum wage raises have no employment effects.

I also posted the question to Facebook here.

Similarities between the questions

The questions are structurally similar, and belong to a general question type of considerable interest to the LessWrong audience. The common features of the questions:

  • In both cases, there is a theory (the greenhouse effect for Question #1, and supply and demand for Question #2) that is foundational to the domain and is supported through a wide range of lines of evidence.
  • In both cases, the quantitative specifics of the extent to which the theory applies in the particular context are not clear. There are prima facie plausible arguments that other factors may cancel out the effect and there are arguments for many different effect sizes.
  • In both cases, people who study the broad subject (climate scientists for Question #1, economists for Question #2) are more favorably disposed to the belief than people who do not study the broad subject.
  • In both cases, a significant part of the strength of belief of subject matter experts seems to be their belief in the theory. The data, while consistent with the theory, does not seem to paint a strong picture in isolation. For the minimum wage, consider the Card and Krueger study. Bryan Caplan discusses how Bayesian reasoning with strong theoretical priors can lead one to continue believing that minimum wage increases cause unemployment to rise, without addressing Card and Krueger at the object level. For the case of anthropogenic global warming, consider the draft by Kesten C. Green (addressing whether a warming-based forecast has higher forecast accuracy than a no-change forecast) or the paper AGW doesn't cointegrate by Beenstock, Reingewertz, and Paldor (addressing whether, looking at the data alone, we can get good evidence that carbon dioxide concentration increases are linked with temperature increases).
  • In both cases, outsiders to the domain, who nonetheless have expertise in other areas that one might expect gives them insight into the question, are often more skeptical of the belief. A number of weather forecasters, physicists, and forecasting experts are skeptical of long-range climate forecasting or confident assertions about anthropogenic global warming. A number of sociologists, lawyers, and politicians are often disparaging of the belief that minimum wage increases cause unemployment levels to rise. The criticism is similar in both cases: namely, that a basically correct theory is being overstretched or incorrectly applied to a situation that is too complex.
  • In both cases, the debate is somewhat politically charged, largely because one's beliefs here affect one's views of proposed legislation (climate change mitigation legislation and minimum wage increase legislation). The anthropogenic global warming belief is more commonly associated with environmentalists, social democrats, and progressives, and (in the United States) with Democrats, whereas opposition to it is more common among conservatives and libertarians. The minimum wage belief is more commonly associated with free market views and (in the United States) with conservatives and Republicans, and opposition to it is more common among progressives and social democrats.
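The Caplan-style reasoning with strong theoretical priors mentioned above can be sketched in odds form; the numbers below are invented purely to illustrate how a strong prior absorbs one moderately contrary study:

```python
def posterior_odds(prior_odds, likelihood_ratio):
    """Bayes' rule in odds form: posterior odds = prior odds * likelihood ratio."""
    return prior_odds * likelihood_ratio

# Invented numbers: a strong theoretical prior (19:1 odds, i.e. P = 0.95)
# that minimum wage hikes reduce employment, then a single study whose
# evidence favors the no-effect hypothesis by a modest factor of 3.
prior = 19.0
post = posterior_odds(prior, 1 / 3)
prob = post / (1 + post)  # convert odds back to a probability
```

Under these made-up numbers the belief only drops from 0.95 to about 0.86, which is one way to see how a theorist can acknowledge a study like Card and Krueger without addressing it at the object level.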

Looking for help

I'm interested in thoughts from the people here on these questions:

  • Thoughts on the specifics of Question #1 and Question #2.
  • Other possible questions in the same reference class (where a belief arises from a mix of theory and data, and the theory plays a fairly big role in driving the belief, while the data on its own is very ambiguous).
  • Other similarities between Question #1 and Question #2.
  • Ways that Question #1 and Question #2 are disanalogous.
  • General thoughts on how this relates to Bayesian reasoning and other modes of belief formation based on a combination of theory and data.


R support group and the benefits of applied statistics

16 sixes_and_sevens 26 June 2014 02:11PM

Following the interest in this proposal a couple of weeks ago, I've set up a Google Group for the purpose of giving people a venue to discuss R, talk about their projects, seek advice, share resources, and provide a social motivator to hone their skills. Having done this, I'd now like to bullet-point a few reasons for learning applied statistical skills in general, and R in particular:

The General Case:

- Statistics seems to be a subject where it's easy to delude yourself into thinking you know a lot about it. This is visibly apparent on Less Wrong. Although there are many subject experts on here, there are also a lot of people making bold pronouncements about Bayesian inference who wouldn't recognise a beta distribution if it sat on them. Don't be that person! It's hard to fool yourself into thinking you know something when you have to practically apply it.

- Whenever you think "I wonder what kind of relationship exists between [x] and [y]", it's within your power to investigate this.

- Statistics has a rich conceptual vocabulary for reasoning about how observations generalise, and how useful those generalisations might be when making inferences about future observations. These are the sorts of skills we want to be practising as aspiring rationalists.

- Scientific literature becomes a lot more readable when you appreciate the methods behind it. You'll have a much greater understanding of scientific findings if you appreciate what the finding means in the context of statistical inference, rather than going off whatever paraphrased upshot is given in the abstract.

- Statistical techniques make use of fundamental mathematical methods in an applicable way. If you're learning linear algebra, for example, and you want an intuitive understanding of eigenvectors, you could do a lot worse than learning about principal component analysis.
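For instance, principal component analysis is just an eigendecomposition of the data's covariance matrix: the eigenvectors are the principal components, and the eigenvalues are the variance captured along each. A minimal sketch in Python with NumPy, using synthetic correlated data:

```python
import numpy as np

rng = np.random.default_rng(1)

# Correlated 2-D data: most of the variance lies along one direction.
x = rng.normal(0, 1, 500)
data = np.column_stack([x, 2 * x + rng.normal(0, 0.1, 500)])

# PCA: eigenvectors of the covariance matrix are the principal components.
cov = np.cov(data, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)  # eigenvalues in ascending order
first_pc = eigvecs[:, -1]               # direction of greatest variance
```

The leading eigenvector should point along (1, 2) up to sign, matching how the data was generated, and its eigenvalue dwarfs the other one, which makes the "directions of variance" intuition for eigenvectors concrete.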

R in particular:

- It's non-proprietary (read: free). Many competing products are ridiculously expensive to license.

- Since it's common in academia, newer or more exotic statistical tools and procedures are more likely to have been implemented and made available in R than in proprietary statistical packages or other software libraries.

- R skills are a strong signal of technical competence that will distinguish you from SPSS mouse-jockeys.

- There are many out-of-the-box packages for carrying out statistical procedures that you'd probably have to cobble together yourself if you were working in Python or Java.

- Having said that, popular languages such as Python and Java have libraries for interfacing with R.

- There's a discussion / support group for R with Less Wrong users in it. :-)

New organization - Future of Life Institute (FLI)

44 Vika 14 June 2014 11:00PM

As of May 2014, there is an existential risk research and outreach organization based in the Boston area. The Future of Life Institute (FLI), spearheaded by Max Tegmark, was co-founded by Jaan Tallinn, Meia Chita-Tegmark, Anthony Aguirre and myself.

Our idea was to create a hub on the US East Coast to bring together people who care about x-risk and the future of life. FLI is currently run entirely by volunteers, and is based on brainstorming meetings where the members come together and discuss active and potential projects. The attendees are a mix of local scientists, researchers and rationalists, which results in a diversity of skills and ideas. We also hold more narrowly focused meetings where smaller groups work on specific projects. We have projects in the pipeline ranging from improving Wikipedia resources related to x-risk, to bringing together AI researchers in order to develop safety guidelines and make the topic of AI safety more mainstream.

Max has assembled an impressive advisory board that includes Stuart Russell, George Church and Stephen Hawking. The advisory board is not just for prestige - the local members attend our meetings, and some others participate in our projects remotely. We consider ourselves a sister organization to FHI, CSER and MIRI, and touch base with them often.

We recently held our launch event, a panel discussion "The Future of Technology: Benefits and Risks" at MIT. The panelists were synthetic biologist George Church, geneticist Ting Wu, economist Andrew McAfee, physicist and Nobel laureate Frank Wilczek and Skype co-founder Jaan Tallinn. The discussion covered a broad range of topics from the future of bioengineering and personal genetics, to autonomous weapons, AI ethics and the Singularity. A video and transcript are available.

FLI is a grassroots organization that thrives on contributions from awesome people like the LW community - here are some ways you can help:

  • If you have ideas for research or outreach we could be doing, or improvements to what we're already doing, please let us know (in the comments to this post, or by contacting me directly).
  • If you are in the vicinity of the Boston area and are interested in getting involved, you are especially encouraged to get in touch with us!
  • Support in the form of donations is much appreciated. (We are grateful for seed funding provided by Jaan Tallinn and Matt Wage.)
More details on the ideas behind FLI can be found in this article.

Willpower Depletion vs Willpower Distraction

66 Academian 15 June 2014 06:29PM

I once asked a room full of about 100 neuroscientists whether willpower depletion was a thing, and there was widespread disagreement with the idea. (Apropos, this is a great way to quickly gauge consensus in a field.) Basically, for a while some researchers believed that willpower depletion "is" glucose depletion in the prefrontal cortex, but some more recent experiments have failed to replicate this, e.g. by finding that the mere taste of sugar is enough to "replenish" willpower faster than the time it takes blood to move from the mouth to the brain:

Carbohydrate mouth-rinses activate dopaminergic pathways in the striatum–a region of the brain associated with responses to reward (Kringelbach, 2004)–whereas artificially-sweetened non-carbohydrate mouth-rinses do not (Chambers et al., 2009). Thus, the sensing of carbohydrates in the mouth appears to signal the possibility of reward (i.e., the future availability of additional energy), which could motivate rather than fuel physical effort.

-- Molden, D. C., et al., "The Motivational versus Metabolic Effects of Carbohydrates on Self-Control." Psychological Science.

Stanford's Carol Dweck and Greg Walton even found that hinting to people that using willpower is energizing might actually make them less depletable:

When we had people read statements that reminded them of the power of willpower like, “Sometimes, working on a strenuous mental task can make you feel energized for further challenging activities,” they kept on working and performing well with no sign of depletion. They made half as many mistakes on a difficult cognitive task as people who read statements about limited willpower. In another study, they scored 15 percent better on I.Q. problems.

-- Dweck and Walton, Willpower: It’s in Your Head? New York Times.

While these are all interesting empirical findings, there’s a very similar phenomenon that’s much less debated and which could explain many of these observations, but which I think gets too little popular attention in these discussions:

Willpower is distractible.

Indeed, willpower and working memory are both strongly mediated by the dorsolateral prefrontal cortex, so “distraction” could just be the two functions funging against one another. To use the terms of Stanovich popularized by Kahneman in Thinking: Fast and Slow, "System 2" can only override so many "System 1" defaults at any given moment.

So what’s going on when people say "willpower depletion"? I’m not sure, but even if willpower depletion is not a thing, the following distracting phenomena clearly are:

  • Thirst
  • Hunger
  • Sleepiness
  • Physical fatigue (like from running)
  • Physical discomfort (like from sitting)
  • That specific-other-thing you want to do
  • Anxiety about willpower depletion
  • Indignation at being asked for too much by bosses, partners, or experimenters...

... and "willpower depletion" might be nothing more than mental distraction by one of these processes. Perhaps it really is better to think of willpower as power (a rate) than as energy (a resource).

If that’s true, then figuring out what processes might be distracting us might be much more useful than saying “I’m out of willpower” and giving up. Maybe try having a sip of water or a bit of food if your diet permits it. Maybe try reading lying down to see if you get nap-ish. Maybe set a timer to remind you to call that friend you keep thinking about.

The last two bullets,

  • Anxiety about willpower depletion
  • Indignation at being asked for too much by bosses, partners, or experimenters...

are also enough to explain why being told willpower depletion isn’t a thing might reduce the effects typically attributed to it: we might simply be less distracted by anxiety or indignation about doing “too much” willpower-intensive work in a short period of time.

Of course, any speculation about how human minds work in general is prone to the "typical mind fallacy". Maybe my willpower is depletable and yours isn’t. But then that wouldn’t explain why you can cause people to exhibit less willpower depletion by suggesting otherwise. But then again, most published research findings are false. But then again the research on the DLPFC and working memory seems relatively old and well established, and distraction is clearly a thing...

All in all, more of my chips are falling on the hypothesis that willpower “depletion” is often just willpower distraction, and that finding and addressing those distractions is probably a better strategy than avoiding activities altogether in order to "conserve willpower".

On Terminal Goals and Virtue Ethics

67 Swimmer963 18 June 2014 04:00AM

Introduction

A few months ago, my friend said the following thing to me: “After seeing Divergent, I finally understand virtue ethics. The main character is a cross between Aristotle and you.”

That was an impossible-to-resist pitch, and I saw the movie. The thing that resonated most with me–also the thing that my friend thought I had in common with the main character–was the idea that you could make a particular decision, and set yourself down a particular course of action, in order to make yourself become a particular kind of person. Tris didn’t join the Dauntless faction because she thought they were doing the most good in society, or because she thought her comparative advantage to do good lay there–she chose it because they were brave, and she wasn’t, yet, and she wanted to be. Bravery was a virtue that she thought she ought to have. If the graph of her motivations even went any deeper, the only node beyond ‘become brave’ was ‘become good.’

(Tris did have a concept of some future world-outcomes being better than others, and wanting to have an effect on the world. But that wasn't the causal reason why she chose Dauntless; as far as I can tell, it was unrelated.)

My twelve-year-old self had a similar attitude. I read a lot of fiction, and stories had heroes, and I wanted to be like them–and that meant acquiring the right skills and the right traits. I knew I was terrible at reacting under pressure–that in the case of an earthquake or other natural disaster, I would freeze up and not be useful at all. Being good at reacting under pressure was an important trait for a hero to have. I could be sad that I didn’t have it, or I could decide to acquire it by doing the things that scared me over and over and over again. So that someday, when the world tried to throw bad things at my friends and family, I’d be ready.

You could call that an awfully passive way to look at things. It reveals a deep-seated belief that I’m not in control, that the world is big and complicated and beyond my ability to understand and predict, much less steer–that I am not the locus of control. But this way of thinking is an algorithm. It will almost always spit out an answer, when otherwise I might get stuck in the complexity and unpredictability of trying to make a particular outcome happen.


Virtue Ethics

I find the different houses of the HPMOR universe to be a very compelling metaphor. It’s not because they suggest actions to take; instead, they suggest virtues to focus on, so that when a particular situation comes up, you can act ‘in character.’ Courage and bravery for Gryffindor, for example. It also suggests the idea that different people can focus on different virtues–diversity is a useful thing to have in the world. (I'm probably mangling the concept of virtue ethics here, not having any background in philosophy, but it's the closest term for the thing I mean.)

I’ve thought a lot about the virtue of loyalty. In the past, loyalty has kept me with jobs and friends that, from an objective perspective, might not seem like the optimal things to spend my time on. But the costs of quitting and finding a new job, or cutting off friendships, wouldn’t just have been about direct consequences in the world, like needing to spend a bunch of time handing out resumes or having an unpleasant conversation. There would also be a shift within myself, a weakening in the drive towards loyalty. It wasn’t that I thought everyone ought to be extremely loyal–it’s a virtue with obvious downsides and failure modes. But it was a virtue that I wanted, partly because it seemed undervalued. 

By calling myself a ‘loyal person’, I can aim myself in a particular direction without having to understand all the subcomponents of the world. More importantly, I can make decisions even when I’m rushed, or tired, or under cognitive strain that makes it hard to calculate through all of the consequences of a particular action.


Terminal Goals

The Less Wrong/CFAR/rationalist community puts a lot of emphasis on a different way of trying to be a hero–where you start from a terminal goal, like “saving the world”, and break it into subgoals, and do whatever it takes to accomplish it. In the past I’ve thought of myself as being mostly consequentialist, in terms of morality, and this is a very consequentialist way to think about being a good person. And it doesn't feel like it would work. 

There are some bad reasons why it might feel wrong–i.e. that it feels arrogant to think you can accomplish something that big–but I think the main reason is that it feels fake. There is strong social pressure in the CFAR/Less Wrong community to claim that you have terminal goals, that you’re working towards something big. My System 2 understands terminal goals and consequentialism, as a thing that other people do–I could talk about my terminal goals, and get the points, and fit in, but I’d be lying about my thoughts. My model of my mind would be incorrect, and that would have consequences on, for example, whether my plans actually worked.

 

Practicing the art of rationality

Recently, Anna Salamon brought up a question with the other CFAR staff: “What is the thing that’s wrong with your own practice of the art of rationality?” The terminal goals thing was what I thought of immediately–namely, the conversations I’ve had over the past two years, where other rationalists have asked me “so what are your terminal goals/values?” and I’ve stammered something and then gone to hide in a corner to try to come up with some.

In Alicorn’s Luminosity, Bella says about her thoughts that “they were liable to morph into versions of themselves that were more idealized, more consistent - and not what they were originally, and therefore false. Or they'd be forgotten altogether, which was even worse (those thoughts were mine, and I wanted them).”

I want to know true things about myself. I also want to impress my friends by having the traits that they think are cool, but not at the price of faking it–my brain screams that pretending to be something other than what you are isn’t virtuous. When my immediate response to someone asking me about my terminal goals is “but brains don’t work that way!” it may not be a true statement about all brains, but it’s a true statement about my brain. My motivational system is wired in a certain way. I could think it was broken; I could let my friends convince me that I needed to change, and try to shoehorn my brain into a different shape; or I could accept that it works, that I get things done and people find me useful to have around and this is how I am. For now. I'm not going to rule out future attempts to hack my brain, because Growth Mindset, and maybe some other reasons will convince me that it's important enough, but if I do it, it'll be on my terms. Other people are welcome to have their terminal goals and existential struggles. I’m okay the way I am–I have an algorithm to follow.

 

Why write this post?

It would be an awfully surprising coincidence if mine was the only brain that worked this way. I’m not a special snowflake. And other people who interact with the Less Wrong community might not deal with it the way I do. They might try to twist their brains into the ‘right’ shape, and break their motivational system. Or they might decide that rationality is stupid and walk away.

Identification of Force Multipliers for Success

17 Nick5a1 21 June 2014 05:15AM

For a while now I've been very interested in learning useful knowledge and acquiring useful skills. Of course there's no shortage of useful knowledge and skills to acquire, and so I've often thought about how best to spend my limited time learning.

When I came across the concept of Force Multiplication, it seemed like an apt metaphor for a strategy for choosing where to invest my time and energy in acquiring useful skills and knowledge. I started to think about which areas or skills would make sense to learn about or acquire first, in order to:

  1. increase speed or ease of further learning/skill acquisition,
  2. help me achieve success not only in my current goals, but in later goals that I have not yet developed, and
  3. lead to interesting downstream options or other knowledge/skills to acquire.

There have been a small number of skills/areas that have helped me surge forward in progress towards my goals. I look back at these areas and wish only that I had come across them sooner. As most of my adult life has been focused on business, most of those areas that have had a tremendous impact on my progress have been business related, but not all.

So far I've found it hard to identify these areas in advance. Almost all of the skills or knowledge that had a large impact on my progress towards success I pursued for unrelated reasons, or with no concept of how truly useful they would be. The only solution I currently have for identifying force multipliers is to ask other people, especially those more accomplished than me, what they've learned that had the most impact on their progress towards success.

So, what have you learned that had the most impact on your progress towards success (whatever that might be)?

Can you think of any other ways to identify areas of force multiplication?
