Less Wrong is a community blog devoted to refining the art of human rationality. Please visit our About page for more information.

LW Migration Announcement

2 Vaniver 22 March 2018 02:17AM

The votes are in, and of the 376 eligible voters, 102 voted to migrate and 15 voted to archive. With 87% in favor, we’re going ahead with the transition, which will begin tomorrow (3/22) at 6pm Pacific time. Trike will take a snapshot of the database, transfer it to us, and we’ll begin the database import of new material since the last import. At 7pm, DNS will point to our server instead of Trike’s, so people visiting lesswrong.com will see the new site. The import will run in parallel with the site operating, so there won’t be any downtime of the new site.


Some things to note:

  1. Old links will continue to work, redirecting to the right new URL. We’ll be watching the analytics to notice any failures, and you can also alert us through Intercom (which we’ll shut off a few days into the transition, as problems get fixed).
  2. Lesserwrong.com links will redirect to the same page at lesswrong.com.
  3. We’ll add old karma scores to new karma scores, and may adjust the vote weight algorithm accordingly.
  4. Please report issues here.


Also, for those of you near Berkeley, we’ll be throwing a launch party on April 7th at 7pm (LW, Facebook).

Leaving beta: Voting on moving to LessWrong.com

6 Vaniver 11 March 2018 11:40PM

It took longer than we hoped, but LessWrong 2.0 is finally ready to come out of beta. As discussed in the original announcement, we’re going to have a vote on whether or not to migrate the new site to the lesswrong.com URL. The vote will be open to people who had 1,000 or more LW karma at the time we announced the vote back in September, and they’ll receive a link by email or private message on the current LessWrong.com. If you had above 1000 karma in September and did not receive an email or PM, send an email to habryka@lesserwrong.com and we will send you the form link.

We take rationalist virtues seriously, and I think it’s important that the community actually be able to look at the new implementation and vision and be able to say “no thanks.” If over half of the votes are to not migrate, the migration will not happen and we’ll figure out how we want to move forward with the website we’ve built.

Unfortunately, the alternative option for what will happen with the lesswrong.com URL is not great. Before I got involved, the dominant plan was to replace it with a static HTML site, which would require minimal maintenance while preserving the value of old Sequences articles. So in the absence of another team putting forward heroic efforts and coordinating with Trike, MIRI, etc., that is the world we would be moving towards.

Why not just keep things as they are? At the time, it was the consensus among old regulars that LW felt like an abandoned ghost town. A major concern about keeping it alive for the people still using it was that newcomers would read Sequences articles linked from elsewhere, check out the recent discussion and find it disappointing, and then bounce off of LW. This reduced its value for bringing people into the community.

More recently, various security concerns have made it a worse option to just keep old websites running – Trike has run into some issues where updating the server and antiquated codebase to handle security patches proved difficult, and they would prefer to no longer be responsible for maintaining the old website.

In case you’re just tuning in now, some basic details: I’ve been posting on LW for a long time, and about two years ago thought I was the person who cared most about making sure LW stayed alive, so I decided to put effort into making sure that happened. But while I have some skills as a writer and a programmer, I’m not a webdev and not great at project management, and so things have been rather slow. My current role is mostly in being something like the ‘senior rationalist’ on the team, and supporting the team with my models of what should happen and why. The actual work is being done by a combination of Oliver Habryka, Raymond Arnold, and Ben Pace, and their contributions are why we finally have a site that’s ready to come out of beta.

You can read more about our vision for the new LessWrong here.

March 2018 Media Thread

1 ArisKatsaris 02 March 2018 01:06AM

This is the monthly thread for posting media of various types that you've found that you enjoy. Post what you're reading, listening to, watching, and your opinion of it. Post recommendations to blogs. Post whatever media you feel like discussing! To see previous recommendations, check out the older threads.


  • Please avoid downvoting recommendations just because you don't personally like the recommended material; remember that liking is a two-place word. If you can point out a specific flaw in a person's recommendation, consider posting a comment to that effect.
  • If you want to post something that (you know) has been recommended before, but have another recommendation to add, please link to the original, so that the reader has both recommendations.
  • Please post only under one of the already created subthreads, and never directly under the parent media thread.
  • Use the "Other Media" thread if you believe the piece of media you want to discuss doesn't fit under any of the established categories.
  • Use the "Meta" thread if you want to discuss about the monthly media thread itself (e.g. to propose adding/removing/splitting/merging subthreads, or to discuss the type of content properly belonging to each subthread) or for any other question or issue you may have about the thread or the rules.

An alternative way to browse LessWrong 2.0

7 saturn 19 February 2018 02:10AM

This is something I’ve been tinkering with for a while, but I think it’s now complete enough to be generally useful. It’s an alternative frontend for LessWrong 2.0, using the GraphQL API.



  • Fast, even on low-end computers and phones

  • Quickly jump to new comments in a thread with the “.” and “,” keys

  • Archive view makes it easy to browse the best posts of years past

  • Always shows every comment in a thread, no need to “load more”

  • Log in and post using your existing username and password, or create a new account

  • Simple markdown editor

  • Typography enhancements

  • Switch between fixed-width and fluid layouts and several different themes

  • Easily view a comment’s ancestors without scrolling by hovering over the left edge of a comment tree

Thanks to Said Achmiz for designing the themes and writing much of the frontend JavaScript.

Give it a try: https://www.greaterwrong.com
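
For readers curious what using the GraphQL API looks like, here is a minimal sketch in Python (not taken from the post). The endpoint URL, query shape, and field names are assumptions for illustration only; check the actual API schema before relying on them.

    import requests

    # Hypothetical query: fetch titles and URLs of a few recent posts.
    # The view/field names below are assumptions, not a documented schema.
    QUERY = """
    {
      posts(input: {terms: {view: "new", limit: 5}}) {
        results {
          title
          pageUrl
        }
      }
    }
    """

    def fetch_recent_posts():
        # Assumed endpoint; the real one may differ.
        resp = requests.post("https://www.lesswrong.com/graphql",
                             json={"query": QUERY}, timeout=30)
        resp.raise_for_status()
        return resp.json()

    if __name__ == "__main__":
        print(fetch_recent_posts())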

Popular religions suggest extrapolated volition is non-existence and wireheading

6 denisbider 09 February 2018 12:06AM

I'm not sure if this is insightful enough to share here, but I'll try anyway.

A fair amount of thought has gone into how an FAI could figure out what humans actually want. One school of thought argues persuasively that what we say we want is not what we actually want, so what we really want has to be extrapolated.

If we take the end-game promises of popular religions at face value, it occurs to me that Buddhism promises something between non-existence and wireheading (nirvana - "to blow out"), while Christianity promises wireheading (eternal bliss - heaven). I am not familiar enough with other religions to make statements about them.

In my anecdotal experience, it seems to me rationalists are quick to dismiss wireheading and non-existence as desirable possibilities. We experience this grasping desire to live, create, discover, and experience. We're not sure to what end, but we feel this indescribable zest, and we're convinced it's going to be great.

Look at people getting mental orgasms from Elon Musk launching a car into space. Whatever problems you have, space exploration is not going to solve them, and if your life is in harmony, you don't need space exploration. And yet there's this palpable zest about an unspoken implication... That perhaps an age of discovery is dawning, an age of adventure, an age of transcending our present problems and tackling larger ones. An age of being awesome.

There's a type of person that feels this zest, and this type is not a majority. The median person on Earth is confused by the world. They believe in things like Jesus Christ, and they press on in hope that adhering to divine guidance while they attempt to survive the trials and tribulations of life will be rewarded with not having to do this again. To such a person, the sight of two metal meteors descending from the sky with loud sonic booms, igniting engines and landing in synchrony does not necessarily inspire awe or enthusiasm as much as confusion and terror.

We, the few, are the seekers of something we cannot describe, and most of us find it hard to identify with the mindset of the median individual. The median individual does not want the excitement we seek, they just want an end... a release.

February 2018 Media Thread

1 ArisKatsaris 01 February 2018 01:26PM

This is the monthly thread for posting media of various types that you've found that you enjoy. Post what you're reading, listening to, watching, and your opinion of it. Post recommendations to blogs. Post whatever media you feel like discussing! To see previous recommendations, check out the older threads.


  • Please avoid downvoting recommendations just because you don't personally like the recommended material; remember that liking is a two-place word. If you can point out a specific flaw in a person's recommendation, consider posting a comment to that effect.
  • If you want to post something that (you know) has been recommended before, but have another recommendation to add, please link to the original, so that the reader has both recommendations.
  • Please post only under one of the already created subthreads, and never directly under the parent media thread.
  • Use the "Other Media" thread if you believe the piece of media you want to discuss doesn't fit under any of the established categories.
  • Use the "Meta" thread if you want to discuss about the monthly media thread itself (e.g. to propose adding/removing/splitting/merging subthreads, or to discuss the type of content properly belonging to each subthread) or for any other question or issue you may have about the thread or the rules.

Open thread, January 29 - ∞

6 LessWrong 28 January 2018 10:44PM

Might as well go with a bang.

European Community Weekend 2018 Announcement

4 DreamFlasher 25 January 2018 01:49PM

We are excited to announce this year's European LessWrong Community Weekend. For the fifth time, rationalists from all over Europe (and some from outside Europe) are gathering in Berlin to socialize, have fun, exchange knowledge and skills, and have interesting discussions.

The event takes place September 7th to September 9th and, like last year, it will be held in the beautiful Jugendherberge Wannsee, which contains a large room for central events, several seminar rooms, and lots of comfortable spaces inside and out to socialize or relax.

This is a community-driven event. That means that while there will be a keynote and pre-planned content, the bulk of the schedule will be filled by the participants. There will be space to give talks, short or long, provide workshops, or just gather some people to do an activity together. In previous years we had the talks, lightning talks and workshops you would expect, as well as lighter activities such as morning workouts, meditation sessions, authentic relating games, swimming in the lake, and more. Of course, there will also be time to reconnect with friends and form new connections with other aspiring rationalists.

Some valuable information

Most of the talks and discussions will be held in English, so you do not need to be able to speak German to attend.

The ticket price of €150 includes accommodation for two nights, on-site meals (breakfast, lunch, dinner) and snacks, and a welcome lunch on Friday at 12:00.

The event wraps up Sunday afternoon around 15:00. In the days after the weekend, participants are invited to stay in Berlin a little longer to explore the city, go bouldering, play frisbee, etc. While this is not part of the official event, we will coordinate couch-surfing opportunities to avoid the need for hotels.


If you have any questions, please email us at lwcw.europe@gmail.com.


Looking forward to seeing you there,

The Community Weekend organizers and LessWrong Deutschland e.V.

[Paper]: Artificial Intelligence in Life Extension: from Deep Learning to Superintelligence

0 turchin 04 January 2018 02:28PM

There are two views on the best strategy among transhumanists and rationalists: the first holds that one must invest in life extension technologies, and the second that it is necessary to create an aligned AI that will solve all problems, including giving us immortality or even something better. In our article, we show that these two points of view do not contradict each other, because it is the development of AI that will be the main driver of increased life expectancy in the coming years, and as a result, even currently living people can benefit from (and contribute to) the future superintelligence in several ways.

Firstly, the use of machine learning and narrow AI will allow the study of aging biomarkers and combinations of geroprotectors, and this will produce an increase in life expectancy of several years, which means that tens of millions of people will live long enough to survive until the date of the creation of the superintelligence (whenever it happens) and will be saved from death. In other words, the current application of narrow AI to life extension gives us a chance to reach “longevity escape velocity”, and the rapid growth of AI will be the main factor that will, like the wind, help to increase this velocity.
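
As a rough illustration of what “longevity escape velocity” means, here is a toy sketch (all numbers are made up for illustration, not taken from the paper): if medicine adds more than one year of remaining life expectancy per calendar year, expected remaining lifespan never runs out.

    def years_survived(remaining_life_expectancy, annual_gain, horizon=200):
        # Simulate how long expected remaining lifespan stays positive.
        years = 0
        while remaining_life_expectancy > 0 and years < horizon:
            remaining_life_expectancy -= 1.0          # one year passes
            remaining_life_expectancy += annual_gain  # medicine adds this much back
            years += 1
        return years

    print(years_survived(30, 0.2))   # below escape velocity: runs out after a few decades
    print(years_survived(30, 1.1))   # at/above escape velocity: limited only by the horizon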

Secondly, we can—here in the present—utilize some possibilities of the future superintelligence, by collecting data for “digital immortality”. Based on these data, the future AI can reconstruct the exact model of our personality, and also solve the identity problem. At the same time, the collection of medical data about the body will help both now—as it can train machine learning systems in predicting diseases—and in the future, when it becomes part of digital immortality. By subscribing to cryonics, we can also tap into the power of the future superintelligence, since without it, a successful reading of information from the frozen brain is impossible.

Thirdly, there are some grounds for assuming that medical AI will be safer. It is clear that fooming can occur with any AI. But the development of medical AI will accelerate the development of brain-computer interfaces (BCIs), such as Neuralink, and this will increase the chance of AI appearing not separately from humans, but as a product of integration with a person. As a result, a human mind will remain part of the AI, and from within, the human will direct its goal function. Actually, this is also Elon Musk’s vision, and he wants to commercialize his Neuralink through the treatment of diseases. In addition, if we assume that the principle of orthogonality may have exceptions, then any medical AI aimed at curing humans will be more likely to have benevolence as its terminal goal.

As a result, by developing AI for life extension, we make AI safer and increase the number of people who will survive until the creation of superintelligence. Thus, there is no contradiction between the two main approaches to improving human life via the use of new technologies.

Moreover, for radical life extension with the help of AI, it is necessary to take concrete steps right now: to collect data for digital immortality, to join patient organizations in order to combat aging, and to participate in clinical trials of geroprotector combinations and in computer analysis of aging biomarkers. We see our article as a motivational pitch that will encourage the reader to fight for personal and global radical life extension.

In order to substantiate all of these conclusions, we conducted an extensive analysis of existing start-ups and directions in the field of AI applications for life extension, and we have identified the beginnings of many of these trends, reflected in the specific business plans of companies.


Michael Batin, Alexey Turchin, Markov Sergey, Alice Zhila, David Denkenberger

“Artificial Intelligence in Life Extension: from Deep Learning to Superintelligence”

Informatica 41 (2017) 401–417: http://www.informatica.si/index.php/informatica/article/view/1797

[Link] The Peculiar Difficulty of Social Science

0 Crux 04 January 2018 06:33AM

[Link] Paper: Superintelligence as a Cause or Cure for Risks of Astronomical Suffering

1 Kaj_Sotala 03 January 2018 02:39PM

How I accidentally discovered the pill to enlightenment but I wouldn’t recommend it.

4 Elo 03 January 2018 12:37AM

Main post:  http://bearlamp.com.au/how-i-accidentally-discovered-the-pill-to-enlightenment-but-i-wouldnt-recommend-it/

Brief teaser:

Eastern enlightenment is not what you think.  I mean, maybe it is.  But it’s probably not.  There’s a reason it’s so elusive, and there’s a reason that it hasn’t joined western science and the western world the way that curiosity and discovery have as a driving force.

This is the story of my mistake: accidentally discovering enlightenment.

February 2017

I was noticing some weird symptoms.  I felt cold.  Which was strange because I have never been cold.  Nicknames include “fire” and “hot hands”, my history includes a lot of bad jokes about how I am definitely on fire.  I am known for visiting the snow in shorts and a t-shirt.  I hit 70kg, the least fat I have ever had in my life.  And that was the only explanation I had.  I asked a doctor about it, I did some reading – circulation problems.  I don’t have circulation problems at the age of 25.  I am more fit than I have ever been in my life.  I look into hesperidin (orange peel) and eat a few whole oranges, peel included.  No change.  I look into other blood pressure supplements, other capillary modifying supplements…  Other ideas to investigate.  I decided I couldn’t be missing something because there was nothing to be missing.  I would have read it somewhere already.  So I settled for the obvious answer.  Being skinnier was making me colder.

Flashback to February 2016

This is where it all begins.  I move out of my parents’ house into an apartment with a girl I have been seeing for under 6 months.  I weigh around 80kg (that’s 12.5 stone or 176 pounds or 2822 ounces for our imperial friends).  Life happens and by March I am on my own.  I decide to start running.  Make myself a more desirable human.

I taught myself a lot about routines and habits and actually getting myself to run. Running is hard.  Actually, running is easy.  Leaving the house is hard.  But I work that out too.

For the rest of the post please visit: http://bearlamp.com.au/how-i-accidentally-discovered-the-pill-to-enlightenment-but-i-wouldnt-recommend-it/

January 2018 Media Thread

0 ArisKatsaris 01 January 2018 02:11AM

This is the monthly thread for posting media of various types that you've found that you enjoy. Post what you're reading, listening to, watching, and your opinion of it. Post recommendations to blogs. Post whatever media you feel like discussing! To see previous recommendations, check out the older threads.


  • Please avoid downvoting recommendations just because you don't personally like the recommended material; remember that liking is a two-place word. If you can point out a specific flaw in a person's recommendation, consider posting a comment to that effect.
  • If you want to post something that (you know) has been recommended before, but have another recommendation to add, please link to the original, so that the reader has both recommendations.
  • Please post only under one of the already created subthreads, and never directly under the parent media thread.
  • Use the "Other Media" thread if you believe the piece of media you want to discuss doesn't fit under any of the established categories.
  • Use the "Meta" thread if you want to discuss about the monthly media thread itself (e.g. to propose adding/removing/splitting/merging subthreads, or to discuss the type of content properly belonging to each subthread) or for any other question or issue you may have about the thread or the rules.

[Link] 2018 AI Safety Literature Review and Charity Comparison

2 Larks 20 December 2017 10:04PM

[Link] Happiness Is a Chore

0 SquirrelInHell 20 December 2017 11:11AM

Why Bayesians should two-box in a one-shot

1 PhilGoetz 15 December 2017 05:39PM

Consider Newcomb's problem.

Let 'general' be the claim that Omega is always right.

Let 'instance' be the claim that Omega is right about a particular prediction.

Assume you, the player, are not told the rules of the game until after Omega has made its prediction.

Consider 2 variants of Newcomb's problem.


1. Omega is a perfect predictor.  In this variant, you assign a prior of 1 to P(general).  You are then obligated to believe that Omega has correctly predicted your action.  In this case Eliezer's conclusion is correct, and you should one-box.  It's still unclear whether you have free will, and hence have any choice in what you do next, but you can't lose by one-boxing.

But you can't assign a prior of 1 to P(general), because you're a Bayesian.  You derive your prior for P(general) from the (finite) empirical data.  Say you begin with a prior of 0.5 before considering any observations.  Then you observe all of Omega's N predictions, and each time, Omega gets it right, and you update:

P(general | instance) = P(instance | general) P(general) / P(instance)
    = P(general) / P(instance), since P(instance | general) = 1

Omega would need to make an infinite number of correct predictions before P(general) could reach 1.  So this case is theoretically impossible, and should not be considered.
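
A minimal numerical sketch of this point (not from the post): starting from the 0.5 prior above and updating on N correct predictions, the probability of 'general' approaches 1 but never reaches it for any finite N. The likelihood of a correct prediction if Omega is merely fallible (9/10 below) is an illustrative assumption.

    from fractions import Fraction

    def posterior_after_n_correct(n, prior=Fraction(1, 2),
                                  p_correct_if_fallible=Fraction(9, 10)):
        # P(instance | general) = 1, so the weight on "general" is unchanged each
        # round; the weight on "not general" shrinks by p_correct_if_fallible.
        w_general = prior
        w_not = (1 - prior) * p_correct_if_fallible ** n
        return w_general / (w_general + w_not)

    for n in (1, 10, 100):
        p = posterior_after_n_correct(n)
        # Exact arithmetic: the posterior climbs toward 1 but "p == 1" stays False.
        print(n, float(p), p == 1)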


2. Omega is a "nearly perfect" predictor.  You assign P(general) a value very, very close to 1.  You must, however, do the math and try to compare the expected payoffs, at least in an order-of-magnitude way, and not just use verbal reasoning as if we were medieval scholastics.

The argument for two-boxing is that your action now can't affect what Omega did in the past.  That is, we are using a model which includes not just P(instance | general), but also the interaction of your action, the contents of the boxes, and the claim that Omega cannot violate causality.  The probability that P($1M box is empty | you one-box) = P($1M box is empty | you two-box) is at least P(Omega cannot violate causality), and that needs to be entered into the computation.

Numerically, two-boxers claim that the high probability they assign to our understanding of causality being basically correct more than cancels out the high probability of Omega being correct.

The argument for one-boxing is that you aren't entirely sure you understand physics, but you know Omega has a really good track record--so good that it is more likely that your understanding of physics is false than that you can falsify Omega's prediction.  This is a strict reliance on empirical observations as opposed to abstract reason: count up how often Omega has been right and compute a prior.

However, if we're going to be strict empiricists, we should double down on that, and set our prior on P(cannot violate causality) strictly empirically--based on all observations regarding whether or not things in the present can affect things in the past.

This includes up to every particle interaction in our observable universe.  The number is not so high as that, as probably a large number of interactions could occur in which the future affects the past without our noticing.  But the number of observations any one person has made in which events in the future seem to have failed to affect events in the present is certainly very large, and the accumulated wisdom of the entire human race on the issue must provide more bits in favor of the hypothesis that causality can't be violated, than the bits for Omega's infallibility based on the comparatively paltry number of observations of Omega's predictions, unless Omega is very busy indeed.  And even if Omega has somehow made enough observations, most of them are as inaccessible to you as observations of the laws of causality working on the dark side of the moon.  You, personally, cannot have observed Omega make more correct predictions than the number of events you have observed in which the future failed to affect the present.
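
As a sketch of this bit-counting comparison (all counts and likelihood ratios below are illustrative assumptions, not measurements), evidence can be tallied as log-odds:

    import math

    def bits_of_evidence(n_observations, likelihood_ratio_per_obs=2.0):
        # Treat each observation as multiplying the odds by a fixed likelihood
        # ratio; total bits of evidence = log2 of the combined odds ratio.
        return n_observations * math.log2(likelihood_ratio_per_obs)

    causality_bits = bits_of_evidence(10**9)  # lifetime of ordinary observations (assumed count)
    omega_bits = bits_of_evidence(100)        # observed Omega predictions (assumed count)
    print(causality_bits, omega_bits)         # the former dwarfs the latter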

You could compute a new payoff matrix that made it rational to one-box, but the ratio between the payoffs would need to be many orders of magnitude higher.  You'd have to compute it in utilons rather than dollars, because the utility of dollars doesn't scale linearly.  And that means you'd run into the problem that humans have some upper bound on utility--they aren't cognitively complex enough to achieve utility levels 10^10 times greater than "won $1,000".  So it still might not be rational to one-box, because the utility payoff under the one box might need to be larger than you, as a human, could experience.
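
To make the order-of-magnitude comparison concrete, here is a sketch of the computation described above, using the standard $1,000/$1,000,000 payoffs. It follows the post's model in letting your choice influence the opaque box only within the sliver of probability where causality can be violated; the values of p_retrocausal and p_box_full_anyway are illustrative assumptions.

    def expected_values(p_retrocausal, p_box_full_anyway=0.5,
                        small=1_000, big=1_000_000):
        # If causality can be violated, Omega's prediction tracks your actual choice.
        ev_one_retro, ev_two_retro = big, small
        # If causality cannot be violated, the opaque box is already full or empty
        # independently of your choice, so two-boxing simply adds `small`.
        ev_one_fixed = p_box_full_anyway * big
        ev_two_fixed = p_box_full_anyway * big + small
        ev_one = p_retrocausal * ev_one_retro + (1 - p_retrocausal) * ev_one_fixed
        ev_two = p_retrocausal * ev_two_retro + (1 - p_retrocausal) * ev_two_fixed
        return ev_one, ev_two

    # With p_retrocausal vanishingly small, two-boxing wins by roughly $1,000;
    # one-boxing only wins once p_retrocausal exceeds about 1/1000.
    for p in (1e-12, 1e-3, 1e-2):
        print(p, expected_values(p))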



The case in which you get to think about what to do before Omega studies you and makes its decision is more complicated, because your probability calculation then also depends on what you think you would have done before Omega made its decision.  This only affects the partition of your probability calculation in which Omega can alter the past, however, so numerically it doesn't make a big difference.

The trick here is that most statements of Newcomb's are ambiguous as to whether you are told the rules before Omega studies you, and as to which decision they're asking you about when they ask if you one-box or two-box.  Are they asking about what you pre-commit to, or what you eventually do?  These decisions are separate, but not isolatable.

As long as we focus on the single decision at the point of action, then the analysis above (modified as just mentioned) still follows.  If we ask what the player should plan to do before Omega makes its decision, then the question is just whether you have a good enough poker face to fool Omega.  Here it takes no causality violation for Omega to fill the boxes in accordance with your plans, so that factor does not enter in, and you should plan to one-box.

If you are a deterministic AI, that implies that you will one-box.  If you're a GOFAI built according to the old-fashioned symbolic logic AI designs talked about on LW (which, BTW, don't work), it implies you will probably one-box even if you're not deterministic, as otherwise you would need to be inconsistent, which is not allowed with GOFAI architectures.  If you're a human, you'd theoretically be better off if you could suddenly see things differently when it's time to choose boxes, but that's not psychologically plausible.  In no case is there a paradox, or any real difficulty to the decision to one-box.

Iterated Games

Everything changes with iterated interactions.  It's useful to develop a reputation for one-boxing, because this may convince people that you will keep your word even when it seems disadvantageous to you.  It's useful to convince people that you would one-box, and it's even beneficial, in certain respects, to spread the false belief in the Bayesian community that Bayesians should one-box.

Read Eliezer's post carefully, and I think you'll agree that the reasoning Eliezer gives for one-boxing is not that it is the rational solution to a one-off game--it's that it's a winning policy to be the kind of person who one-boxes.  That's not an argument that the payoff matrix of an instantaneous decision favors one-boxing; it's an argument for a LessWrongian morality.  It's the same basic argument as that honoring commitments is a good long-term strategy.  But the way Eliezer stated it has given many people the false impression that one-boxing is actually the rational choice in an instantaneous one-shot game (and that's the only interpretation which would make it interesting).

The one-boxing argument is so appealing because it offers a solution to difficult coordination problems.  It makes it appear that rational altruism and a rational utopia are within our reach.

But this is wishful thinking, not math, and I believe that the social norm of doing the math is even more important than a social norm of one-boxing.

The map of "Levels of defence" in AI safety

0 turchin 12 December 2017 10:44AM

One of the main principles of engineering safety is multilevel defence. When a nuclear bomb accidentally fell from the sky in the US, 3 of 4 defence levels failed. The last one prevented the nuclear explosion: https://en.wikipedia.org/wiki/1961_Goldsboro_B-52_crash

Multilevel defence is used a lot in the nuclear industry and includes different systems of passive and active safety, ranging from the use of delayed neutrons for reaction control to control rods, containment buildings and exclusion zones.

Here I present a look at AI safety from the point of view of multilevel defence. This is mainly based on two of my as-yet-unpublished articles: “Global and local solutions to AI safety” and “Catching treacherous turn: multilevel AI containment system”.

The special property of multilevel defence in the case of AI is that most of the protection comes from the first level alone, which is AI alignment. The other levels have progressively smaller chances of providing any protection, as the power of a self-improving AI will grow after it breaks through each successive level. So we might ignore all levels after AI alignment, but, oh Houston, we have a problem: based on the current speed of AI development, it seems that powerful and dangerous AI could appear within several years, while AI safety theory needs several decades to be created.

The map is intended to demonstrate a general classification principle of the defence levels in AI safety, but not to list all known ideas on the topic. I marked in yellow the boxes which are part of MIRI’s plan, according to my understanding.

I also add my personal probability estimates as to whether each level will work (under the condition that AI risks are the only global risk, and previous levels have failed). 
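
As a sketch of how such conditional estimates combine (the numbers below are illustrative placeholders, not the estimates from the map), a catastrophe requires every level to fail in turn:

    def p_catastrophe(level_success_probs):
        # Each entry is P(this level stops the catastrophe | all previous levels failed).
        p_all_fail = 1.0
        for p in level_success_probs:
            p_all_fail *= (1.0 - p)  # this level must also fail
        return p_all_fail

    levels = [0.5,   # AI alignment: the level carrying most of the protection
              0.1,   # containment / boxing
              0.05,  # global coordination after a breakout
              0.01]  # last-ditch measures
    print(p_catastrophe(levels))  # probability that every defence level fails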

The principles of the construction of the map are similar to my “plan of x-risks prevention” map and my “immortality map”, which are also based around the idea of the multilevel defence.

pdf: https://goo.gl/XH3WgK 


The Critical Rationalist View on Artificial Intelligence

0 Fallibilist 06 December 2017 05:26PM

Critical Rationalism (CR) is being discussed on some threads here at Less Wrong (e.g., here, here, and here). It is something that Critical Rationalists such as myself think contributors to Less Wrong need to understand much better. Critical Rationalists claim that CR is the only viable fully-fledged epistemology known. They claim that current attempts to specify a Bayesian/Inductivist epistemology are not only incomplete but cannot work at all. The purpose of this post is not to argue these claims in depth but to summarize the Critical Rationalist view on AI and also how this speaks to things like the Friendly AI Problem. Some of the ideas here may conflict with ideas you think are true, but understand that these ideas have been worked on by some of the smartest people on the planet, both now and in the past. They deserve careful consideration, not a drive past. Less Wrong says it is one of the urgent problems of the world that progress is made on AI. If smart people in the know are saying that CR is needed to make that progress, and if you are an AI researcher who ignores them, then you are not taking the AI urgency problem seriously.

Universal Knowledge Creators

Critical Rationalism [1] says that human beings are universal knowledge creators. This means we can create any knowledge which it is possible to create. As Karl Popper first realized, the way we do this is by guessing ideas and by using criticism to find errors in our guesses. Our guesses may be wrong, in which case we try to make better guesses in the light of what we know from the criticisms so far. The criticisms themselves can be criticized and we can and do change those. All of this constitutes an evolutionary process. Like biological evolution, it is an example of evolution in action. This process is fallible: guaranteed certain knowledge is not possible because we can never know how an error might be exposed in the future. The best we can do is accept a guessed idea which has withstood all known criticisms. If we cannot find such, then we have a new problem situation about how to proceed and we try to solve that. [2]

Critical Rationalism says that an entity is either a universal knowledge creator or it is not. There is no such thing as a partially universal knowledge creator. So animals such as dogs are not universal knowledge creators — they have no ability whatsoever to create knowledge. What they have are algorithms pre-programmed by biological evolution that can be, roughly speaking, parameter-tuned. These algorithms are sophisticated and clever and beyond what humans can currently program, but they do not confer any knowledge creation ability. Your pet dog will not move beyond its repertoire of pre-programmed abilities and start writing posts to Less Wrong. Dogs' brains are universal computers, however, so it would be possible in principle to reprogram your dog’s brain so that it becomes a universal knowledge creator. This would be a remarkable feat because it would require knowledge of how to program an AI and also of how to physically carry out the reprogramming, but your dog would no longer be confined to its pre-programmed repertoire: it would be a person.


The reason there are no partially universal knowledge creators is similar to the reason there are no partially universal computers. Universality is cheap. It is why washing machines have general purpose chips and dogs’ brains are universal computers. Making a partially universal device is much harder than making a fully universal one, so it is better just to make a universal one and program it. The CR method described above for how people create knowledge is universal because there are no limits to the problems it applies to. How would one limit it to just a subset of problems? To implement that would be much harder than implementing the fully universal version. So if you meet an entity that can create some knowledge, it will have the capability for universal knowledge creation.

These ideas imply that AI is an all-or-none proposition. It will not come about by degrees where there is a progression of entities that can solve an ever widening repertoire of problems. There will be no climb up such a slope. Instead, it will happen as a jump: a jump to universality. This is in fact how intelligence arose in humans. Some change - it may have been a small change - crossed a boundary and our ancestors went from having no ability to create knowledge to a fully universal ability. This kind of jump to universality happens in other systems too. David Deutsch discusses examples in his book The Beginning of Infinity.


People will point to systems like AlphaGo, the Go playing program, and claim it is a counter-example to the jump-to-universality idea. They will say that AlphaGo is a step on a continuum that leads to human level intelligence and beyond. But it is not. Like the algorithms in a dog’s brain, AlphaGo is a remarkable algorithm, but it cannot create knowledge in even a subset of contexts. It cannot learn how to ride a bicycle or post to Less Wrong. If it could do such things it would already be fully universal, as explained above. Like the dog’s brain, AlphaGo uses knowledge that was put there by something else: for the dog it was by evolution, and for AlphaGo it was by its programmers; they expended the creativity. 


As human beings are already universal knowledge creators, no AI can exist at a higher level. They may have better hardware and more memory etc, but they will not have better knowledge creation potential than us. Even the hardware/memory advantage of AI is not much of an advantage, for human beings already augment their intelligence with devices such as pencil and paper and computers, and we will continue to do so.

Becoming Smarter

Critical Rationalism, then, says AI cannot recursively self-improve so that it acquires knowledge creation potential beyond what human beings already have. It will be able to become smarter through learning but only in the same way that humans are able to become smarter: by acquiring knowledge and, in particular, by acquiring knowledge about how to become smarter. And, most of all, by learning good philosophy, for it is in that field that we learn how to think better and how to live better. All this knowledge can only be learned through the creative process of guessing ideas and error-correction by criticism, for it is the only known way intelligences can create knowledge.


It might be argued that AI's will become smarter much faster than we can because they will have much faster hardware. In regard to knowledge creation, however, there is no direct connection between speed of knowledge creation and underlying hardware speed. Humans do not use the computational resources of their brains to the maximum. This is not the bottleneck to us becoming smarter faster. It will not be for AI either. How fast you can create knowledge depends on things like what other knowledge you have and some ideas may be blocking other ideas. You might have a problem with static memes (see The Beginning of Infinity), for example, and these could be causing bias, self-deception, and other issues. AI's will be susceptible to static memes, too, because memes are highly adapted ideas evolved to replicate via minds.

Taking Children Seriously

One implication of the arguments above is that AI's will need parenting, just as we must parent our children. CR has a parenting theory called Taking Children Seriously (TCS). It should not be surprising that CR has such a theory for CR is after all about learning and how we acquire knowledge. Unfortunately, TCS is not itself taken seriously by most people who first hear about it because it conflicts with a lot of conventional wisdom about parenting. It gets dismissed as "extremist" or "nutty", as if these were good criticisms rather than just the smears they actually are. Nevertheless, TCS is important and it is important for those who wish to raise an AI.


One idea TCS has is that we must not thwart our children’s rationality, for example, by pressuring them and making them do things they do not want to do. This is damaging to their intellectual development and can lead to them disrespecting rationality. We must persuade using reason and this implies being prepared for the possibility we are wrong about whatever matter was in question. Common parenting practices today are far from optimally rational and are damaging to children’s rationality.


Artificial intelligences will have the same problem of bad parenting practices, and this will also harm their intellectual development. So AI researchers should be thinking right now about how to prevent this. They need to learn how to parent their AI’s well. For if not, AI’s will be beset by the same problems our children currently face. CR says we already have the solution: TCS. CR and TCS are in fact necessary to do AI in the first place.


Critical Rationalism and TCS say you cannot upload knowledge into an AI. The idea that you can is a version of the bucket theory of the mind which says that "there is nothing in our intellect which has not entered it through the senses". The bucket theory is false because minds are not passive receptacles into which knowledge is poured. Minds must always selectively and actively think. They must create ideas and criticism, and they must actively integrate their ideas. Editing the memory of an AI to give it knowledge means none of this would happen. You cannot upload knowledge into an AI or otherwise make it acquire knowledge; the best you could do is present something to it for its consideration and persuade the AI to recreate the knowledge afresh in its own mind through guessing and criticism about what was presented.

Formalization and Probability Theory

Some reading this will object because CR and TCS are not formal enough — there is not enough maths for Critical Rationalists to have a true understanding! The CR reply to this is that it is too early for formalization. CR warns that you should not have a bias about formalization: there is high quality knowledge in the world that we do not know how to formalize but it is high quality knowledge nevertheless. Not yet being able to formalize this knowledge does not reflect on its truth or rigor.


At this point you might be waving your E. T. Jaynes in the air or pointing to ideas like Bayes' Theorem, Occam's Razor, Kolmogorov Complexity, and Solomonoff Induction, and saying that you have achieved some formal rigor and that you can program something. Critical Rationalists say that you are fooling yourself if you think you have got a workable epistemology there. For one thing, you confuse the probability of an idea being true with an idea about the probability of an event. We have no problem with ideas about the probabilities of events but it is a mistake to assign probabilities to ideas. The reason is that you have no way to know how or if an idea will be refuted in the future. Assigning a probability is to falsely claim some knowledge about that. Furthermore, an idea that is in fact false can have no objective prior probability of being true. The extent to which Bayesian systems work at all is dependent on the extent to which they deal with the objective probability of events (e.g., AlphaGo). In CR, the status of ideas is either "currently not problematic" or "currently problematic"; there are no probabilities of ideas. CR is a digital epistemology.


Induction is a Myth

Critical Rationalists also ask: what epistemology are you using to judge the truth of Bayes', Occam's, Kolmogorov's, and Solomonoff's ideas? What you are actually using is the method of guessing ideas and subjecting them to criticism: it is CR, but you haven't crystallized it out. And, nowhere, in any of what you are doing, are you using induction. Induction is impossible. Human beings do not do induction, and neither will AI's. Karl Popper explained why induction is a myth many decades ago and wrote extensively about it. He answered many criticisms against his position but despite all this people today still cling to the illusory idea of induction. In his book Objective Knowledge, Popper wrote:


Few philosophers have taken the trouble to study -- or even to criticize -- my views on this problem, or have taken notice of the fact that I have done some work on it. Many books have been published quite recently on the subject which do not refer to any of my work, although most of them show signs of having been influenced by some very indirect echoes of my ideas; and those works which take notice of my ideas usually ascribe views to me which I have never held, or criticize me on the basis of straightforward misunderstandings or misreading, or with invalid arguments.


And so, scandalously, it continues today.


Like the bucket theory of mind, induction presupposes that theory proceeds from observation. This assumption can be clearly seen in Less Wrong's An Intuitive Explanation of Solomonoff Induction:


The problem of induction is this: We have a set of observations (or data), and we want to find the underlying causes of those observations. That is, we want to find hypotheses that explain our data. We’d like to know which hypothesis is correct, so we can use that knowledge to predict future events. Our algorithm for truth will not listen to questions and answer yes or no. Our algorithm will take in data (observations) and output the rule by which the data was created. That is, it will give us the explanation of the observations; the causes.


Critical Rationalists say that all observation is theory-laden. You first need ideas about what to observe -- you cannot just have, a-priori, a set of observations. You don't induce a theory from the observations; the observations help you find out whether a conjectured prior theory is correct or not. Observations help you to criticize the ideas in your theory and the theory itself originated in your attempts to solve a problem. It is the problem context that comes first, not observations. The "set of observations" in the quote, then, is guided by and laden with knowledge from your prior theory but that is not acknowledged.


Also not acknowledged is that we judge the correctness of theories not just by criticising them via observations but also, and primarily, by all types of other criticism. Not only does the quote neglect this but it over-emphasizes prediction and says that what we want to explain is data. Critical Rationalists say what we want to do, first and foremost, is solve problems -- all life is problem solving --  and we do that by coming up with explanations to solve the problems -- or of why they cannot be solved. Prediction is therefore secondary to explanation. Without the latter you cannot do the former.

The "intuitive explanation" is an example of the very thing Popper was complaining about above -- the author has not taken the trouble to study or to criticize Popper's views.


There is a lot more to be said here but I will leave it because, as I said in the introduction, it is not my purpose to discuss this in depth, and Popper already covered it anyway. Go read him. The point I wish to make is that if you care about AI you should care to understand CR to a high standard because it is the only viable epistemology known. And you should be working on improving CR because it is in this direction of improving the epistemology that progress towards AI will be made. Critical Rationalists cannot at present formalize concepts such as "idea", "explanation", "criticism" etc, let alone CR itself, but one day, when we have deeper understanding, we will be able to write code. That part will be relatively easy.


Friendly AI

Let’s see how all this ties in with the Friendly AI Problem. I have explained how AI's will learn as we do — through guessing and criticism — and how they will have no more than the universal knowledge creation potential we humans already have. They will be fallible like us. They will make mistakes. They will be subjected to bad parenting. They will inherit their culture from ours, for it is in our culture that they must begin their lives. They will acquire all the memes our culture has, both the rational memes and the anti-rational memes. They will have the same capacity for good and evil that we do. They will become smarter faster through things like better philosophy and not primarily through hardware upgrades. It follows from all of this that they would be no more a threat than evil humans currently are. But we can make their lives better by following things like TCS.


Human beings must respect the right of AI to life, liberty, and the pursuit of happiness. It is the only way. If we do otherwise, then we risk war and destruction and we severely compromise our own rationality and theirs. Similarly, they must respect our right to the same.



[1] The version of CR discussed is an update to Popper's version and includes ideas by the quantum-physicist and philosopher David Deutsch.

[2] For more detail on how this works see Elliot Temple's yes-or-no philosophy.

Teaching rationality in a lyceum

1 Hafurelus 06 December 2017 04:57PM

There is one lyceum in Irkutsk (Siberia) that is allowed to form its own study curriculum (which is quite rare in Russia). For example, there was a subject where we watched the lectures of a famous speaking coach. In retrospect, this course turned out to be quite useful.

In light of this opportunity to create new subjects, I thought "What if I introduce them to the idea of teaching Rationality?"
Tomorrow (8th Dec) I meet with the principal and we will discuss the idea of teaching critical thinking, cognitive biases and the like.

There are several questions I want to ask:

1. This idea has definitely been considered before. Were there any cases of it being implemented? If so, are there any statistics about its effectiveness?

2. Are there any shareable materials regarding this issue? For example, course structures of similar projects.

3. The principal will likely be curious about which authorities back this idea. If you approve of it and are someone recognizable, I would be glad if you told me about it.

Questions about the NY Megameetup

2 NancyLebovitz 03 December 2017 02:41PM

I don't have a confirmation that I have space, and I don't know what the location is. Some other people in Philadelphia don't have the information, either.

I'm handling this publicly because we might not be the only ones who need the information.

December 2017 Media Thread

1 ArisKatsaris 01 December 2017 09:02AM

This is the monthly thread for posting media of various types that you've found that you enjoy. Post what you're reading, listening to, watching, and your opinion of it. Post recommendations to blogs. Post whatever media you feel like discussing! To see previous recommendations, check out the older threads.


  • Please avoid downvoting recommendations just because you don't personally like the recommended material; remember that liking is a two-place word. If you can point out a specific flaw in a person's recommendation, consider posting a comment to that effect.
  • If you want to post something that (you know) has been recommended before, but have another recommendation to add, please link to the original, so that the reader has both recommendations.
  • Please post only under one of the already created subthreads, and never directly under the parent media thread.
  • Use the "Other Media" thread if you believe the piece of media you want to discuss doesn't fit under any of the established categories.
  • Use the "Meta" thread if you want to discuss about the monthly media thread itself (e.g. to propose adding/removing/splitting/merging subthreads, or to discuss the type of content properly belonging to each subthread) or for any other question or issue you may have about the thread or the rules.

[Link] Letter from Utopia: Talking to Nick Bostrom

1 morganism 25 November 2017 10:19PM

[Link] Message to Any Future AI: “There are several instrumental reasons why exterminating humanity is not in your interest”

0 turchin 25 November 2017 11:44AM

[Link] Open Letter to MIRI + Tons of Interesting Discussion

0 curi 22 November 2017 09:16PM

[Link] Asgardia - The Space Kingdom

0 morganism 18 November 2017 08:17PM

[Link] Ethical priorities for neurotech and AI - Nature

0 morganism 15 November 2017 09:08AM

[Link] Artificial intelligence and the stability of markets

1 fortyeridania 15 November 2017 02:17AM

[Link] Military AI as a Convergent Goal of Self-Improving AI

0 turchin 13 November 2017 11:25AM

Fables grow around missed natural experiments

1 MaryCh 10 November 2017 09:42PM

So I read Think Like a Freak, and then glanced through a well-intentioned collection of "Reading Comprehension Tests for Schoolchildren" (in Ukrainian), and I was appalled at how easily the latter book dismissed, in favour of drawing the moral, simple observation of the natural experiments it makes a token effort to describe.

There was the story of "the Mowgli Children", two girls who were kidnapped and raised by wildlife, then found by someone and taken back to live as humans. (So what if it is hardly true. When I Googled "feral children", other stories were too similar to this one in the ways that matter, including this one.) It says they never learned to talk, didn't live for long after capture (not longer than 12 years, if I recall right), never became truly a part of human society. The moral is that children need interaction with other people to develop normally, "and the tale of Mowgli is just that, a beautiful tale".

Well yes, it kind of seems just like a beautiful tale right from the point when the wolves start talking; I don't know what kind of kid would miss that before the Reading Comprehension Test but stop believing it afterwards. But anyhow.

What did they die of?

Who answered them when they howled?

Were ever dogs afraid of them?

They did not master human language, but how did they communicate with people? They had to, somehow, or they would not have lived even that long.

And lastly: how do people weigh the sheer impossibility of two little kids ever surviving against the iron certainty that they would not be able to integrate back into human society - weigh it so lightly? If the reader is expected to take this on faith, how can one be anything but amazed that it is at all possible? When I read about other feral children, somehow being found and taken back never seems to mean good news for them, or for anybody else.

I haven't ever read or heard of "the Mowgli Children" in any other context. Only in this one, about three or four times, and yet it was always presented as an "anecdote of science", although everybody understands it leads nowhere (can't ever lead anywhere because ethics forbids recreating the experiment's conditions) and hardly signifies anything.


What other missed natural experiments do you know of?

Less Wrong Lacks Representatives and Paths Forward

1 curi 08 November 2017 07:00PM

In my understanding, there’s no one who speaks for LW, as its representative, and is *responsible* for addressing questions and criticisms. LW, as a school of thought, has no agents, no representatives – or at least none who are open to discussion.


The people I’ve found interested in discussion on the website and slack have diverse views which disagree with LW on various points. None claim LW is true. They all admit it has some weaknesses, some unanswered criticisms. They have their own personal views which aren’t written down, and which they don’t claim to be correct anyway.


This is problematic. Suppose I wrote some criticisms of the sequences, or some Bayesian book. Who will answer me? Who will fix the mistakes I point out, or canonically address my criticisms with counter-arguments? No one. This makes it hard to learn LW’s ideas in addition to making it hard to improve them.


My school of thought (Fallible Ideas – FI – https://fallibleideas.com) has representatives and claims to be correct as far as is known (like LW, it’s fallibilist, so of course we may discover flaws and improve it in the future). It claims to be the best current knowledge, which is currently non-refuted, and has refutations of its rivals. There are other schools of thought which say the same thing – they actually think they’re right and have people who will address challenges. But LW just has individuals who individually chat about whatever interests them without there being any organized school of thought to engage with. No one is responsible for defining an LW school of thought and dealing with intellectual challenges.


So how is progress to be made? Suppose LW, vaguely defined as it may be, is mistaken on some major points. E.g. Karl Popper refuted induction. How will LW find out about its mistake and change? FI has a forum where its representatives take responsibility for seeing challenges addressed, and have done so continuously for over 20 years (as some representatives stopped being available, others stepped up).


Which challenges are addressed? *All of them*. You can’t just ignore a challenge because it could be correct. If you misjudge something and then ignore it, you will stay wrong. Silence doesn’t facilitate error correction. For information on this methodology, which I call Paths Forward, see: https://curi.us/1898-paths-forward-short-summary BTW if you want to take this challenge seriously, you’ll need to click the link; I don’t repeat all of it. In general, having much knowledge is incompatible with saying all of it (even on one topic) upfront in forum posts without using references.


My criticism of LW as a whole is that it lacks Paths Forward (and lacks some alternative of its own to fulfill the same purpose). In that context, my criticisms regarding specific points don’t really matter (or aren’t yet ready to be discussed) because there’s no mechanism for them to be rationally resolved.


One thing FI has done, which is part of Paths Forward, is it has surveyed and addressed other schools of thought. LW hasn’t done this comparably – LW has no answer to Critical Rationalism (CR). People who chat at LW have individually made some non-canonical arguments on the matter that LW doesn’t take responsibility for (and which often involve conceding LW is wrong on some points). And they have told me that CR has critics – true. But which criticism(s) of CR does LW claim are correct and take responsibility for the correctness of? (Taking responsibility for something involves doing some major rethinking if it’s refuted – addressing criticism of it and fixing your beliefs if you can’t. Which criticisms of CR would LW be shocked to discover are mistaken, and then be eager to reevaluate the whole matter?) There is no answer to this, and there’s no way for it to be answered because LW has no representatives who can speak for it and who are participating in discussion and who consider it their responsibility to see that issues like this are addressed. CR is well known, relevant, and makes some clear LW-contradicting claims like that induction doesn’t work, so if LW had representatives surveying and responding to rival ideas, they would have addressed CR.


BTW I’m not asking for all this stuff to be perfectly organized. I’m just asking for it to exist at all so that progress can be made.


Anecdotally, I’ve found substantial opposition from LW people so far to discussing or considering methodology. I think that’s a mistake, because we use methods whenever we discuss or do anything else. I’ve also found substantial resistance to the use of references (including to my own material) – but why should I rewrite a new version of something that’s already written? Text is text and should be treated the same whether it was written in the past or today, and whether it was written by someone else or by me; either way, I’m taking responsibility for it. I think that’s something people don’t understand: they’re used to references being thrown around vaguely and irresponsibly – but they haven’t pointed out any instance where I made that mistake. Ideas should be judged by the idea itself, not by attributes of the source (reference or non-reference).


The Paths Forward methodology is also what I think individuals should personally do – it works the same for a school of thought or an individual. Figure out what you think is true *and take responsibility for it*. For parts that are already written down, endorse them and take responsibility for them. If you use something to speak for you, then if it’s mistaken *you* are mistaken – you need to treat that the same as your own writing being refuted. For stuff that isn’t written down adequately by anyone (in your opinion), it’s your responsibility to write it (either from scratch or using existing material plus your commentary/improvements). This writing needs to be put in public and exposed to criticism, and the criticism needs to actually get addressed (not silently ignored) so there are good Paths Forward. I hoped to find a person using this method, or interested in it, at LW; so far I haven’t. Nor have I found someone who suggested a superior method (or even *any* alternative method to address the same issues) or pointed out a reason Paths Forward doesn’t work.


Some people I talked with at LW seem to still be developing as intellectuals. For lots of issues, they just haven’t thought about it yet. That’s totally understandable. However, I was hoping to find some developed thought which could point out any mistakes in FI or change its mind. I’m seeking primarily peer discussion. (If anyone wants to learn from me, btw, they are welcome to come to my forum. It can also be used to criticize FI. http://fallibleideas.com/discussion-info) Some people also indicated they thought it’d be too much effort to learn about and address rival ideas like CR. But if no one has done that (so there’s no answer to CR they can endorse), then how do they know CR is mistaken? If CR is correct, it’s worth the effort to study! If CR is incorrect, someone had better write that down in public (so CR people can learn about their errors and reform, and so they could perhaps improve CR to no longer be mistaken or point out errors in the criticism of CR).


One of the issues related to this dispute is that I believe we can always proceed with non-refuted ideas (there is a long answer for how this works, but I don’t know how to give a short answer that I expect LW people to understand – especially in the context of the currently-unresolved methodology dispute about Paths Forward). In contrast, LW people typically seem to accept mistakes as just something to put up with, rather than something to try to always fix. So I disagree with ignoring some *known* mistakes, whereas LW people seem to take it for granted that they’re mistaken in known ways. Part of the point of Paths Forward is not to be mistaken in known ways.


Paths Forward is a methodology for organizing schools of thought, ideas, discussion, etc, to allow for unbounded error correction (as opposed to typical things people do like putting bounds on discussions, with discussion of the bounds themselves being out of bounds). I believe the lack of Paths Forward at LW is preventing the resolution of other issues like about the correctness of induction, the right approach to AGI, and the solution to the fundamental problem of epistemology (how new knowledge can be created).

[Link] The Little Dragon is Dead

1 SquirrelInHell 06 November 2017 09:24PM

[Link] AGI

0 curi 05 November 2017 08:20PM

[Link] Kialo -- an online discussion platform that attempts to support reasonable debates

2 mirefek 05 November 2017 12:48PM

[Link] Intent of Experimenters; Halting Procedures; Frequentists vs. Bayesians

1 curi 04 November 2017 07:13PM

[Link] Intercellular competition and the inevitability of multicellular aging

1 Gunnar_Zarncke 04 November 2017 12:32PM

Announcing the AI Alignment Prize

7 cousin_it 03 November 2017 03:45PM

Stronger-than-human artificial intelligence would be dangerous to humanity. It is vital that any such intelligence’s goals are aligned with humanity's goals. Maximizing the chance that this happens is a difficult, important, and under-studied problem.

To encourage more and better work on this important problem, we (Zvi Mowshowitz and Vladimir Slepnev) are announcing a $5000 prize for publicly posted work advancing understanding of AI alignment, funded by Paul Christiano.

This prize will be awarded based on entries gathered over the next two months. If the prize is successful, we will award further prizes in the future.

This prize is not backed by or affiliated with any organization.


Your entry must be published online for the first time between November 3 and December 31, 2017, and contain novel ideas about AI alignment. Entries have no minimum or maximum size. Important ideas can be short!

Your entry must be written by you, and submitted before 9pm Pacific Time on December 31, 2017. Submit your entries either as URLs in the comments below, or by email to apply@ai-alignment.com. We may provide feedback on early entries to allow improvement.

We will award $5000 to between one and five winners. The first place winner will get at least $2500. The second place winner will get at least $1000. Other winners will get at least $500.

Entries will be judged subjectively. Final judgment will be by Paul Christiano. Prizes will be awarded on or before January 15, 2018.

What kind of work are we looking for?

AI Alignment focuses on ways to ensure that future smarter-than-human intelligence will have goals aligned with the goals of humanity. Many approaches to AI Alignment deserve attention. This includes technical and philosophical topics, as well as strategic research about related social, economic, or political issues. A non-exhaustive list of technical and other topics can be found here.

We are not interested in research dealing with the dangers of existing machine learning systems commonly called AI that do not have smarter-than-human intelligence. These concerns are also understudied, but are not the subject of this prize except in the context of future smarter-than-human intelligence. We are also not interested in general AI research. We care about AI Alignment, which may or may not also advance the cause of general AI research.

Problems as dragons and papercuts

1 Elo 03 November 2017 01:41AM

Original post: http://bearlamp.com.au/problems-as-dragons-and-papercuts/

When I started trying to become the kind of person that can give advice, I went looking for dragons.

I figured that if I didn't know the answers, it meant the answers were hard – big monsters with hidden weak spots that you have to find. "Problem solving is hard," I thought.

Problem solving is not something everyone is good at, because problems are hard beasts of a thing.  Right?

For all my searching for problems, I keep coming back to that just not being accurate. Problems are all easy, dumb, simple things. Winning at life is not about taking on the right dragon and finding its weak spots.

Problem solving is about getting the basics down and dealing with every single, "when I was little I imprinted on not liking chocolate and now I have been an anti-chocolate campaigner for so long for reasons that I have no idea about and now it's time to change that".

It seems like the more I look for dragons and beasts the less I find.  And the more problems seem like paper cuts. But it's paper cuts all the way down.  Paper cuts that caused you to argue with your best friend in sixth grade, paper cuts that caused you to sneak midnight snacks when no one was looking, and eat yourself fat and be mad at yourself.  Paper cuts.

I feel like a superhero all dressed up and prepared to fight crime but all the criminals are petty thieves and opportunists that got caught on a bad day. Nothing coordinated, nothing super-villain, and no dragons.

When I was in high school (male with long hair) I used to wear my hair in a ponytail.  For about 4 years.  Every time I would wake up or my hair would dry I would put my hair in a ponytail.  I just did.  That's what I would do.  One day.  One day a girl (who I had not spoken to ever) came up to me and asked me why I did it.  To which I did not have an answer.  From that day forward I realised I was doing a thing I did not need to do.  It's been over 10 years since then and I have that one conversation to thank for changing the way I do that one thing.  I never told her.

That one thing.  That one thing that is irrelevant, and only really meaningful to you because someone said this one thing, this one time, but from the outside it feels like, "so what".  That's what problems are like, and that's what it's like to solve problems.  But.  If you want to be good at solving problems you need to avoid feeling like "so what" and engage your curiosity – search for the feeling of confusion.  Appeal to the need for understanding.  Get into it.

Meta: this has been an idle musing for weeks now. Actually writing took about an hour.

Cross-posted to https://www.lesserwrong.com/posts/MWoxdGwMHBSqNPPKK/problems-as-dragons-and-papercuts

November 2017 Media Thread

1 ArisKatsaris 02 November 2017 12:35AM

This is the monthly thread for posting media of various types that you've found that you enjoy. Post what you're reading, listening to, watching, and your opinion of it. Post recommendations to blogs. Post whatever media you feel like discussing! To see previous recommendations, check out the older threads.


  • Please avoid downvoting recommendations just because you don't personally like the recommended material; remember that liking is a two-place word. If you can point out a specific flaw in a person's recommendation, consider posting a comment to that effect.
  • If you want to post something that (you know) has been recommended before, but have another recommendation to add, please link to the original, so that the reader has both recommendations.
  • Please post only under one of the already created subthreads, and never directly under the parent media thread.
  • Use the "Other Media" thread if you believe the piece of media you want to discuss doesn't fit under any of the established categories.
  • Use the "Meta" thread if you want to discuss about the monthly media thread itself (e.g. to propose adding/removing/splitting/merging subthreads, or to discuss the type of content properly belonging to each subthread) or for any other question or issue you may have about the thread or the rules.

[Link] Simple refutation of the ‘Bayesian’ philosophy of science

1 curi 01 November 2017 06:54AM

[Link] Why Competition in The Politics Industry Is Failing America -pdf

0 morganism 31 October 2017 11:26PM
