Less Wrong is a community blog devoted to refining the art of human rationality. Please visit our About page for more information.

[Link] Does your machine mind? Ethics and potential bias in the law of algorithms

Gunnar_Zarncke 28 June 2017 10:08PM

What useless things did you understand recently?

1 cousin_it 28 June 2017 07:32PM

Please reply in the comments with things you understood recently. The only condition is that they have to be useless in your daily life. For example, "I found this idea that defeats procrastination" doesn't count, because it sounds useful and you might be deluded about its truth. Whereas "I figured out how construction cranes are constructed" qualifies, because you aren't likely to use it and it will stay true tomorrow.

I'll start. Today I understood how Heyting algebras work as a model for intuitionistic logic. The main idea is that you represent sentences as shapes. So you might have two sentences A and B shown as two circles, then "A and B" is their intersection, "A or B" is their union, etc. But "A implies B" doesn't mean one circle lies inside the other, as you might think! Instead it's a shape too, consisting of all points that lie outside A or inside B (or both). There were some other details about closed and open sets, but these didn't cause a problem for me, while "A implies B" made me stumble for some reason. I probably won't use Heyting algebras for anything ever, but it was pretty fun to figure out.
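The picture above can be checked mechanically with finite sets standing in for shapes (in a discrete space every subset is open, so no topology is needed; the sets U, A, B below are made up for illustration):

```python
# Heyting implication on subsets of a small space U, as described above:
# "A implies B" is the set of points outside A or inside B (or both).
from itertools import chain, combinations

U = set(range(6))
A = {0, 1, 2, 3}
B = {2, 3, 4}

implies = (U - A) | B          # the shape for "A implies B"

# Sanity check: the defining (residuation) property of Heyting implication:
# C is contained in (A implies B)  iff  (C intersect A) is contained in B.
all_subsets = chain.from_iterable(
    combinations(sorted(U), r) for r in range(len(U) + 1))
assert all((set(C) <= implies) == ((set(C) & A) <= B) for C in all_subsets)

print(sorted(implies))         # [2, 3, 4, 5]
```

Note that "A implies B" here is a genuine shape of its own, not a relation between A and B, which is exactly the point that's easy to stumble on.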

Your turn!

PS: please don't feel pressured to post something super advanced. It's really, honestly okay to post basic things, like why a stream of tap water narrows as it falls, or why the sky is blue (though I don't claim to understand that one :-))

Kelly and the Beast

0 sen 28 June 2017 10:02AM

Poor models

Consider two alternative explanations to each of the following questions:

  • Why do some birds have brightly-colored feathers? Because (a) evolution has found that they are better able to attract mates with such feathers or (b) that's just how it is.
  • Why do some moths, after a few generations, change color to that of surrounding man-made structures? Because (a) evolution has found that the change in color helps the moths hide from predators or (b) that's just how it is.
  • Why do some cells communicate primarily via mechanical impulses rather than electrochemical impulses? Because: (a) for such cells, evolution has found a trade-off between energy consumption, information transfer, and information processing such that mechanical impulses are preferred or (b) that's just how it is.
  • Why do some cells communicate primarily via electrochemical impulses rather than mechanical impulses? Because: (a) for such cells, evolution has found a trade-off between energy consumption, information transfer, and information processing such that electrochemical impulses are preferred or (b) that's just how it is.

Clearly the first set of explanations is better, but I'd like to say a few things in defense of the second.

  • Evolution's preference for one trait over another could well have nothing to do with mating, predators, energy consumption, information transfer, or information processing. Those are the best theoretical guesses we have, and they have no experimental backing.
  • Evolution works as a justification for contradictory phenomena.
  • The second set of explanations is simpler.
  • The second set of explanations has perfect sensitivity, specificity, precision, etc.

If that’s not enough to convince you, then I propose as a middle-ground another alternative explanation for any situation where evolution alone might be used as such: "I don't have a clue." It's more honest, more informative, and it does more to get people to actually investigate open questions, as opposed to pretending those questions have been addressed in any meaningful way.

Less poor models

When people use evolution as a justification for a phenomenon, what they tend to imagine is this:

  • Changes are gradual.
  • Changes occur on the time scale of generations, not individuals.
  • Duplication and termination are, respectively, positively and negatively correlated with the changing thing.

If you agree, then I’m sure the following questions regarding standard evopsych explanations of social signaling phenomenon X should be easy to answer:

  • What indication is there that the change in adoption of X was gradual?
  • What indication is there that change in adoption of X happens on the time scale of generations and not individuals (i.e., that individuals have little influence in their own local adoption of X)?
  • What constitutes duplication and termination? Is the hypothesized chain of correlation short enough or reliable enough to be convincing? 

If you agreed with the decomposition of “evolution” and disagreed with any of the subsequent questions, then your model of evolution might not be consistent, or you may have a preference for unjustified explanations. In conversation, this isn’t really an issue, but perhaps there are some downsides to using inconsistent models for your personal worldviews.

Optimal models

In March 1956, John Kelly described an equation for betting optimally on a coin-toss game weighted in your favor. If you were expected to gain money on average, and if you could place bets repeatedly, then the Kelly bet would let you grow your principal at the greatest possible rate.

You can read the paper here: http://www.herrold.com/brokerage/kelly.pdf. It’s important for you to be able to read papers like this… 

The argument goes like this. Given a coin that lands on heads with probability p and tails with probability q=1-p, I let you bet k (a fraction of your total) that the coin will land heads. If you win, I give you b*k. If you lose, I take your k. After n rounds, you will have won on average p*n times, and you will have lost q*n times. Your new total will look like this:

W_n = W_0 * (1 + b*k)^(p*n) * (1 - k)^(q*n)

Your bet is optimized when the gradient of this value with respect to k is zero and decreasing in both directions away. Setting the derivative of log(W_n) with respect to k to zero gives the Kelly fraction k* = p - q/b.

You can check easily that the equation is always concave down when the odds are in your favor and when k is between 0 and 1. Note that there is always one local maximum: the value of k found above. There is also one degenerate value, k=1 (all-in every time), at which the log-growth rate is undefined and which, if you plug it into the original equation, leaves you broke.

The Kelly bet makes one key assumption: chance is neither with you nor against you. If you play n games, you will win n*p of them, and you will lose n*q. With this assumption, which often aligns closely with reality, your principal will grow fairly reliably, and it will grow exponentially. Moreover, you will never go broke with a Kelly bet.

There is a second answer though that doesn’t make this assumption: go all-in every time. Your expected winnings, summed over all possible coin configurations, will be:

E[W_n] = W_0 * (p * (1 + b))^n

If you run the numbers, you’ll see that this second strategy often beats the Kelly bet on average, though most outcomes result in you being broke.
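Running the numbers for the game described below (fair coin, b = 3) takes only a few lines; this is a sketch of the standard Kelly arithmetic, not code from the post:

```python
# Kelly vs. all-in on the coin game from the post: fair coin (p = 1/2),
# a win pays b = 3 per unit bet.

p, q, b = 0.5, 0.5, 3.0

# Kelly fraction k* = p - q/b, from maximizing the typical (log) growth rate.
k = p - q / b                                   # = 1/3

# Typical per-round growth factor under Kelly (exactly p*n wins, q*n losses):
kelly_growth = (1 + b * k) ** p * (1 - k) ** q  # ~1.155: reliable exponential growth

# Expected per-round growth factor going all-in every time:
allin_mean = p * (1 + b)                        # = 2.0: far better on average

# ...but the probability of surviving n all-in rounds is only p**n:
n = 40
print(k, round(kelly_growth, 4), allin_mean, p ** n)
```

The tension is visible in the numbers: all-in doubles your wealth in expectation every round, but after 40 rounds the chance you still have anything is about 10^-12; Kelly grows slower in expectation but almost surely.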

So I’ll offer you a choice. We’ll play the coin game with a fair coin. You get 3U (utility) for every 1U you bet if you win, and you lose your 1U otherwise. You can play the game with any amount of utility for up to, say, a trillion rounds. Would you use Kelly’s strategy, with which your utility would almost certainly grow exponentially to be far larger than your initial sum, or would you use the second strategy, which performs far, far better on average, though through which you’ll almost certainly end up turning the world into a permanent hell?

This assumes nothing about the utility function other than that utility can reliably be increased and decreased by specific quantities. If you prefer the Kelly bet, then you’re not optimizing for any utility function on average, and so you’re not optimizing for any utility function at all.

Self-conscious ideology

2 casebash 28 June 2017 05:32AM

Operating outside of ideology is extremely hard, if not impossible. Even groups that see themselves as non-ideological still seem to end up operating within an ideology of some sort.

Take for example Less Wrong. It seems to operate within a few assumptions:

  1. That studying rationality will provide us with a greater understanding of the world.
  2. That studying rationality will improve you as a person.
  3. That science is one of our most important tools for understanding the world.


These assumptions are also subject to some criticisms. Here's one criticism for each of the previous points:

  1. But will it? Or are we dealing with problems that are simply beyond our ability to understand (see epistemic learned helplessness)? Do we really understand how minds work well enough to know whether an uploaded mind would still be "you"?
  2. But religious people are happier.
  3. Hume's critique of induction

I could continue discussing assumptions and possible criticisms, but that would be a distraction from the core point, which is that there are advantages to having a concrete ideology that is aware of its own limitations, as opposed to an implicit ideology that is beyond all criticism.

Self-conscious ideologies also have other advantages:

  • Quick and easy to write since you don't have to deal with all of the special cases.
  • Easy to share and explain. Imagine trying to explain to someone, "Rationality gives us a better understanding of the world, except when it does not". Okay, I'm exaggerating, epistemic humility typically isn't explained that badly, but it certainly complicates sharing.
  • Easier for people to adopt the ideology as a lens through which to examine the world, without needing to assume that it is literally true.

I wrote this post so that people can create self-conscious ideologies and have something to link to so as to avoid having to write up an explanation themselves. Go out into the world and create =P.

a different perspective on physics

0 Florian_Dietz 26 June 2017 10:47PM

(Note: this is anywhere between crackpot and inspiring, based on the people I talked to before. I am not a physicist.)

I have been thinking about a model of physics that is fundamentally different from the ones I have been taught in school and university. It is not a theory, because it does not make predictions. It is a different way of looking at things. I have found that this made a lot of things we normally consider weird a lot easier to understand.

Almost every model of physics I have read about so far is based on the idea that reality consists of stuff inside a coordinate system, and the only question is the dimensionality of the coordinate system. Relativity talks about bending space, but it still treats the existence of space as the norm. But what if there were no dimensions at all?


If we assume that the universe is computable, then dimension-based physics, while humanly intuitive, is unnecessarily complicated. To simulate dimension-based physics, one first needs to define real numbers, which is complicated and requires that numbers be stored with practically infinite precision. Occam's Razor argues against this.

A graph model, in contrast, would be extremely simple from a computational point of view: a set of nodes, each with a fixed number of attributes, plus a set of connections between the nodes, suffices to express the state of the universe. Most importantly, it would suffice for the attributes of nodes to be simple booleans or natural numbers, which are much easier to compute than real numbers. Additionally, transition functions to advance in time would be easy to define as well, since they could just take the form of a set of if-then rules applied to each node in turn. (These transition functions roughly correspond to physical laws in more traditional physical theories.)
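As a toy illustration of this setup, a graph state plus an if-then transition rule might look like the sketch below; the specific rule (become active iff exactly one neighbour is active) is invented purely for the example, not anything proposed in the post:

```python
# Toy version of the proposed model: the universe's state is a set of nodes
# with boolean attributes plus a set of connections. One time step applies
# a simple if-then rule to every node.

nodes = {0: True, 1: False, 2: False, 3: True}
edges = {(0, 1), (1, 2), (2, 3), (3, 0)}        # a 4-cycle

def neighbours(v):
    """All nodes connected to v."""
    return [b if a == v else a for (a, b) in edges if v in (a, b)]

def step(state):
    """Hypothetical rule: a node becomes active iff exactly one
    neighbour is currently active."""
    return {v: sum(state[u] for u in neighbours(v)) == 1 for v in state}

state = step(nodes)
print(state)   # here every node has exactly one active neighbour -> all True
```

Even a rule this simple produces non-trivial dynamics on a non-trivial graph, which is the Conway's-life-style point made below.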


Model reality as a graph structure. That is to say, reality at a point of time is a set of nodes, a set of connections between those nodes, and a set of attributes for each node. There are rules for evolving this graph over time that might be as simple as those in Conway's game of life, but they lead to very complex results due to the complicated structure of the graph.

Connections between nodes can be created or deleted over time according to transition functions.

What we call particles are actually patterns of attributes on clusters of nodes. These patterns evolve over time according to transition functions. Also, since particles are patterns instead of atomic entities, they can in principle be created and destroyed by other patterns.

Our view of reality as (almost) 3-dimensional is an illusion created by the way the nodes connect to each other. This works if the graph (a set of vertices and a set of edges) satisfies the following criterion:

  • There exists a mapping f(v) of vertices to (x,y,z) coordinates such that for any pair of vertices m,n, the Euclidean distance between f(m) and f(n) is approximately equal to the length of the shortest path between m and n (inaccuracies are fine so long as the distance is small, but the approximation should be good at larger distances).
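This criterion is easy to probe on a small example. The sketch below maps the vertices of a 3x3 grid graph to their grid coordinates and compares Euclidean distance against shortest-path (hop) distance; the grid is chosen purely for illustration, and as noted above the match only needs to be rough:

```python
# Compare Euclidean distance under an embedding f(v) with shortest-path
# distance on the graph, for a 3x3 grid where f is the identity.
from collections import deque
from math import dist

side = 3
verts = [(x, y) for x in range(side) for y in range(side)]
edges = {frozenset((v, w)) for v in verts for w in verts
         if abs(v[0] - w[0]) + abs(v[1] - w[1]) == 1}

def hops(src, dst):
    """BFS shortest-path length between two vertices."""
    seen, frontier = {src}, deque([(src, 0)])
    while frontier:
        v, d = frontier.popleft()
        if v == dst:
            return d
        for e in edges:
            if v in e:
                (w,) = e - {v}
                if w not in seen:
                    seen.add(w)
                    frontier.append((w, d + 1))

for pair in [((0, 0), (0, 2)), ((0, 0), (2, 2))]:
    print(pair, hops(*pair), round(dist(*pair), 2))
```

Along an axis the two distances agree exactly (2 hops vs. 2.0); along the diagonal they diverge (4 hops vs. ~2.83), which is the kind of small-scale inaccuracy the criterion tolerates.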

A dimensionless graph model would have no contradiction between quantum physics and relativity. Quantum effects happen when patterns (particles) spread across nodes that still have connections between them besides those connections that make up the primary 3D grid. This also explains why quantum effects exist mostly on small scales: the pattern enforcing 3D grid connections tends to wipe out the entanglements between particles. Space dilation happens because the patterns caused by high-speed travel cause the 3D grid pattern to become unstable, and the illusion that dimensions exist breaks down. There is no contradiction between quantum physics and relativity if the very concept of distance is unreliable. Time dilation is harder to explain, but it can be done. This is left as an exercise for the reader, since I only really understood this graph-based point of view when I realised how that works, and I don't want to spoil the aha moment for you.


This is not really a theory. I am not making predictions, I provide no concrete math, and this idea is not really falsifiable in its most generic forms. Why do I still think it is useful? Because it is a new way of looking at physics, because it makes everything so much easier and more intuitive to understand, and because it makes all the contradictions go away. I may not know the rules by which the graph needs to propagate in order for this to match up with experimental results, but I am pretty sure that someone more knowledgeable in math can figure them out. This is not a theory, but a new perspective under which to create theories.

Also, I would like to note that there are alternative interpretations for explaining relativity and quantum physics under this perspective. The ones mentioned above are just the ones that seem most intuitive to me. I recognize that having multiple ways to explain something is a bad thing for a theory, but since this is not a theory but a refreshing new perspective, I consider this a good thing.

I think that this approach has a lot of potential, but it is difficult for humans to analyse because our brains evolved to deal with 3D structures very efficiently but are not at all optimised to handle arbitrary graph structures with any efficiency. For this reason, coming up with an actual, mathematically complete attempt at a graph-based model of physics would almost certainly require computer simulations for even simple problems.


Do you think the idea has merit?

If not, what are your objections?

Has research in something like this maybe already been done, and I just never heard of it?

Self-modification as a game theory problem

10 cousin_it 26 June 2017 08:47PM

In this post I'll try to show a surprising link between two research topics on LW: game-theoretic cooperation between AIs (quining, Loebian cooperation, modal combat, etc) and stable self-modification of AIs (tiling agents, Loebian obstacle, etc).

When you're trying to cooperate with another AI, you need to ensure that its action will fulfill your utility function. And when doing self-modification, you also need to ensure that the successor AI will fulfill your utility function. In both cases, naive utility maximization doesn't work, because you can't fully understand another agent that's as powerful and complex as you. That's a familiar difficulty in game theory, and in self-modification it's known as the Loebian obstacle (fully understandable successors become weaker and weaker).

In general, any AI will be faced with two kinds of situations. In "single player" situations, you're faced with a choice like eating chocolate or not, where you can figure out the outcome of each action. (Most situations covered by UDT are also "single player", involving identical copies of yourself.) Whereas in "multiplayer" situations your action gets combined with the actions of other agents to determine the outcome. Both cooperation and self-modification are "multiplayer" situations, and are hard for the same reason. When someone proposes a self-modification to you, you might as well evaluate it with the same code that you use for game theory contests.

If I'm right, then any good theory for cooperation between AIs will also double as a theory of stable self-modification for a single AI. That means neither problem can be much easier than the other, and in particular self-modification won't be a special case of utility maximization, as some people seem to hope. But on the plus side, we need to solve one problem instead of two, so creating FAI becomes a little bit easier.

The idea came to me while working on this mathy post on IAFF, which translates some game theory ideas into the self-modification world. For example, Loebian cooperation (from the game theory world) can be used to get around the Loebian obstacle (from the self-modification world) - two LW ideas with the same name that people didn't think to combine before!

Open thread, June 26 - July 2, 2017

1 Thomas 26 June 2017 06:12AM
If it's worth saying, but not worth its own post, then it goes here.

Notes for future OT posters:

1. Please add the 'open_thread' tag.

2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)

3. Open Threads should start on Monday, and end on Sunday.

4. Unflag the two options "Notify me of new top level comments on this article" and "

A Call for More Policy Analysis

1 madhatter 25 June 2017 02:24PM

I would like to see more concrete discussion and analysis of AI policy in the EA community, and on this forum in particular.


AI policy would broadly encompass all relevant actors meaningfully influencing the future and impact of AI: most likely governments, research labs and institutes, and international organizations.


Some initial thoughts and questions I have on this topic:


1)     How do we ensure all research groups with a likely chance of developing AGI know and care about the relevant work in AI safety (which hopefully is satisfactorily worked out by then)?


Some possibilities: trying to make AI safety a common feature of computer science curricula, general community building and more AI safety conferences, and more popular culture conveying non-Terminatoresque illustrations of the risk.



2)     What strategies might be available for laggards in a race scenario to retard progress of leading groups, or to steal their research?

Some possibilities in no particular order: espionage, malware, financial or political pressures, power outages, surveillance of researchers.


3)     Will there be clear warning signs?


Not just in general AI progress, but locally near the leading lab. Observable changes in stock price, electricity output, etc.


4)     Openness or secrecy?

Thankfully the Future of Life Institute is working on this one. As I understand it, the consensus is that openness is advisable now, but secrecy may be necessary later. So what mechanisms are available to keep research private?


5)     How many players will there be with a significant chance of developing AGI? Which players?


6)     Is an arms race scenario likely?


7)     What is the most likely speed of takeoff?



8)     When and where will AGI be developed?


    Personally, I believe the use of forecasting tournaments to get a better sense of when and where AGI will arrive would be a very worthwhile use of our time and resources. After reading Superforecasting by Dan Gardner and Philip Tetlock, I was struck by how effective these tournaments are at singling out those with low Brier scores and using them to get better-than-average predictions of future circumstances.



Perhaps the EA community could fund a forecasting tournament on the Good Judgment Project posing questions attempting to ascertain when AGI will be developed (I am guessing superforecasters will make more accurate predictions than AI experts on this topic), which research groups are the most likely candidates to be the developers of the first AGI, and other relevant questions. We would need to formulate the questions such that they are specific enough for use in the tournament. 
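The Brier scores mentioned above are simple to compute; here is a minimal sketch (the forecast numbers are made up for illustration):

```python
# Brier score: mean squared error between forecast probabilities and
# binary outcomes. Lower is better; 0 is a perfect forecaster.

def brier(forecasts, outcomes):
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

# A sharp, well-calibrated forecaster beats a pure hedger:
sharp = brier([0.9, 0.1, 0.8], [1, 0, 1])   # ~0.02
hedge = brier([0.5, 0.5, 0.5], [1, 0, 1])   # 0.25
print(sharp, hedge)
```

Forecasting tournaments rank participants by scores like this over many resolved questions, which is how superforecasters are identified in the first place.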

Effective Altruism : An idea repository

1 Onemorenickname 25 June 2017 12:56AM

Meta-information:

Personal Introduction

I came to define myself as a non-standard Effective Altruist. I’ve always been interested in effective altruism, even before I’d heard of EA. When I was younger, I simply thought I was altruistic, and that what other people did was … noise at best. Basically, naive ways to relieve one’s conscience and perpetuate one’s culture.

Since primary school I have thought about global problems and solutions to them. So much so that the word “project” internally connotes “a project solving some global problem” to me. As such, EA should have interested me.

However, it didn’t. The main reason was that I saw EA as just another charity. I’ve always been skeptical toward charity, the reasons being “they think too small” and “there is too much funding for standard solutions rather than for finding new ones”.

I think this exemplifies a problem about EA’s communication.

A Communication Problem

Most people I know got to know Effective Altruism through EffectiveAltruism.org.

Because of that website, these people see EA as a closed organization that helps people direct funds to better charities and find better careers.

That was my opinion of EA until I saw the grant offer: a closed organization with already-defined solutions wouldn’t fund new ideas. As such, I changed my outlook on EA. I researched it a bit more and found an open and diverse community.

But I am a busy person, so I have to use filters before putting more time into researching something. I made my impression from:

What convinced me of that impression was the website’s content:

  • The tabs are “About, Blog, Donate, Effectively, Resources, Grants, Get Involved”. This looks like the standard showcase website of a closed organization, with a call to donate.

  • The first four reading suggestions after the introduction are about charity and career choice. This leads people to think that EA is solely about that.

  • In the introduction, the three main questions are “Which cause/career/charity?”.

I didn’t stop there, and I read more of that website, but it was along those same lines.

Counting me, my friends, and people I met on LW and SSC, this directly led to losing 10-15 potential altruists in the community. Given that we were already interested in applying rationality to changing the world, and that my situation is not isolated (the aforementioned website is the first hit for “Effective Altruism” on Google), I do think that this is an important issue for EA.


Well, about the website, here are some suggestions:

  • Adding an “Open Ideas”/“Open Projects”, “Forum” and/or “Communities” tab. “Get Involved” is the only tab that offers (and only implicitly) some interaction. The new Involvement Guide is a step in the right direction.

  • Putting emphasis on the different communities and approaches. Digging, I’ve seen that there are several communities. However, the most prominent discriminating factor was location. It would be nice to see a presentation of the various approaches to EA, especially in the first resources new members come into contact with.

But more than changing the website, I think what EA lacks is a platform dedicated to collective thinking about new ideas.

Projects don’t happen magically: people think, come up with an idea, think more about that idea, criticize it, and, if all goes well, maybe build a plan out of it, gather, and begin a project together. If we truly want new projects to emerge, having such a platform is of the utmost importance.

The current forum doesn’t cut it: it isn’t meant for that end. It’s easier to build a forum dedicated to that than to try to artificially maintain a balance between “New Ideas” posts and “Information Sharing” posts so that neither gets overshadowed. The same problem applies to existing Reddit boards and Facebook groups.

That platform should contain at least the following :

  • A place where new ideas are posted and criticized. A Reddit board, a Facebook group, a forum.

  • A place where ideas are discussed interactively. An IRC channel, a web chat, a Discord server.

  • A place where ideas/projects are improved collectively and incrementally. A web pad, a Google doc, a Git repository.

  • A basic method for dealing with new ideas and project collaboration. Some formatting, some questions that every idea should answer (What problem does it solve? How critical is it? What’s the solution variance?), a content-deletion policy. A sticky post on the forum, another Google Doc.


  • Do you think such a platform would be useful? Why?

  • Would you be interested in building such a platform? Either technically (by setting up the required tools), marketing-wise (by gathering people) or content-wise (by posting and criticizing ideas).

[Link] The Use and Abuse of Witchdoctors for Life

5 lifelonglearner 24 June 2017 08:59PM

Bi-Weekly Rational Feed

20 deluks917 24 June 2017 12:07AM

===Highly Recommended Articles:

Introducing The Ea Involvement Guide by The Center for Effective Altruism (EA forum) - A huge list of concrete actions you can take to get involved. Every action has a brief description and a link to an article. Each article rates the action on time commitment, duration, familiarity and occupation. Very well put together.

Deep Reinforcement Learning from Human Preferences - An algorithm learns to backflip with 900 bits of feedback from the human evaluator. "One step towards building safe AI systems is to remove the need for humans to write goal functions, since using a simple proxy for a complex goal, or getting the complex goal a bit wrong, can lead to undesirable and even dangerous behavior. In collaboration with DeepMind’s safety team, we’ve developed an algorithm which can infer what humans want by being told which of two proposed behaviors is better."

Build Baby Build by Bryan Caplan - Quote from a paper estimating the high costs of housing restrictions. We should blame the government, especially local government. The top alternate theory is wrong. Which regulations are doing the damage? It's complicated. Functionalists are wrong. State government is our best hope.

The Use And Abuse Of Witchdoctors For Life by Lou (sam[]zdat) - Anti-bullet magic and collective self-defense. Cultural evolution. People don't directly believe in anti-bullet magic, they believe in elders and witch doctors. Seeing like a State. Individual psychology is the foundation. Many psychologically important customs couldn't adapt to the marketplace.

S-risks: Why They Are The Worst Existential Risks by Kaj Sotala (lesswrong) - “S-risk – One where an adverse outcome would bring about severe suffering on a cosmic scale, vastly exceeding all suffering that has existed on Earth so far.” Why we should focus on s-risks. Probability: artificial sentience, lack of communication, badly aligned AI and competitive pressures. Tractability: relationship with x-risk. Going meta, cooperation. Neglectedness: little attention; people conflate x-risk and s-risk.

Projects Id Like To See by William MacAskill (EA forum) - CEA is giving out £100K grants. General types of applications. EA outreach and Community, Anti-Debates, Prediction Tournaments, Shark Tank Discussions, Research Groups, Specific Skill Building, New Organizations, Writing.

The Battle For Psychology by Jacob Falkovich (Put A Number On It!) - An explanation of 'power' in statistics and why it's always good. Low power means that positive results are mostly due to chance. Extremely bad incentives and research practices in psychology. Studying imaginary effects. Several good images.

Identifying Sources Of Cost Disease by Kurt Spindler - Where is the money going: Administration, Increased Utilization, Decreased Risk Tolerance. What market failures are in effect: Unbounded Domains, Signaling and Competitive Pressure (ex: military spending), R&D doesn't cut costs it creates new ways to spend money, individuals don't pay. Some practical strategies to reduce cost disease.


To Understand Polarization Understand The Extent Of Republican Failure by Scott Alexander - Conservative voters voted for “smaller government”, “fewer regulations”, and “less welfare state”. Their reps control most branches of the government. They got more of all three (probably thanks to cost disease).

Against Murderism by Scott Alexander - Three definitions of racism. Why 'Racism as motivation' fits best. The futility of blaming the murder rate in the USA on 'murderism'. Why it's often best to focus on motivations other than racism.

Open Thread Comment by John Nerst (SSC) - Bi-weekly public open thread. I am linking to a very interesting comment. The author made a list of the most statistically over-represented words in the SSC comment section.

Some Unsong Guys by Scott Alexander (Scratchpad) - Pictures of Unsong Fan Art.

Silinks Is Golden by Scott Alexander - Standard SSC links post.

What is Depression Anyway: The Synapse Hypothesis - Six seemingly distinct treatments for depression. How at least six can be explained by considering synapse generation rates. Skepticism that this method can be used to explain anything since the body is so inter-connected. Six points that confuse Scott and deserve more research. Very technical.


Idea For Lesswrong Video Tutoring by adamzerner (lesswrong) - Community Video Tutoring. Sign up to either give or receive tutoring. Teaching others is a good way to learn and lots of people enjoy teaching. Hopefully enough people want to learn similar things. This could be a great community project and I recommend taking a look.

Regulatory Arbitrage For Medical Research What I Know So Far by Sarah Constantin (Otium) - Economics of avoiding the USA/FDA. Lots of research is already conducted in other countries. The USA is too large of a market not to sell to. Investors aren't interested in cheap preliminary trials. Other options: supplements, medical tourism, clinic ships, cryptocurrency.

Responses To Folk Ontologies by Ferocious Truth - Folk ontology: Concepts and categories held by ordinary people with regard to an idea. Especially pre-scientific or unreflective ones. Responses: Transform/Rescue, Deny or Restrict/Recognize. Rescuing free will and failing to rescue personal identity. Rejecting objective morality. Restricting personal identity and moral language. When to use each approach.

A Tangled Task Future by Robin Hanson - We need to untangle the economy to automate it. What tasks are heavily tangled and which are not. Ems and the human brain as a legacy system. Human brains are well-integrated and good at tangled tasks.

Epistemic Spot Check Update by Aceso Under Glass - Reviewing self-help books. Properties of a good self-help model: As simple as possible but not more so, explained well, testable on a reasonable timescale, seriously handles the fact that the techniques might not work, useful. The author would appreciate feedback.

Skin In The Game by Elo (BearLamp) - Armchair activism and philosophy. Questions to ask yourself about your life. Actually do the five minute exercise at the end.

Momentum Reflectiveness Peace by Sarah Constantin (Otium) - Rationality requires a reflective mindset; a willingness to change course and consider how things could be very different. Momentum, keeping things as they are except more so, is the opposite of reflectivity. Cultivating reflectiveness: rest, contentment, considering ideas lightly and abstractly. “Turn — slowly.”

The Fallacy Fork Why Its Time To Get Rid Of by theFriendlyDoomer (r/SSC) - "The main thesis of our paper is that each and every fallacy in the traditional list runs afoul of the Fallacy Fork. Either you construe the fallacy in a clear-cut and deductive fashion, which means that your definition has normative bite, but also that you hardly find any instances in real life; or you relax your formal definition, making it defeasible and adding contextual qualifications, but then your definition loses its teeth. Your “fallacy” is no longer a fallacy."

Instrumental Rationality 1 Starting Advice by lifelonglerner (lesswrong) - "This is the first post in the Instrumental Rationality Sequence. This is a collection of four concepts that I think are central to instrumental rationality: caring about the obvious, looking for practical things, practicing in pieces, and realistic expectations."

Concrete Ways You Can Help Make The Community Better by deluks917 (lesswrong) - Write more comments on blog posts and non-controversial posts on lw and r/SSC. Especially consider commenting on posts you agree with. People are more likely to comment if other people are posting high quality comments. Projects: Gaming Server, aggregate tumblr effort-posts, improve lesswrong wiki, leadership in local rationalist group

Daring Greatly by Bayesian Investor - Fairly positive book review; some chapters were valuable and it was an easy read. How to overcome shame and how it differs from guilt. Perfectionism vs healthy striving. If you stop caring about what others think, you lose your capacity for connection.

A Call To Adventure by Robin Hanson - Meaning in life can be found by joining or starting a grand project. Two possible adventures: Promoting and implementing futarchy (decision making via prediction markets). Getting a real understanding of human motivation.

Thought Experiment Coarsegrained Vr Utopia by cousin_it (lesswrong) - Assume an AI is running a VR simulation that is hooked up to actual human brains. This means the AI only has to simulate nature at a coarse-grained level. How hard would it be to make that virtual reality a utopia?

[The Rationalist-sphere and the Lesswrong Wiki](http://lesswrong.com/r/discussion/lw/p4y/the_rationalistsphere_and_the_less_wrong_wiki/) - What's next for the Lesswrong wiki. A distillation of Lesswrong. Fully indexing the diaspora. A list of communities. Spreading rationalist ideas. Rationalist Research.

Deep Reinforcement Learning from Human Preferences - An algorithm learns to backflip with 900 bits of feedback from the human evaluator. "One step towards building safe AI systems is to remove the need for humans to write goal functions, since using a simple proxy for a complex goal, or getting the complex goal a bit wrong, can lead to undesirable and even dangerous behavior. In collaboration with DeepMind’s safety team, we’ve developed an algorithm which can infer what humans want by being told which of two proposed behaviors is better."

Where Do Hypotheses Come From by c0rw1n (lesswrong) - Link to a 25 page article. "Why are human inferences sometimes remarkably close to the Bayesian ideal and other times systematically biased? In particular, why do humans make near-rational inferences in some natural domains where the candidate hypotheses are explicitly available, whereas tasks in similar domains requiring the self-generation of hypotheses produce systematic deviations from rational inference. We propose that these deviations arise from algorithmic processes approximating Bayes’ rule."

The Precept Of Universalism by Hivewired - "Universality, the idea that all humans experience life in roughly the same way. Do not put things or ideas above people. Honor and protect all peoples." Eight points expanding on how to put people first and honor everyone.

We Are The Athenians Not The Spartans by wubbles (lesswrong) - "Our values should be Athenian: individualistic, open, trusting, enamored of beauty. When we build social technology, it should not aim to cultivate values that stand against these. High trust, open, societies are the societies where human lives are most improved."


Updating My Risk Estimate of Geomagnetic Big One by Open Philanthropy - Risk from magnetic storms caused by the sun. "I have raised my best estimate of the chance of a really big storm, like the storied one of 1859, from 0.33% to 0.70% per decade. And I have expanded my 95% confidence interval for this estimate from 0.0–4.0% to 0.0–11.6% per decade."

Links by GiveDirectly - Eight Media articles on Cash Transfers, Basic Income and Effective Altruism.

Are Givewells Top Charities The Best Option For Every Donor by The GiveWell Blog - Why GiveWell-recommended charities are a good option for most donors. Which donors have better options: donors with lots of time, high trust in a particular institution, or values different from GiveWell's.

A New President of GWWC by Giving What We Can - Julia Wise is the New president of Giving What We Can.

Angst Ennui And Guilt In Effective Altruism by Gordon (Map and Territory) - Learning about existential risk can cause psychological harm. Guilt about being unable to help solve X-risk. Akrasia. Reasons to not be guilty: comparative advantage, ability is unequally distributed.

S-risks: Why They Are The Worst Existential Risks by Kaj Sotala (lesswrong) - “S-risk – One where an adverse outcome would bring about severe suffering on a cosmic scale, vastly exceeding all suffering that has existed on Earth so far.” Why we should focus on s-risks. Probability: artificial sentience, lack of communication, badly aligned AI, and competitive pressures. Tractability: relationship with x-risk, going meta, cooperation. Neglectedness: little attention; people conflate s-risks with x-risks.

Update On Sepsis Donations Probably Unnecessary by Sarah Constantin (Otium) - Sarah C had asked people to crowdfund a sepsis RCT. The trial will probably get funded by charitable foundations. Diminishing returns. Finding good giving opportunities is hard and talking to people in the know is a good way to find things out.

What Is Valuable About Effective Altruism by Owen_Cotton-Barratt (EA forum) - Why should people join EA? The impersonal and personal perspectives. Tensions and synergies between the two perspectives. Bullet point conclusions for researchers, community leaders and normal members.

QALYs/$ Are More Intuitive Than $/QALYs by ThomasSittler (EA forum) - QALYs/$ are preferable to $/QALYs. Visual representations on graphs. Avoiding small numbers by re-normalizing to QALYs/$10K.

Introducing The Ea Involvement Guide by The Center for Effective Altruism (EA forum) - A huge list of concrete actions you can take to get involved. Every action has a brief description and a link to an article. Each article rates the action on time commitment, duration, familiarity and occupation. Very well put together.

Cash is King by GiveDirectly - Eight media articles about Effective Altruism and Cash transfers.

Separating GiveWell and the Open Philanthropy Project by The GiveWell Blog - The GiveWell perspective. Context for the sale. Effect on donors who rely on GiveWell. Organization changes at GiveWell. Steps taken to sell Open Phil assets. The new relationship between GiveWell and Open Phil.

Open Philanthropy Project is Now an Independent Organization by Open Philanthropy - The evolution of Open Phil. Why Open Phil should split from GiveWell. LLC structure.

Projects Id Like To See by William MacAskill (EA forum) - CEA is giving out £100K grants. General types of applications. EA outreach and Community, Anti-Debates, Prediction Tournaments, Shark Tank Discussions, Research Groups, Specific Skill Building, New Organizations, Writing.

===Politics and Economics:

No Us School Funding Is Actually Somewhat Progressive by Random Critical Analysis - Many people think that wealthy public school districts spend more per pupil. This information is outdated. Within most states spending is higher on disadvantaged students. This is despite the fact that school funding is mostly local. Extremely thorough with loads of graphs.

Build Baby Build by Bryan Caplan - Quote from a paper estimating the high costs of housing restrictions. We should blame the government, especially local government. The top alternate theory is wrong. Which regulations are doing the damage? It's complicated. Functionalists are wrong. State government is our best hope.

Identifying Sources Of Cost Disease by Kurt Spindler - Where is the money going: Administration, Increased Utilization, Decreased Risk Tolerance. What market failures are in effect: Unbounded Domains, Signaling and Competitive Pressure (ex: military spending), R&D doesn't cut costs it creates new ways to spend money, individuals don't pay. Some practical strategies to reduce cost disease.

The Use And Abuse Of Witchdoctors For Life by Lou (sam[]zdat) - Anti-bullet magic and collective self-defense. Cultural evolution. People don't directly believe in anti-bullet magic, they believe in elders and witch doctors. Seeing like a State. Individual psychology is the foundation. Many psychologically important customs couldn't adapt to the marketplace.

Greece Gdp Forecasting by João Eira (Lettuce be Cereal) - Transforming the Data. Evaluating the Model with Exponential Smoothing, Bagged ETS and ARIMA. The regression results and forecast.

Links 9 by Artir (Nintil) - Economics, Psychology, Artificial Intelligence, Philosophy and other links.

Amazon Buying Whole Foods by Tyler Cowen - Quotes from Matt Yglesias, Alex Tabarrock, Ross Douthat and Tyler. “Dow opens down 10 points. Amazon jumps 3% after deal to buy Whole Foods. Walmart slumps 7%, Kroger plunges 16%”

Historical Returns Market Portfolio by Tyler Cowen - From 1960 to 2015 the global market portfolio realized a compounded real return of 4.38% with a std of 11.6%. Investors beat savers by 3.24%. Link to the original paper.

Trust And Diversity by Bryan Caplan - Robert Putnam's work is often cited as showing the costs of diversity. However, Putnam's work shows the negative effect of diversity on trust is rather modest. On the other hand, Putnam found multiple variables that are much more correlated with trust (such as home ownership).

Why Optimism is More Rational than Pessimism by TheMoneyIllusion - Splitting 1900-2017 into Good and Bad periods. We learn something from our mistakes. Huge areas where things have improved long term. Top 25 movies of the 21st Century. Artforms in decline.

Is Economics Science by Noah Smith - No one knows what a science is. Theories that work (4 examples). The empirical and credibility revolutions. Why we still need structural models. Ways economics could be more scientific. Data needs to kill bad theories. Slides from Noah's talk are included and worth viewing but assume familiarity with the economics profession.


Clojure Concurrency And Blocking With Coreasync by Eli Bendersky - Concurrent applications and blocking operations using core.async. Most of the article compares threads and go-blocks. Lots of code and well presented test results.

Optopt by Ben Kuhn - Startup options are surprisingly valuable once you factor in that you can quit if the startup does badly. A mathematical model of the value of startup options and the optimal time to quit. The ability to quit raised the option value by over 50%. The sensitivity of the analysis with respect to parameters (opportunity cost, volatility, etc).

Epistemic Spot Check: The Demon Under The Microscope by Aceso Under Glass - Biography of the man who invented sulfa drugs, the early anti-bacteria treatments which were replaced by penicillin. Interesting fact checks of various claims.

Sequential Conversion Rates by Chris Stucchio - Estimating success rates when you have noisy reporting. The article is a sketch of how the author handled such a problem in practice.

Set Theory Problem by protokol2020 - Bring down ZFC. Aleph-zero spheres and Aleph-one circles.

Connectome Specific Harmonic Waves On Lsd by Qualia Computing - Transcript and video of a talk on neuroimaging the brain on LSD. "Today thanks to the recent developments in structural neuroimaging techniques such as diffusion tensor imaging, we can trace the long-distance white matter connections in the brain. These long-distance white matter fibers (as you see in the image) connect distant parts of the brain, distant parts of the cortex."

Approval Maximizing Representations by Paul Christiano - Representing images. Manipulation representations. Iterative and compound encodings. Compressed representations. Putting it all together and bootstrapping reinforcement learning.

Travel by Ben Kuhn - Advice for traveling frequently. Sleeping on the plane and taking redeyes. Be robust. Bring extra clothes, medicine, backup chargers and things to read when delayed. Minimize stress. Buy good luggage and travel bags.

Learning To Cooperate, Compete And Communicate by Open Ai - Competitive multi-agent models are a step towards AGI. An algorithm for centralized learning and decentralized execution in multi-agent environment. Initial Research. Next Steps. Lots of visuals demonstrating the algorithm in practice.

Openai Baselines Dqn by Open Ai - "We’re open-sourcing OpenAI Baselines, our internal effort to reproduce reinforcement learning algorithms with performance on par with published results." Best practices we use for correct RL algorithm implementations. First release: DQN and three of its variants, algorithms developed by DeepMind.

Corrigibility by Paul Christiano - Paul defines the sort of AI he wants to build; he refers to such systems as "corrigible". Paul argues that a sufficiently corrigible agent will become more corrigible over time. This implies that friendly AI is not a narrow target but a broad basin of attraction. Corrigible agents prefer to build other agents that share the overseer's preferences, not their own. Predicting that the overseer wants me to turn off when he hits the off-button is not complicated relative to being deceitful. Comparison with Eliezer's views.

G Reliant Skills Seem Most Susceptible To Automation by Freddie deBoer - Computers already outperform humans in g-loaded domains such as Go and Chess. Many g-loaded jobs might get automated. Jobs involving soft or people skills are resilient to automation.

Persona 5: Spoiler Free Review - Persona games are long but deeply worthwhile if you enjoy the gameplay and the story. Persona 5 is much more polished but Persona 3 has a more meaningful story and more interesting decisions. Tips for Maximum Enjoyment of Persona 5. Very few spoilers.

Sea Problem by protokol2020 - A fun problem. Measuring sea level rise.


83 The Politics Of Emergency by Waking Up with Sam Harris - Fareed Zakaria. "His career as a journalist, Samuel Huntington's "clash of civilizations," political partisanship, Trump, the health of the news media, the connection between Islam and intolerance"

On Risk, Statistics, And Improving The Public Understanding Of Science by 80,000 Hours - A lifetime of communicating science. Early career advice. Getting people to intuitively understand hazards and their effect on life expectancy.

Ed Luce by Tyler Cowen - The Retreat of Western Liberalism "What a future liberalism will look like, to what extent current populism is an Anglo-American phenomenon, Modi’s India, whether Kubrick, Hitchcock, and John Lennon are overrated or underrated, and what it is like to be a speechwriter for Larry Summers."

Thomas Ricks by EconTalk - Thomas Ricks' book Churchill and Orwell. Overlapping lives and the fight to preserve individual liberty.

The End Of The World According To Isis by Waking Up with Sam Harris - Graeme Wood. His experience reporting on ISIS, the myth of online recruitment, the theology of ISIS, the quality of their propaganda, the most important American recruit to the organization, the roles of Jesus and the Anti-Christ in Islamic prophecy, free speech and the ongoing threat of jihadism.

Jason Khalipa by Tim Ferriss - "8-time CrossFit Games competitor, a 3-time Team USA CrossFit member, and — among other athletic feats — he has deadlifted 550 pounds, squatted 450 pounds, and performed 64 pullups at a bodyweight of 210 pounds."

Dario Amodei, Paul Christiano & Alex Ray. - 80K hours released a detailed guide to careers in AI policy. " We discuss the main career paths; what to study; where to apply; how to get started; what topics are most in need of research; and what progress has been made in the field so far." Transcript included.

Don Boudreaux Emergent Order by EconTalk - "Why is it that people in large cities like Paris or New York City sleep peacefully, unworried about whether there will be enough bread or other necessities available for purchase the next morning? No one is in charge--no bread czar. No flour czar."

Tania Lombrozo On Why We Evolved The Urge To Explain by Rational Speaking - "Research on what purpose explanation serves -- i.e., why it helps us more than our brains just running prediction algorithms. Tania and Julia also discuss whether simple explanations are more likely to be true, and why we're drawn to teleological explanations"

Idea for LessWrong: Video Tutoring

10 adamzerner 23 June 2017 09:40PM

Idea: we coordinate to teach each other things via video chat.

  • We (mostly) all like learning. Whether it be for fun, curiosity, a stepping stone towards our goals.
  • My intuition is that there's a lot of us who also enjoy teaching. I do, personally.
  • Enjoyment aside, teaching is a good way of solidifying one's knowledge.
  • Perhaps there would be positive unintended consequences. Eg. socially.
  • Why video? a) I assume that medium is better for education than simply text. b) Social and motivational benefits, maybe. A downside to video is that some may find it intimidating.
  • It may be nice to evolve this into a group project where we iteratively figure out how to do a really good job teaching certain topics.
  • I see the main value in personalization, as opposed to passive lectures/seminars. Those already exist, and are plentiful for most topics. What isn't easily accessible is personalization. With that said, I figure it'd make sense to have about 5 learners per teacher.

So, this seems like something that would be mutually beneficial. To get started, we'd need:

  1. A place to do this. No problem: there's Hangouts, Skype, https://talky.io/, etc.
  2. To coordinate topics and times.

Personally, I'm not sure how much I can offer as far as doing the teaching. I worked as a web developer for 1.5 years and have been teaching myself computer science. I could be helpful to those unfamiliar with those fields, but probably not too much help for those already in the field and looking to grow. But I'm interested in learning about lots of things!

Perhaps a good place to start would be to record in some spreadsheet, a) people who want to teach, b) what topics, and c) who is interested in being a Learner. Getting more specific about who wants to learn what may be overkill, as we all seem to have roughly similar interests. Or maybe it isn't.

If you're interested in being a Learner or a Teacher, please add yourself to this spreadsheet.

[Link] The evolution of superstitious and superstition-like behaviour

3 c0rw1n 23 June 2017 04:14PM

[Link] Putanumonit: What statistical power means, and why I'm terrified about psychology

11 Jacobian 21 June 2017 06:29PM

[Link] Three Responses to Incorrect Folk Ontologies

8 J_Thomas_Moros 21 June 2017 02:26PM

Priors Are Useless

1 DragonGod 21 June 2017 11:42AM


This post contains LaTeX. Please install TeX the World for Chromium or a similar TeX typesetting extension to view this post properly.

Priors are Useless.

Priors are irrelevant. Given two different prior probabilities [;Pr_{i_1};] and [;Pr_{i_2};] for some hypothesis [;H_i;],
let their respective posterior probabilities be [;Pr_{i_{z1}};] and [;Pr_{i_{z2}};].
After a sufficient number of experiments, the posterior probabilities satisfy [;Pr_{i_{z1}} \approx Pr_{i_{z2}};].
Or more formally:
[;\lim_{n \to \infty} \frac{Pr_{i_{z1}}}{Pr_{i_{z2}}} = 1;],
where [;n;] is the number of experiments.
Therefore, priors are useless.
The above is true because, as we carry out subsequent experiments, the posterior probability [;Pr_{i_{z1_j}};] gets closer and closer to the true probability [;Pr_i;] of the hypothesis. The same holds for [;Pr_{i_{z2_j}};]. As such, if you have access to a sufficient number of experiments, the initial prior probability you assigned the hypothesis is irrelevant.
To demonstrate: [table of posterior probabilities over successive trials, with accompanying graph]
In the example above, the true probability [;Pr_i;] of hypothesis [;H_i;] is [;0.5;], and as we see, after a sufficient number of trials the different [;Pr_{i_{z1_j}};]s get closer to [;0.5;].
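This convergence is easy to check numerically. Below is a minimal sketch, assuming the experiment is a series of coin flips with true probability 0.5 and using two very different Beta priors to stand in for the two initial probabilities; the specific prior parameters here are illustrative, not from the post:

```python
import random

def posterior_mean(alpha, beta, flips):
    # Beta(alpha, beta) prior on the heads probability.
    # Conjugate update: each head adds 1 to alpha, each tail adds 1 to beta;
    # the posterior mean is (alpha + heads) / (alpha + beta + n).
    heads = sum(flips)
    return (alpha + heads) / (alpha + beta + len(flips))

random.seed(0)
true_p = 0.5  # the true probability, as in the example above
flips = [1 if random.random() < true_p else 0 for _ in range(100_000)]

# Two wildly different priors: one near-certain of heads, one of tails.
optimist = posterior_mean(30, 1, flips)   # prior mean roughly 0.97
pessimist = posterior_mean(1, 30, flips)  # prior mean roughly 0.03

# After enough flips, both posteriors agree and sit near the true value.
print(optimist, pessimist)
```

With 100,000 flips the two posterior means differ by only 29/(31 + 100000), about 0.0003, even though the priors started far apart.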
To generalize from my above argument:

If you have enough information, your initial beliefs are irrelevant—you will arrive at the same final beliefs.
Because I can’t resist, a corollary to Aumann’s agreement theorem.
Given sufficient information, two rationalists will always arrive at the same final beliefs irrespective of their initial beliefs.

The above can be generalized to what I call the “Universal Agreement Theorem”:

Given sufficient evidence, all rationalists will arrive at the same set of beliefs regarding a phenomenon irrespective of their initial set of beliefs regarding said phenomenon.


Exercise For the Reader

Prove [;\lim_{n \to \infty} \frac{Pr_{i_{z1}}}{Pr_{i_{z2}}} = 1;].

[Link] S-risks: Why they are the worst existential risks, and how to prevent them

16 Kaj_Sotala 20 June 2017 12:34PM

[Link] Angst, Ennui, and Guilt in Effective Altruism

2 gworley 19 June 2017 08:38PM

Open thread, June. 19 - June. 25, 2017

1 Elo 18 June 2017 11:10PM
If it's worth saying, but not worth its own post, then it goes here.

Notes for future OT posters:

1. Please add the 'open_thread' tag.

2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)

3. Open Threads should start on Monday, and end on Sunday.

4. Unflag the two options "Notify me of new top level comments on this article" and "

Instrumental Rationality 1: Starting Advice

13 lifelonglearner 18 June 2017 06:43PM

Starting Advice

[This is the first post in the Instrumental Rationality Sequence. It's a collection of four concepts that I think are central to instrumental rationality: caring about the obvious, looking for practical things, practicing in pieces, and realistic expectations.

Note that these essays are derivative of things I've written here before, so there may not be much new content in this post. (But I wanted to get something out as it'd been about a month since my last update.)

My main goal with this collection was to polish / crystallize past points I've made. If things here are worded poorly, unclear, or don't seem useful, I'd really appreciate feedback to try and improve.]


In Defense of the Obvious:

[As advertised.]

A lot of the things I’m going to go over in this sequence are sometimes going to sound obvious, boring, redundant, or downright tautological. This essay is here to convince you that you should try to listen to the advice anyway, even if it sounds stupidly obvious.

First off, our brains don’t always see all the connections at once. Thus, even if some given advice is apparently obvious, you still might be learning things.

For example, say someone tells you, “If you want to exercise more, then you should probably exercise more. Once you do that, you’ll become the type of person who exercises more, and then you’ll likely exercise more.”

The above advice might sound pretty silly, but it may still be useful. Often, our mental categories for “exercise” and “personal identity” are in different places. Sure, it’s tautologically true that someone who exercises becomes a person who exercises more. But if you’re not explicitly thinking about how your actions change who you are, then there’s likely still something new to think about.

Humans are often weirdly inconsistent with our mental buckets—things that logically seem like they “should” be lumped together often aren't. By paying attention to even tautological advice like this, you’re able to form new connections in your brain and link new mental categories together, perhaps discovering new insights that you “already knew”.

Secondly, obvious advice tends to be low-hanging fruit. If your brain is pattern-matching something as “boring advice” or “obvious”, you’ve likely heard it many times before.

For example, you can probably guess the top 5 things on any “How to be Productive” list—make a schedule, remove distractions, take periodic breaks, etc. etc. You can almost feel your brain roll its metaphorical eyes at such dreary, well-worn advice.

But if you’ve heard these things repeated many times before, this is also good reason to suspect that, at least for a lot of people, it works. Meaning that if you aren’t taking such advice already, you can probably get a boost by doing so.

If you just did those top 5 things, you’d probably already be quite the productive person.

The trick, then, is to actually do them. That means doing the obvious thing.


Lastly, it can be easy to discount obvious advice when you’ve seen too much of it. When you’re bombarded with boring-seeming advice from all angles, it’s easy to become desensitized.

What I mean is that it’s possible to dismiss obvious advice outright because it sounds way too simple. “This can’t possibly work,” your brain might say, “The secret to getting things done must be more complex!”

There’s something akin to the hedonic treadmill happening here where, after having been exposed to all the “normal” advice, you start to seek out deeper and deeper ideas in search of some sort of mental high. What happens is that you become a kind of self-help junkie.

You can end up craving the bleeding edge of crazy ideas because literally nothing else seems worthwhile. You might end up dismissing normal helpful ideas simply because they’re not paradigm-crushing, mind-blowing, or mentally stimulating enough.

At which point, you’ve adopted quite the contrarian stance—you reject the typical idea of advice on grounds of its obviousness alone.

If this describes you, might I tempt you with the meta-contrarian point of view?

Here’s the sell: One of the secrets to winning at life is looking at obvious advice, acknowledging that it’s obvious, and then doing it anyway.

(That’s right, you can join the elite group of people who scoff at those who scoff at the obvious!)

You can both say, “Hey, this is pretty simple stuff I’ve heard a thousand times before,” as well as say, “Hey, this is pretty useful stuff I should shut up and do anyway even if it sounds simple because I’m smart and I recognize the value here.”

At some point, being more sophisticated than the sophisticates means being able to grasp the idea that not all things have to be hyper complex. Oftentimes, the trick to getting something done is simply to get started and do it.

Because some things in life really are obvious.

Hunting for Practicality:

[This is about looking for ways to have any advice you read be actually useful, by having it apply to the real world. ]

Imagine someone trying to explain exactly what the mitochondria does in the cell, and contrast that to someone trying to score a point in a game of basketball.

There’s something clearly different about what each person is trying to do, even if we lumped both under the label of “learning” (one is learning about cells and the other is learning about basketball).

In learning, this divide is often drawn between declarative and procedural knowledge.

Declarative knowledge is like the student trying to puzzle out what the mitochondria does; it’s about what you know.

In contrast, procedural knowledge, like the fledgling basketball player, is about what you do.

I bring up this divide because many of the techniques in instrumental rationality will feel like declarative knowledge, but they’ll really be procedural in nature.

For example, say you’re reading something on motivation, and you learn that “Motivation = Energy to do the thing + a Reminder to do the thing + Time to do the thing = E+R+T”.

What’ll likely happen is that your brain will form a new set of mental nodes that connects “motivation” to “E+R+T”. This would be great if I ended up quizzing you “What does motivation equal?” whereupon you’d correctly answer “E+R+T”.

But that’s not the point here! The point is to have the equation actually cash out into the real world and positively affect your actions. If information isn’t changing how you view things or act, then you’re probably not extracting all the value you can.

What that means is figuring out the answer to this question: "How do I see myself acting differently in the future as a result of this information?"

With that in mind, say you generate some examples and make a list.

Your list of real-world actions might end up looking like:

1) Remembering to stay hydrated more often (Energy)

2) Using more Post-It notes as memos (Reminder)

3) Start using Google Calendar to block out chunks of time (Time).

The point is to be always on the lookout for ways to see how you can use what you’re learning to inform your actions. Learning about all these things is only useful if you can find ways to apply them. You want to do more than have empty boxes that link concepts together. It’s important to have those boxes linked up to ways you can do better in the real world.

You want to actually put in some effort trying to answer the question of practicality.




Actually Practicing:

[This is about knowing the nuances of little steps behind any sort of self-improvement skill you learn, and how those little steps are important when learning the whole.]

So on one level, using knowledge from instrumental rationality is about how you take declarative-seeming information and find ways to actually get real-world actions out of it. That’s important.

But it’s also important to note that the very skill of “Generating Examples”—the thing you did in the above essay to even figure out which actions can fit in the above equation to fill in the blanks of E, R, and T—is itself a mental habit that requires procedural knowledge.

What I mean is that there’s a subtler thing that’s happening inside your head when you try to come up with examples—your brain is doing something—and this “something” is important.  

It’s important, I claim, because if we peer a little more deeply at what it means for your brain to generate examples, we’ll come away with a list of steps that will feel a lot like something a brain can do, a prime example of procedural knowledge.

For example, we can imagine a magician trying to learn a card trick. They go through the steps. First they need to spread the cards. Then comes the secret move. Finally comes the final reveal of the selected card in the magician’s pocket.

What the audience member sees is the full finished product. And indeed, the magician who’s practiced enough will also see the same thing. But it’s not until the magician goes through all the steps and understands how all the steps flow together to form the whole card trick that they’re ready to perform.

The idea here is to describe any mental skill with enough granularity and detail, at the 5-second level, that you’d be able both to go through the same steps a second time and to teach them to someone else. Being able to take skills and chunk them into smaller pieces thus forms another core part of learning.


Realistic Expectations:

[An essay about having realistic expectations and looking past potentially harmful framing effects.]

There’s this tendency to get frustrated with learning mental techniques after just a few days. I think this is because people miss the declarative vs procedural distinction. (But you hopefully won’t fall prey to it because we’ve covered the distinction now!)

Once we liken it to playing a sport, it becomes much easier to see that any expectation of immediately learning a mental habit is rather silly; no one expects to master tennis in just a week.

So, when it comes to trying to configure your expectations, I suggest that you try to renormalize your expectations by treating learning mental habits more like learning a sport.

Keep that as an analogy, and you’ll likely get fairly well-calibrated expectations for learning all this stuff.

Still, what, then, might be a realistic time frame for learning?

We’ll go over habits in far more detail in a later section, but a rough number for now is approximately two months. You can expect that, on average, it’ll take you about 66 days to ingrain a new habit.

Similarly, instrumental rationality (probably) won’t make you a god. In my experience, studying these areas has been super useful, which is why I’m writing at all. But I would guess that, optimistically, I’ve only roughly doubled my work output.

Of course your own mileage may vary depending on where you are right now, but this serves as the general disclaimer to keep your expectations within the bounds of reality.

Here, the main point is that, even though learning mental habits doesn’t seem like it should be similar to playing a sport, it really is. There’s something here about how first impressions can be rather deceiving.

For example, a typical trap I might fall into is missing the distinction between “theoretically possible” and “realistic”. I end up looking at the supposed 24 hours available to me every day and then beating myself up for not being able to harness all 24 hours to do productive work.

But such a framing of the situation is inaccurate; things like sleep and eating are often very essential to maximizing productivity for the rest of the hours! So when diving in and practicing, try to look a little deeper when setting your expectations.


Concrete Ways You Can Help Make the Community Better

21 deluks917 17 June 2017 03:03AM

There is a TLDR at the bottom

Lots of people really value the lesswrong community but aren't sure how to contribute. The rationalist community can be intimidating. We have a lot of very smart people and the standards can be high. Nonetheless there are lots of concrete ways a normal rationalist can help improve the community. I will focus on two areas: engaging with content, and a list of shovel-ready projects you can get involved in. I will also briefly mention some more speculative ideas at the end of the post.

1) Engaging with Content:

I have spoken to many people I consider great content creators (ex: Zvi, Putanumonit, tristanm). It’s very common for them to wish their articles got more comments and engagement. The easiest thing you can do is make a lesswrong account and use the upvote button. Seeing upvotes really does motivate good writers. This only works for lesswrong/reddit but it makes a difference. I can think of several lw articles with fewer upvotes than the number of people who have personally told me the article was great (ex: norm-one-principle by tristanm [1]).

Good comments tend to be even more appreciated than upvotes, and comments can be left on blog posts. If a post has few comments, then almost any decent quality comment is likely to be appreciated by the author. If you have a question or concern, just ask. Many great authors read all their comments, at least those left in the first few days, and often respond to them. Lots of readers comment very rarely, if at all. 95.1% of people who took the SSC survey comment less than once a month and 73.6% never comment at all [2]. The survey showed that survey takers were a highly engaged group who had read lots of posts. If a blog has very few comments I think you should update heavily towards “it’s a good idea for me to post my comment”.

However, what is most lacking in the rational-sphere is positive engagement with non-controversial content you enjoyed.  Recently the SSC sub-reddit found that about 95% of recent content was either in the culture-war thread or contained in a few threads the community considered low quality (based on vote counts) [3]. You can see a similar effect on lesswrong by considering the Dragon Army post [4]. Most good articles posted recently to lesswrong get around 10 comments or less. The Dragon Army post got over 550. I am explicitly not asking people to avoid posting in controversial threads; doing so would be asking a lot of people. But “engagement” is an important reward mechanism for content creators. I do think we should reward more of the writers we find valuable by responding to them with positive engagement.

It’s often difficult to write a comment on a post that you agree with that isn't just “+1 nice post.” Here are some strategies I have found useful:

- If the post is somewhat theoretical try to apply it in a concrete case. Talk about what difficulties you run into and what seems to work well.

- Talk about how the ideas in the post have helped you personally. For example, you can say that you never understood concept X until you read the post.

- Connect the post to other articles or essays. It’s usually not optimal to just post a link. Either summarize the other article or include a relevant, possibly extended, quote. Reading articles takes time.

- Speculate a little on how the ideas in the article could be extended further.

It’s not just article writers who enjoy people engaging with their work. People who write comments also appreciate getting good responses. Posting high quality comments, including responses to other comments, encourages other people to engage more. You can personally help get a virtuous cycle going. As a side note I am unsure about the relative values of posting a comment directly on a blog vs reposting the blogpost to lesswrong and commenting there. Currently lesswrong is not that inundated with reposts but it could get more crowded in the future. In addition, I think article authors are less likely to read lesswrong comments about their post, but I am not confident in the effect size.

2) Shovel Ready Projects:

-- Set up an online Lesswrong gaming group/server, ideally for a popular game. I have talked to people and Overwatch seems to have a lot of interest. People seemed to think it would really be a blast to play Overwatch with four other rationalists. Another popular idea is Dungeons and Dragons. I am not a gaming expert and lots of games could probably work but I wanted to share the feedback I got. Notably there is already a factorio server [5].

-- Help 'aggregate' the best rationalist Tumblr effort posts. Rat_Tumblr is very big and hard to follow. Effort posts are mixed in with lots of random things. One could also include the best responses. There is no need to do this on a daily basis. You could just have a blog that only reblogs high-quality effort posts. I would personally follow this blog and would be willing to cooperate in whatever ways I could. I also think this blog would bring some "equality" to rat_Tumblr. The structure of tumblr implies that it’s very hard to get readers unless a popular blog interacts with you. People report getting a "year’s worth of activity in a day" when someone like Eliezer or Ozy signal-boosts them. An aggregator would be a useful way for less well known blogs to get attention.

-- Help the lesswrong wiki. Currently a decent fraction of lw-wiki posts are fairly out of date. In general the wiki could be doing some exciting things, such as: distilling Less Wrong, fully indexing the diaspora, listing communities, spreading rationalist ideas, and rationalist research. There is currently a project to modernize the wiki [6]. Even if you don't get involved in the more ambitious parts of the wiki you could re-write an article. Re-writing an article doesn't require much commitment and would provide a concrete benefit to the community. The wiki is prominently linked and the community would get a lot of good PR from a polished wiki.

-- Get involved with effective altruism. The Center for Effective Altruism recently posted a very high-quality involvement guide [7]. It’s a huge list of concrete actions you can take to get involved. Every action has a brief description and a link to an article. Each article rates the action on time commitment, duration, familiarity and occupation. It's very well put together.

-- Get more involved in your local irl rationalist group. Many group leaders (ex: Vaniver) have suggested that it can be very hard to get members to lead things. If you are interested in leadership and have a decent reputation, your local community might need your help.

I would be very interested in comments suggesting other projects/activities rationalists can get involved with.

3) Conclusion 

As a brief aside, I want to mention that I considered writing about outreach. But I don't have tons of experience at outreach and I couldn't really process the data on effective outreach. The subject seems quite complicated. Perhaps someone else has already worked through the evidence. I will however recommend this old article by Paul Christiano (now at OpenAI) [8]. Notably, the camp discussed in this post did eventually come into being. It’s not a comprehensive article but it has some good ideas. This guide to “How to Run a Successful Less Wrong Meetup” [9] is extremely polished and has some interesting material related to outreach and attracting new members.

It’s easy to think your actions can't make a difference in the community, but they can. A surprisingly large number of people see comments on lesswrong or r/SSC. Good comments are highly appreciated. The person you befriend and convince to stick around on lesswrong might be the next Scott Alexander. Unfortunately, a lot of the time gratitude and appreciation never gets expressed; I am personally very guilty on this metric. But we are all in this together and this article only covers a small sample of the ways you can help make the community better.

If you have feedback or want any advice/help and don't want to post in public I would be super happy to get your private messages.


4) TLDR:

- Write more comments on blog posts and non-controversial posts on lw and r/SSC

- Especially consider commenting on posts you agree with

- People are more likely to comment if other people are posting high quality comments.

- Projects: Gaming server, aggregate tumblr effort-posts, improve the lesswrong wiki, help lead your local rationalist group

5) References: 

[1] http://lesswrong.com/r/discussion/lw/p3f/mode_collapse_and_the_norm_one_principle/

[2] http://slatestarcodex.com/2017/03/17/ssc-survey-2017-results/

[3] https://www.reddit.com/r/slatestarcodex/comments/6gc7k8/what_can_be_done_to_make_the_culture_war_thread/

[4] http://lesswrong.com/lw/p23/dragon_army_theory_charter_30min_read/

[5] factorio.cypren.net:34197 . Modpack: http://factorio.cypren.net/files/current-modpack.zip

[6] http://lesswrong.com/r/discussion/lw/p4y/the_rationalistsphere_and_the_less_wrong_wiki/

[7] https://www.effectivealtruism.org/get-involved/

[8] http://lesswrong.com/lw/4v5/effective_rationality_outreach/

[9] http://lesswrong.com/lw/crs/how_to_run_a_successful_less_wrong_meetup/

Announcing AASAA - Accelerating AI Safety Adoption in Academia (and elsewhere)

10 toonalfrink 15 June 2017 06:55PM

AI safety is a small field. It has only about 50 researchers, and it’s mostly talent-constrained. I believe this number should be drastically higher.

A: the missing step from zero to hero

I have spoken to many intelligent, self-motivated people that bear a sense of urgency about AI. They are willing to switch careers to doing research, but they are unable to get there. This is understandable: the path up to research-level understanding is lonely, arduous, long, and uncertain. It is like a pilgrimage.

One has to study concepts from the papers in which they first appeared. This is not easy. Such papers are undistilled. Unless one is lucky, there is no one to provide guidance and answer questions. Then should one come out on top, there is no guarantee that the quality of their work will be sufficient for a paycheck or a useful contribution.

Unless one is particularly risk-tolerant or has a perfect safety net, they will not be able to fully take the plunge.

I believe plenty of measures can be taken to make getting into AI safety more like an "It's a small world" ride:

  • Let there be a tested path with signposts along the way to make progress clear and measurable.

  • Let there be social reinforcement so that we are not hindered but helped by our instinct for conformity.

  • Let there be high-quality explanations of the material to speed up and ease the learning process, so that it is cheap.

B: the giant unrelenting research machine that we don’t use

The majority of researchers nowadays build their careers through academia. The typical story is for an academic to become acquainted with various topics during their study, pick one that is particularly interesting, and work on it for the rest of their career.

I have learned through personal experience that AI safety can be very interesting, and the reason it isn’t so popular yet is all about lack of exposure. If students were to be acquainted with the field early on, I believe a sizable amount of them would end up working in it (though this is an assumption that should be checked).

AI safety is in an innovator phase. Innovators are highly risk-tolerant and have a large amount of agency, which allows them to survive an environment with little guidance, polish or supporting infrastructure. Let us not fall for the typical mind fallacy, expecting less risk-tolerant people to move into AI safety all by themselves. Academia can provide that supporting infrastructure that they need.

AASAA addresses both of these issues. It has two phases:

A: Distill the field of AI safety into a high-quality MOOC: “Introduction to AI safety”

B: Use the MOOC as a proof of concept to convince universities to teach the field




We are bottlenecked for volunteers and ideas. If you'd like to help out, even if just by sharing perspective, fill in this form and I will invite you to the slack and get you involved.

Thought experiment: coarse-grained VR utopia

15 cousin_it 14 June 2017 08:03AM

I think I've come up with a fun thought experiment about friendly AI. It's pretty obvious in retrospect, but I haven't seen it posted before. 

When thinking about what friendly AI should do, one big source of difficulty is that the inputs are supposed to be human intuitions, based on our coarse-grained and confused world models, while the AI's actions are supposed to be fine-grained actions based on the true nature of the universe, which can turn out very weird. That leads to a messy problem of translating preferences from one domain to another, which crops up everywhere in FAI thinking; Wei's comment and Eliezer's writeup are good places to start.

What I just realized is that you can handwave the problem away, by imagining a universe whose true nature agrees with human intuitions by fiat. Think of it as a coarse-grained virtual reality where everything is built from polygons and textures instead of atoms, and all interactions between objects are explicitly coded. It would contain player avatars, controlled by ordinary human brains sitting outside the simulation (so the simulation doesn't even need to support thought).

The FAI-relevant question is: How hard is it to describe a coarse-grained VR utopia that you would agree to live in?

If describing such a utopia is feasible at all, it involves thinking about only human-scale experiences, not physics or tech. So in theory we could hand it off to human philosophers or some other human-based procedure, thus dealing with "complexity of value" without much risk. Then we could launch a powerful AI aimed at rebuilding reality to match it (more concretely, making the world's conscious experiences match a specific coarse-grained VR utopia, without any extra hidden suffering). That's still a very hard task, because it requires solving decision theory and the problem of consciousness, but it seems more manageable than solving friendliness completely. The resulting world would be suboptimal in many ways, e.g. it wouldn't have much room for science or self-modification, but it might be enough to avert AI disaster (!)

I'm not proposing this as a plan for FAI, because we can probably come up with something better. But what do you think of it as a thought experiment? Is it a useful way to split up the problem, separating the complexity of human values from the complexity of non-human nature?

Lloyd's of London and less-than-catastrophic risk

2 fortyeridania 14 June 2017 02:49AM

I recently found that Lloyd's has a number of interesting resources on risk. One is the City Risk Index, the methodology for which comes from Cambridge's Judge Business School.

The key metric is something they call GDP@Risk. Despite the name, it is not simply an application of Value@Risk to GDP. Instead, it is the sum of the expected damage from a given threat (or from a set of threats) during a given time period; in this case, the period is 2015-2025. The threats considered include manmade ones (e.g., cyber attack, oil price shock) and natural ones (e.g., drought, solar storm). The site includes brief case studies for the threats. For example, the "plant epidemic" study focuses on the demise of the Gros Michel banana:

Event: Panama disease outbreak, 1950s

Location: Latin America

Economic cost: Estimated losses across Latin America were around $400m ($2.3bn today) although this figure does not include any of the economic losses caused by unemployment, abandoned villages and unrealised income in the affected region.

Description: The Fusarium oxysporum cubense fungus was first diagnosed in Panama but quickly travelled across Central America.

Damage: The disease wiped out the Gros Michel banana, the principal cultivar at the time, from plantations across the region. Between 1940 and 1960, around 30,000 hectares of Gros Michel plantations were lost in the Ulua Valley of Honduras, and in a decade 10,000 hectares were lost in Suriname and the Quepos area of Costa Rica.

Insight: Gros Michel was replaced in the 1960s by Cavendish, a variety thought to be resistant to the disease. However, a new strain of the pathogen was found to be attacking Cavendish plantations in Southeast Asia in the early 1990s. It has since spread, destroying tens of thousands of hectares across Indonesia and Malaysia, and costing more than $400m in the Philippines alone. There is concern that it could reach Central America and destroy up to 85% of the world’s banana crop. Solutions to contain the disease could include increasing genetic diversity among banana cultivars and developing hybrid varieties with stronger resistance.

As the name implies, the site quantifies risks from these threats at the city level. So which cities are the most at risk from a plant epidemic? They're all in APAC:

  • Hong Kong ($3.83b)
  • Shanghai ($2.89b)
  • Beijing ($2.38b)
  • Bangkok ($2.22b)
  • Jakarta ($2.09b)

These account for 1/6 of the plant-epidemic risk across all 301 cities ($75b).
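Since GDP@Risk is just an expected-damage sum, the aggregation above is easy to reproduce. Here is a minimal Python sketch; the city figures come from the list above, and treating the index's per-city numbers directly as expected damages is my assumption about how the site computes its totals:

```python
# GDP@Risk aggregates by summing expected damage across threats or cities.
# Figures below are the plant-epidemic expected damages ($bn) listed above.
plant_epidemic_risk_bn = {
    "Hong Kong": 3.83,
    "Shanghai": 2.89,
    "Beijing": 2.38,
    "Bangkok": 2.22,
    "Jakarta": 2.09,
}

top5 = sum(plant_epidemic_risk_bn.values())  # total expected damage, $bn
share = top5 / 75.0                          # fraction of the 301-city total
print(f"top 5 cities: ${top5:.2f}bn, {share:.1%} of the $75bn total")
```

The five cities sum to $13.41bn, around 18% of the $75bn total, in line with the rough one-sixth figure quoted above.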

Which cities are the most "at risk," all threats considered? Once again, APAC dominates, but with a different set of cities:

  • Taipei
  • Tokyo
  • Seoul
  • Manila
  • New York (not APAC of course)

This kind of information is interesting. It may even be useful as an approximate indication of where to focus risk mitigation efforts. But without more detail (probability distributions? second-order interaction effects? etc.) it's hard to see what role it would play in a serious risk analysis, existential or commercial or otherwise.

Coda: Despite their application to less-than-existential risks and the superficiality of this particular resource (it is a marketing tool for Lloyd's, after all), perhaps existential riskologists could benefit from looking at the insurance industry. Has this already been done?

[Link] Learning from Human Preferences - from OpenAI (including Christiano, Amodei & Legg)

7 Dr_Manhattan 13 June 2017 03:52PM

Mathematical System For Calibration

0 DragonGod 13 June 2017 12:01PM

I am working on an article titled "You Can Gain Information Through Psychoanalysing Others", with the central thesis being that, given the probability someone assigns to a proposition and their calibration, you can calculate a Bayesian probability estimate for the truth of that proposition.


For the article, I would need a rigorously mathematically defined system for calculating calibration given someone's past prediction history. I thought of developing one myself, but realised it would be more prudent to first inquire whether one already exists, to avoid reinventing the wheel.


Thanks in advance for your cooperation. :)             
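For what it's worth, one standard starting point (not necessarily the rigorous system the poster is after) is to score calibration with the Brier score and to bucket past predictions by stated probability, using a bucket's observed frequency of truth as the adjusted estimate. A minimal Python sketch; the bucketing scheme and the `history` format are my assumptions:

```python
from collections import defaultdict

def calibration_table(history, bins=10):
    """Bucket past predictions by stated probability and record the observed
    frequency of truth within each bucket.

    history: list of (stated_probability, outcome) pairs, outcome in {0, 1}.
    Returns {bucket_index: (mean_stated_p, observed_frequency, count)}.
    """
    buckets = defaultdict(list)
    for p, outcome in history:
        buckets[min(int(p * bins), bins - 1)].append((p, outcome))
    table = {}
    for b, pairs in buckets.items():
        ps = [p for p, _ in pairs]
        outcomes = [o for _, o in pairs]
        table[b] = (sum(ps) / len(ps), sum(outcomes) / len(outcomes), len(pairs))
    return table

def brier_score(history):
    """Mean squared error of stated probabilities; lower means better calibrated."""
    return sum((p - o) ** 2 for p, o in history) / len(history)

def adjusted_estimate(stated_p, table, bins=10):
    """Replace a stated probability with the observed frequency of truth
    among that person's past predictions in the same bucket, if any."""
    b = min(int(stated_p * bins), bins - 1)
    return table[b][1] if b in table else stated_p
```

For a well-calibrated predictor, the observed frequency in each bucket matches the stated probability, and the adjustment leaves their estimates essentially unchanged.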




I am chronically afflicted with a serious and invariably fatal epistemic disease known as narcissist bias (this is a misnomer as it refers to a broad family of biases). No cure is known yet for narcissist bias, and I’m currently working on cataloguing and documenting the disease in full using myself as a test case. This disease affects how I present and articulate my points—especially in written text—such that I assign a Pr of > 0.8 that somebody would find this post condescending, self-aggrandising, grandiose or otherwise deluded. This seems to be a problem with all my writing, and a cost of living with the condition I guess. I apologise in advance for any offence received, and inform that I do not intend to offend anyone or otherwise hurt their sensibilities.

The Rationalistsphere and the Less Wrong wiki

13 Deku-shrub 12 June 2017 11:29PM

Hi everyone!

For people not acquainted with me, I'm Deku-shrub, often known online for my cybercrime research, as well as fairly heavy involvement in the global transhumanist movement with projects like the UK Transhumanist Party and the H+Pedia wiki.

For almost 2 years now, on and off, I have been trying to grok what Less Wrong is about, but I've shirked reading all the sequences end to end, instead focusing on the most popular ideas transmitted by Internet cultural osmosis. I'm an amateur sociologist, and understanding Less Wrong falls within my wider project of understanding the different trends within the contemporary and historical transhumanist movement.

I'm very keen to pin down today's shape of the rationalistsphere and its critics, and the best place I have found to do this is on the wiki. Utilising Cunningham's Law at times, I've been building some key navigational and primer articles on the wiki. However, with the very lowest-hanging fruit now addressed, I ask: what next for the wiki?

Distillation of Less Wrong

There was a historical attempt to summarise all major Less Wrong posts: an interesting but incomplete project, and one undertaken without a usefully normalised approach. Ideally, every article would have its own page which could be heavily tagged up with metadata such as themes, importance, length, quality and author. Is this the goal of the wiki?

Outreach and communications

Another major project is to fully index the Diaspora across Twitter, Blogs, Tumblr, Reddit, Facebook etc and improve the flow of information between the relevant sub communities.

You'll probably want to join one of the chat platforms if you're interested in getting involved. Hell, there are even a few memes and probably more to collect.

Rationalist research

I'll admit I'm ignorant of the goal of Arbital, but I do love me a wiki for research. Cross-referencing and citing ideas, merging, splitting, and identifying and fully capturing truly interesting and useful ideas (separating them from fanciful and fleeting ones) is how I've become an expert in a number of fields, just by being the first to assemble All The Things.

Certain ideas like the Paper clip maximizer have some popularity beyond just Less Wrong, but Murder Gandhi doesn't - yet. Polishing these ideas with existing and external references (and maybe blogging about them?) is a great way for the community discussion of yore to make its way into the publications of lazy journalists for dissemination. Hell, RationalWiki has been doing it for years now, they're not the only game in town.


If you have any ideas in these areas, or others just a technical, let me know either here, on the Less Wrong Slack group, or on my talk page and maybe we can make Wikis Great Again? ;)

Am I/Was I a Cultist?

1 DragonGod 12 June 2017 11:21PM

I have been accused repeatedly of being a cultist whenever I wage the rationalist crusade online, and naturally I refute such allegations. However, I cannot deny that I take whatever arguments Yudkowsky makes (whose reasonability I cannot ascertain for myself) as by default true. An example is the Many Worlds Interpretation of quantum mechanics, whose science is far above my head, but which I nonetheless took as truth (the probabilistic variety and not the absolute kind, as such honour I confer only on Mathematics), only to be later enlightened that MWI is not as definitive as Yudkowsky makes it out to be, and is far from a consensus in the scientific community. I was surprised at my blunder, considering that Yudkowsky is far from an authority figure on physics, and even if he were, I was not unaware of Huxley's maxim:

The improver of natural knowledge cannot accept authority as such; for them scepticism is the highest of virtues—blind faith the one unpardonable sin.

This was the first warning flag. Furthermore, around the time I was introduced to RAZ (and the lesswrong website), I started following RAZ with more fervour than I ever did the Bible; I went as far as to proclaim, on multiple occasions:

Rationality: From AI to Zombies is my Quran, and Eliezer Yudkowsky my Muhammed.

Someone who was on the traditional rationality side of the debate repeatedly described me as "lapping up Yudkowsky's words like a cultist on Kool-Aid." I was warned by a genuinely well-meaning friend that I should never let a single book influence my entire life so much, and I must admit: I never was sceptical towards Yudkowsky's words.

Perhaps the biggest alarm bell, was when I completely lost my shit and told the traditional rationalist that I would put him on permanent ignore if he "ever insults the Lesswrong community again. I am in no way affiliated with Eliezer Yudkowsky or the Lesswrong community and would not tolerate insults towards them". That statement was very significant because of its implications:
1. I was willing to tolerate insults towards myself, but not towards Yudkowsky or Lesswrong.
2. I was defensive about Yudkowsky in a way I'd only ever been about Christianity.
3. I elevated Yudkowsky far above myself and put him on a pedestal; when I was a Christian, I believed that I was the best thing since John the Baptist, and would only ever accord such respect to Christ himself.

That I—as narcissistic as I am—considered the public image of someone I've never interacted with to be of greater importance than my own (I wouldn't die to save my country) should have well and truly shocked me.

I did realise I was according too much respect to Yudkowsky, and have dared to disagree with him (my "Rationality as a Value Decider" for example) since. Yet, I never believed Yudkowsky was infallible in the first place, so it may not be much of an improvement. I thought it possessed a certain dramatic irony, that a follower of the lesswrong blog like myself may have become a cultist. Even in my delusions of grandeur, I accord Eliezer Yudkowsky the utmost respect; such that I often mutter in my head—or proclaim out loud for that matter:

Read Yudkowsky, read Yudkowsky, read Yudkowsky—he's the greatest of us all.

As if the irony were not enough, I decided to write this thread after reading "Guardians of Ayn Rand" (and the linked article) and could not help but see the similarities between the two scenarios.

Any Christians Here?

2 DragonGod 12 June 2017 11:18PM

I’m currently atheist; my deconversion was quite the unremarkable event. In September 2015 (I discovered HPMOR in February and RAZ then or in March), I was doing research on logical fallacies to better argue my points for a manga forum, when I came across Rational Wiki; for several of the logical fallacies, they tended to use creationists as examples. One thing led to another (I was curious why Christianity was being so hated, and researched more on the site), and I eventually found a list of how the bible outright contradicts Science and realized the two were mutually incompatible—fundamentalist Christianity at least. I faced my first true crisis of faith and was at a crossroads: “Science or Christianity”? I initially tried to be both a Christian and an atheist, having two personalities for my separate roles, but another Christian pointed out the hypocrisy of my practice, so I chose—and I chose Science. I have never looked back since, though I’ve been tempted to “return to my vomit” and even invented a religion to prevent myself from returning to Christianity and eventually just became an LW cultist. Someone said “I’m predisposed to fervour”; I wonder if that’s true. I don’t exactly have a perfect track record though…
In the times since I departed from the flock, I’ve argued quite vociferously against religion (Christianity in particular (my priors distribute probability over the sample space such that P(Christianity) is higher than the sum of the probabilities of all other religions. Basically either the Christian God or no God at all. I am not entirely sure how rational such an outlook is, especially as the only coherent solution I see to the [paradox of first cause](https://en.wikipedia.org/wiki/Cosmological_argument) is an acausal entity, and YHWH is not compatible with any Demiurge I would endorse.)) and was disappointed by the counter-arguments I would receive. I would often lament about how I wish I could have debated against myself before I deconverted (an argument atheist me would win as history tells). After discovering the Rationalist community, I realised there was a better option—fellow rationalists.
Now this is not a request for someone to [steel man](https://wiki.lesswrong.com/wiki/Steel_man) Christianity; I am perfectly capable of that myself, and the jury is already in on that debate—Christianity lost. Nay, I want to converse and debate with rationalists who despite their Bayesian enlightenment choose to remain in the flock. My faith was shattered under much worse epistemic hygiene than the average lesswronger, and as such I would love to speak with them, to know exactly why they still believe and how. I would love to engage in correspondence with Christian rationalists.
1. Are there any Christian lesswrongers?
2. Are there any Christian rationalists?

Lest I be accused of no true Scotsman fallacy, I will explicitly define the groups of people I refer to:

  1. Lesswronger: Someone who has read/is reading the Sequences and more or less agrees with the content presented therein.
  2. Rationalist: Someone who adheres to the litany of Tarski.

I think my definitions are as inclusive as possible while being sufficiently specific as to filter out those I am not interested in. If you do wish to get in contact with me, you can PM me here or on Lesswrong, or find me through Discord. My user name is “Dragon God#2745”.
Disclaimer: I am chronically afflicted with a serious and invariably fatal epistemic disease known as narcissist bias (this is a misnomer, as it refers to a broad family of biases). No cure is known yet for narcissist bias, and I’m currently working on cataloguing and documenting the disease in full, using myself as a test case. This disease affects how I present and articulate my points—especially in written text—such that I assign a Pr of > 0.8 that somebody would find this post condescending, self-aggrandising, grandiose or otherwise deluded. This seems to be a problem with all my writing, and a cost of living with the condition, I guess. I apologise in advance for any offence received, and note that I do not intend to offend anyone or otherwise hurt their sensibilities.
I think I’ll add this disclaimer to all my posts.

Open thread, June. 12 - June. 18, 2017

1 Thomas 12 June 2017 05:36AM
If it's worth saying, but not worth its own post, then it goes here.

Notes for future OT posters:

1. Please add the 'open_thread' tag.

2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)

3. Open Threads should start on Monday, and end on Sunday.

4. Unflag the two options "Notify me of new top level comments on this article" and "

[Link] Recovering from Failure

0 lifelonglearner 11 June 2017 09:57PM

Why I think worse than death outcomes are not a good reason for most people to avoid cryonics

5 Synaptic 11 June 2017 03:55PM
Content note: torture, suicide, things that are worse than death

TLDR: The world is certainly a scary place if you stop to consider all of the tail risk events that might be worse than death. It's true that there is a tail risk of experiencing one of these outcomes if you choose to undergo cryonics, but it's also true that you risk these events by choosing not to kill yourself right now, or before you are incapacitated by a TBI or neurodegenerative disease. I think these tail risk events are extremely unlikely and I urge you not to kill yourself because you are worried about them, but I also think that they are extremely unlikely in the case of cryonics and I don't think that the possibility of them occurring should stop you from pursuing cryonics. 


Several members of the rationalist community have said that they would not want to undergo cryonics on their legal deaths because they are worried about a specific tail risk: that they might be revived in a world that is worse than death, and that doesn't allow them to kill themselves. For example, lukeprog mentioned this in a LW comment:

Why am I not signed up for cryonics?

Here's my model.

In most futures, everyone is simply dead.

There's a tiny sliver of futures that are better than that, and a tiny sliver of futures that are worse than that.

What are the relative sizes of those slivers, and how much more likely am I to be revived in the "better" futures than in the "worse" futures? I really can't tell.

I don't seem to be as terrified of death as many people are. A while back I read the Stoics to reduce my fear of death, and it worked. I am, however, very averse to being revived into a worse-than-death future and not being able to escape.

I bet the hassle and cost of cryonics disincentivizes me, too, but when I boot up my internal simulator and simulate a world where cryonics is free, and obtained via a 10-question Google form, I still don't sign up. I ask to be cremated instead.

Cryonics may be reasonable for someone who is more averse to death and less averse to worse-than-death outcomes than I am. Cryonics may also be reasonable for someone who has strong reasons to believe they are more likely to be revived in better-than-death futures than in worse-than-death futures. Finally, there may be a fundamental error in my model.


In this post I'm going to explain why I think that, with a few stipulations, the risks of these worse-than-death tail events occurring are close to what you might experience by choosing to undergo your natural lifespan. Therefore, based on revealed preference, in my opinion they are not a good reason for most people to not undergo cryonics. (Although there are, of course, several other reasons for which you might choose to not pursue cryonics, which will not be discussed here.) 


First, some points about the general landscape of the problem, which you are welcome to disagree with: 

- In most futures, I expect that you will still be able to kill yourself. In these scenarios, it's at least worth seeing what the future world will be like so you can decide whether or not it is worth it for you.  
- Therefore, worse-than-death futures are exclusively ones in which you are not able to kill yourself. Here are two commonly discussed scenarios for this, and why I think they are unlikely:  
-- You are revived as a slave for a future society. This is very unlikely for economic reasons: a society with sufficiently advanced technology that it can revive cryonics patients can almost certainly extend lifespan indefinitely and create additional humans at low cost. If society is evil enough to do this, then creating additional humans as slaves is going to be cheaper than reviving old ones with a complicated technology that might not work. 
-- You are revived specifically by a malevolent society/AI that is motivated to torture humans. This is unlikely for scope reasons: any society/AI with sufficiently advanced technology to do this can create/simulate additional persons that fit their interests more precisely. For example, an unfriendly AI would likely simulate all possible human/animal/sentient minds until the heat death of the universe, using up all available resources in the universe in order to do so. Your mind, and minds very similar to yours, would already likely be included in these simulations many times over. In this case, doing cryonics would not actually make you worse off. (Although of course you would already be quite badly off, and we should definitely try our best to avoid this extremely unlikely scenario!) 

If you are worried about a particular scenario, you can stipulate to your cryonics organization that you would like to be removed from preservation in intermediate steps that make that scenario more likely, thus substantially reducing the risk of them occurring. For example, you might say: 

- If a fascist government that tortures its citizens indefinitely and doesn't allow them to kill themselves seems likely to take over the world, please cremate me. 
- If an alien spaceship with likely malicious intentions approaches the earth, please cremate me. 
- If a sociopath creates an AI that is taking over foreign cities and torturing their inhabitants, please cremate me. 

In fact, you probably wouldn't have to ask... in most of these scenarios, the cryonics organization is likely to remove you from preservation in order to protect you from these bad outcomes out of compassion.   

But even with such a set of stipulations or compassionate treatment by your cryonics organization, it's still possible that you could be revived in a worse-than-death scenario. As Brian Tomasik puts it:

> Yeah, that would help, though there would remain many cases where bad futures come too quickly (e.g., if an AGI takes a treacherous turn all of a sudden).

However, I would like to add a further point: there's no guarantee that these bad scenarios couldn't happen too quickly for you to react today, or in the future before your legal death. 

If you're significantly worried about worse than death outcomes happening in a possible future in which you are cryopreserved, then it seems like you should also be worried about one of them happening in the relatively near term as well. It also seems that you should be anti-natalist. 


You might argue that this is still your true rejection, and that while it's true that a malevolent agent could take over the world, faster than you can react, now or in the near future, you would rather trust yourself to kill yourself than trust your cryonics organization to take you out of preservation in these scenarios. 

This is a reasonable response, but one possibility that you might not be considering is that you might undergo a condition that renders you unable to make that decision. 

For example, people can live for decades with traumatic brain injuries, with neurodegenerative diseases, in comas, or with other conditions that prevent them from making the decision to kill themselves but retain the core aspects of their memories and personality that make them "them" (though these may not be accessible because of damage to communication systems in the brain). If aging is slowed, these incapacitating conditions could last for even longer periods of time. 

It's possible that while you're incapacitated by one of these unfortunate conditions, a fascist government, evil aliens, or a malevolent AI will take over. 

These incapacitating conditions are each somewhat unlikely to occur, but if we're talking about tail events, they deserve consideration. And they aren't necessarily any less likely than being revived from cryostasis, which is of course also far from guaranteed to work.

It might sound like my point here is "cryonics: maybe not that much worse than living for years in a completely incapacitating coma?", which is not necessarily the most ringing endorsement of cryonics, I admit. 

But my main point here is that your revealed preferences might indicate that you are more willing to tolerate some very, very small probability of things going horribly wrong than you realize. 

So if you're OK with the risk that you will end up in a worse-than-death scenario even before you do cryonics, then you may also be OK with the risk that you will end up in a worse-than-death scenario after you are preserved via cryonics (both of which seem very, very small to me). Choosing cryonics doesn't "open up" this tail risk that is very bad and would never occur otherwise. It already exists. 

We are the Athenians, not the Spartans

13 wubbles 11 June 2017 05:53AM

The Peloponnesian War was a war between two empires: the sea-dwelling Athenians and the landlubber Spartans. Spartans were devoted to duty and country, living in barracks and drinking the black broth. From birth they trained to be the caste dictators of a slave-owning society, which would annually slay slaves to forestall a rebellion. The most famous Spartan is Leonidas, who died in a heroic last stand delaying the invasion of the Persians. To be a Spartan was to live a life devoted to toughness and duty.

Famous Athenians include Herodotus, the inventor of history; Thucydides; Socrates; Plato; Hippocrates, of the oath medical students still take; and all the Greek playwrights. Attic Greek is the Greek we learn in our Classics courses. Athens was a city where the students of the entire known Greek world would come to learn from the masters, a maritime empire with hundreds of resident aliens, where slavery was comparable to that of the Romans. Luxury apartments, planned subdivisions, sexual hedonism, and free trade made up the life of the Athenian elite.

These two cities had deeply incompatible values. Spartans lived in fear that the Helots would rebel and kill them. Deeply suspicious of strangers, they imposed oligarchies upon the cities they conquered. They were described, by themselves and others, as cautious and slow to act. Athenians by contrast prized speed and risk in their enterprises. Foreigners could live freely in Athens and even established their own temples. The master-and-slave comedies of Athens inspired P. G. Wodehouse.

All intellectual communities are Athenian in outlook. We remember Sparta for its killing and Athens for its art. If we want the rationalist community to tackle the hard problems, if we want a world that is supportive of human values and beauty, if we yearn to end the plagues of humanity, our values should be Athenian: individualistic, open, trusting, enamoured of beauty. When we build social technology, it should not aim to cultivate values that stand against these.

High-trust, open societies are the societies where human lives are most improved. Beyond merely being refuges for the persecuted, they become havens for intellectual discussion and the improvement of human knowledge and practice. It is not a coincidence that one city produced Spinoza, Rubens, Rembrandt, van Dyck, Huygens, van Leeuwenhoek, and Grotius in a few short decades, while dominating the seas and being open to refugees.

Sadly we seem to have lost sight of this in the rationality community. Increasingly we are losing touch as a community with the outside intellectual world, without the impetus to study what has been done before and what the research lines are in statistics, ML, AI, epistemology, biology, etc. While we express that these things are important, the conversation doesn't seem to center around the actual content of these developments. In some cases (statistics) we're actively hostile to understanding some of the developments and limitations of our approach as a matter of tribal marker.

Some projects seem to me to be likely to worsen this, either because they express Spartan values or because they further physical isolation in ways that will act to create more small-group identification.

What can we do about this? Holiday modifications might help with reminding us of our values, but I don't know how we can change the community's outlook more directly. We should strive to stop merely acting on the meta-level and try to act on the object level more as a community. And lastly, we should notice that our values are real and not universal, and that they need defending.

[Link] Where do hypotheses come from?

3 c0rw1n 11 June 2017 04:05AM

Bi-Weekly Rational Feed

11 deluks917 10 June 2017 09:56PM

===Highly Recommended Articles:

Bring Up Genius by Viliam (lesswrong) - An "80/20" translation. Positive motivation. Extreme resistance from the Hungarian government and press. Polgar's five principles. Biting criticism of the school system. Learning in early childhood. Is Genius a gift or curse? Celebrity. Detailed plan for daily instruction. Importance of diversity. Why chess? Teach the chess with love, playfully. Emancipation of women. Polgar's happy family.

The Shouting Class by Noah Smith - The majority of comments come from a tiny minority of commentators. Social media is giving a bullhorn to the people who constantly complain. Negativity is contagious. The level of discord in society is getting genuinely dangerous. The French Revolution. The author criticizes shouters on the Left and Right.

How Givewell Uses Cost Effectiveness Analyses by The GiveWell Blog - GiveWell doesn't take its estimates literally, unless one charity is measured as 2-3x as cost-effective GiveWell is unsure if a difference exists. Cost-effective is however the most important factor in GiveWell's recommendations. GiveWell goes into detail about how it deals with great uncertainty and suboptimal data.

Mode Collapse And The Norm One Principle by tristanm (lesswrong) - Generative Adversarial Networks. Applying the lessons of Machine Learning to discourse. How to make progress when the critical side of discourse is very powerful. "My claim is that any contribution to a discussion should satisfy the "Norm One Principle." In other words, it should have a well-defined direction, and the quantity of change should be feasible to implement."

The Face Of The Ice by Sarah Constantin (Otium) - Mountaineering. Survival Mindset vs Sexual-Selection Mindset. War and the Wilderness. Technical Skill.

Bayes: A Kinda Sorta Masterpost by Nostalgebraist - A long and very well thought-out criticism of Bayesianism. Explanation of Bayesian methodology. Comparison with classical statistics. Arguments for Bayes. The problem of ignored hypotheses with known relations. The problem of new ideas. Where do priors come from? Regularization and insights from machine learning.


SSC Journal Club Ai Timelines by Scott Alexander - A new paper surveying what Ai experts think about Ai progress. Contradictory results about when Ai will surpass humans at all tasks. Opinions on Ai risk, experts are taking the arguments seriously.

Terrorism and Involuntary Commitment by Scott Alexander (Scratchpad) - The leader of the terrorist attack in London was in a documentary about jihadists living in Britain. “Being the sort of person who seems likely to commit a crime isn’t illegal.” Involuntary commitment.

Is Pharma Research Worse Than Chance by Scott Alexander - The most promising drugs of the 21st century are MDMA and ketamine (third is psilocybin). These drugs were all found by the drug community. Maybe pharma should look for compounds with large effect sizes instead of searching for drugs with no side-effects.

Open Thread 77- Opium Thread by Scott Alexander - Bi-weekly open thread. Includes some comments of the week and an update on translating "Bringing Up Genius".

Third and Fourth Thoughts on Dragon Army by SlateStarScratchpad. - Scott goes from Anti-Anti-Dragon-Army to Anti-Dragon-Army. He then gets an email from Duncan and updates in favor of the position that Duncan thought things out well.

Hungarian Education III Mastering The Core Teachings Of The Budapestians by Scott Alexander - Laszlo Polgar wanted to prove he could intentionally raise chess geniuses. He raised the #1, #2, and #6 female chess players in the world.

Four Nobel Truths by Scott Alexander - Four Graphs describing facts about Israeli/Askenazi Nobel Prizes.


The Precept Of Niceness by H i v e w i r e d - Prisoner's Dilemmas. Even against a truly alien opponent you should still cooperate as long as possible in the iterated prisoner's dilemma, even with fixed round lengths: play tit-for-tat. Niceness is the best strategy.
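The tit-for-tat strategy that entry endorses is simple to state in code. Below is a minimal, hypothetical sketch (my own illustration, not code from the linked post): a fixed-length iterated prisoner's dilemma with the standard 3/5/1/0 payoff matrix, where tit-for-tat cooperates on the first round and then mirrors the opponent's previous move.

```python
def tit_for_tat(my_history, their_history):
    """Cooperate on the first round, then mirror the opponent's last move."""
    return "C" if not their_history else their_history[-1]

def always_defect(my_history, their_history):
    return "D"

def play_iterated_pd(strategy_a, strategy_b, rounds=10):
    """Play a fixed number of rounds; return each player's total payoff."""
    payoffs = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
               ("D", "C"): (5, 0), ("D", "D"): (1, 1)}
    hist_a, hist_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strategy_a(hist_a, hist_b)
        move_b = strategy_b(hist_b, hist_a)
        pa, pb = payoffs[(move_a, move_b)]
        score_a, score_b = score_a + pa, score_b + pb
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b

print(play_iterated_pd(tit_for_tat, tit_for_tat))    # → (30, 30): mutual cooperation
print(play_iterated_pd(tit_for_tat, always_defect))  # → (9, 14): exploited only in round one
```

Against itself, tit-for-tat cooperates every round; against a pure defector it loses only the first round, which is the intuition behind its strong tournament performance.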

Epistemology Vs Critical Thinking by Onemorenickname (lesswrong) - Epistemies work. General approaches don't work. Scientific approaches work. Epistemic effort vs Epistemic status. Criticisms of lesswrong Bayesianism.

Tasting Godhood by Agenty Duck - Poetic and personal. Wine tasting. Empathizing with other people. Seeing others as whole people. How to dream about other people. Sci-fi futures. Tasting godhood is the same as tasting other people. Looking for your own godhood.

Bayes: A Kinda Sorta Masterpost by Nostalgebraist - A long and very well thought-out criticism of Bayesianism. Explanation of Bayesian methodology. Comparison with classical statistics. Arguments for Bayes. The problem of ignored hypotheses with known relations. The problem of new ideas. Where do priors come from? Regularization and insights from machine learning.

Dichotomies by mindlevelup - 6 short essays about dichotomies and what's useful about noticing them. Fast vs Slow thinking. Focused vs Diffuse Mode. Clean vs Dirty Thinking. Inside vs Outside View. Object vs Meta level. Generative vs Iterative Mode. Some conclusions about the method.

How Men And Women Perceive Relationships Differently by AellaGirl - Survey Results about Relationship quality over time. Lots of graphs and a link to the raw data. "In summary, time is not kind. Relationships show an almost universal decrease in everything good the longer they go on. Poly is hard, and you have to go all the way to make it work – especially for men. Religion is also great, if you’re a man. Women get more excited and insecure, men feel undesirable."

Summer Programming by Jacob Falkovich (Put A Number On It!) - Jacob's Summer writing plan. Re-writing part of the lesswrong sequences. Ribbonfarm's longform blogging course on refactored perception.

Bet Or Update Fixing The Will to Wager Assumption by cousin_it (lesswrong) - Betting with better informed agents is irrational. Bayesian agents should however update their prior or agree to bet. Good discussion in comments.

Kindness Against The Grain by Sarah Constantin (Otium) - Sympathy and forgiveness evolved to follow local incentive gradients. Some details on who we sympathize with and who we don't. The difference between a good deal and a sympathetic deal. Smooth emotional gradients and understanding what other people want. Forgiveness as not following the local gradient and why this can be useful.

Bring Up Genius by Viliam (lesswrong) - An "80/20" translation. Positive motivation. Extreme resistance from the Hungarian government and press. Polgar's five principles. Biting criticism of the school system. Learning in early childhood. Is Genius a gift or curse? Celebrity. Detailed plan for daily instruction. Importance of diversity. Why chess? Teach the chess with love, playfully. Emancipation of women. Polgar's happy family.

Deorbiting A Metaphor by H i v e w i r e d - Another post in the origin sequence. Rationalist Myth-making. (note: I am unlikely to keep linking all of these. Follow hivewired’s blog)

Conformity Excuses by Robin Hanson - Human behavior is often explained by pressure to conform. However we consciously experience much less pressure. Robin discusses a list of ways to rationalize conforming.

Becoming A Better Community by Sable (lesswrong) - Lesswrong holds its members to a high standard. Intimacy requires unguarded, spontaneous interactions. Concrete ideas to add more fun and friendship to lesswrong.

Optimizing For Meta Optimization by H i v e w i r e d - A very long list of human cultural universals and comments on which ones to encourage/discourage: Myths, Language, Cognition, Society. Afterwards some detailed bullet points about an optimal dath ilanian culture.

On Resignation by Small Truths - Artificial intelligence. "It’s an embarrassing lapse, but I did not think much about how the very people who already know all the stuff I’m learning would behave. I wasn’t thinking enough steps ahead. Seen in this context, Neuralink isn’t an exciting new tech venture so much as a desperate hope to mitigate an unavoidable disaster."

Cognitive Science/Psychology As A Neglected Approach To AI Safety by Kaj Sotala (EA forum) - Ways psychology could benefit AI safety: "The psychology of developing an AI safety culture, Developing better analyses of 'AI takeoff' scenarios, Defining just what it is that human values are, Better understanding multi-level world-models." Lots of interesting links.

Mode Collapse And The Norm One Principle by tristanm (lesswrong) - Generative Adversarial Networks. Applying the lessons of Machine Learning to discourse. How to make progress when the critical side of discourse is very powerful. "My claim is that any contribution to a discussion should satisfy the "Norm One Principle." In other words, it should have a well-defined direction, and the quantity of change should be feasible to implement."

Finite And Infinite by Sarah Constantin (Otium) - "James Carse, in Finite and Infinite Games, sets up a completely different polarity, between infinite game-playing (which is open-ended, playful, and non-competitive) vs. finite game-playing (which is definite, serious, and competitive)." Playfulness, property, and cooperating with people who seriously weird you out.

Script for the rationalist seder is linked by Raemon (lesswrong) - An explanation of Rationalist Seder, a remix of the Passover Seder refocused on liberation in general. A story of two tribes and the power of stories. The full Haggadah/script for the rationalist Seder is linked.

The Personal Growth Cycle by G Gordon Worley (Map and Territory) - Stages of Development. "Development starts from a place of integration, followed by disintegration into confusion, which through active efforts at reintegration in a safe space results in development. If a safe space for reintegration is not available, development may not proceed."

Until We Build Dath Ilan by H i v e w i r e d - Eliezer's Sci-fi utopia Dath Ilan. The nature of the rationalist community. A purpose for the rationality community. Lots of imagery and allusions. A singer is someone who tries to do good.

Do Ai Experts Exist by Bayesian Investor - Some of the numbers from " When Will AI Exceed Human Performance? Evidence from AI Experts" don't make sense.

Relinquishment Cultivation by Agenty Duck - Agenty Duck designs meditation to cultivate the attitude of "If X is true I wish to believe X, if X is not true I wish to believe not X". The technique is inspired by 'loving-kindness' meditation.

10 Incredible Weaknesses Of The Mental Health Workforce by arunbharatula (lesswrong) - Ten arguments that undermine the credibility of the mental health workforce. Some of the arguments are sourced and argued significantly more thoroughly than others.

Philosophical Parenthood by SquirrelInHell - Updateless Decision theory. Ashkenazi intelligence. "In this post, I will lay out a strong philosophical argument for rational and intelligent people to have children. It's important and not obvious, so listen well."

On Connections Between Brains And Computers by Small Truths - A condensation of Tim Urban's 36K-word article about Neuralink. The astounding benefits of having even a Siri-level AI responding directly to your thoughts. The existential threat of AI means that mind-computer links are worth the risks.

Thoughts Concerning Homeschooling by Ozy (Thing of Things) - Evidence that many public school practices are counter-productive. Stats on the academic performance of home-schoolers. Educating 'weird awkward nerds'.

The Face Of The Ice by Sarah Constantin (Otium) - Mountaineering. Survival Mindset vs Sexual-Selection Mindset. War and the Wilderness. Technical Skill.


Review Of Ea New Zealands Doing Good Better Book by cafelow (EA forum) - New Zealand EAs gave out 250 copies of "Doing Good Better". 80 of the recipients responded to a follow up survey. The results were extremely encouraging. Survey details and discussion. Possible flaws with the giveaway and survey.

Announcing Effective Altruism Grants by Maxdalton (EA forum) - CEA is giving out £100,000 grants for personal projects. "We believe that providing those people with the resources that they need to realize their potential could be a highly effective use of resources." A list of what projects could get funded, the list is very broad. Evaluation criteria.

A Powerful Weapon in the Arsenal (Links Post) by GiveDirectly - 8 Links on Basic Income, Effective Altruism, Cash Transfers and Donor Advised Funds

A Paradox In The Measurement Of The Value Of Life by klloyd (EA forum) - Eight Thousand words on: “A Health Economics Puzzle: Why are there apparent inconsistencies in the monetary valuation of a statistical life (VSL) and a quality-adjusted life year (QALY$)?”

New Report Consciousness And Moral Patienthood by Open Philosophy - “In short, my tentative conclusions are that I think mammals, birds, and fishes are more likely than not to be conscious, while (e.g.) insects are unlikely to be conscious. However, my probabilities are very “made-up” and difficult to justify, and it’s not clear to us what actions should be taken on the basis of such made-up probabilities.”

Adding New Funds To Ea Funds by the Center for Effective Altruism (EA forum) - The Center for Effective Altruism wants feedback on whether it should add more EA funds. Each question is followed by a detailed list of critical considerations.

How Givewell Uses Cost Effectiveness Analyses by The GiveWell Blog - GiveWell doesn't take its estimates literally, unless one charity is measured as 2-3x as cost-effective GiveWell is unsure if a difference exists. Cost-effective is however the most important factor in GiveWell's recommendations. GiveWell goes into detail about how it deals with great uncertainty and suboptimal data.

The Time Has come to Find Out [Links] by GiveDirectly - 8 media links related to Cash Transfers, Give Directly and Effective Altruism.

Considering Considerateness: Why Communities Of Do-Gooders Should Be by The Center for Effective Altruism - Consequentialist reasons to be considerate and trustworthy. Detailed and contains several graphs. Includes practical discussions of when not to be considerate and how to handle unreasonable preferences. The conclusion discusses how considerate EAs should be. The bibliography contains many very high-quality articles written by the community.

===Politics and Economics:

Summing Up My Thoughts On Macroeconomics by Noah Smith - Slides from Noah's talk at the Norwegian Finance Ministry. Comparison of industry, central bank, and academic macroeconomics. Overview of important critiques of academic macro. The standard DSGE model and ways to improve it. What makes a good macro theory. Go back to the microfoundations.

Why Universities Can't Be The Primary Site Of Political Organizing by Freddie deBoer - Few people on campus. Campus activism is seasonal. Students are an itinerant population. Town-and-gown conflicts. Students are too busy. First priority is employment. Is activism a place for student growth? Labor principles.

Some Observations On Cis By Default Identification by Ozy (Thing of Things) - Many 'cis-by-default' people are repressing or not noticing their gender feelings. This effect strongly depends on a person's community.

One Day We Will Make Offensive Jokes by AellaGirl - "This is why I feel suspicious of some groups that strongly oppose offensive jokes – they have the suspicion that every person is like my parents – that every human “actually wants” all the terrible things to happen."

Book Review Weapons Of Math Destruction by Zvi Moshowitz - Extremely long. "What the book is actually mostly about on its surface, alas, is how bad and unfair it is to be a Bayesian. There are two reasons, in her mind, why using algorithms to be a Bayesian is just awful."

A Brief Argument With Apparently Informed Global Warming Denialists by Artir (Nintil) - Details of the back-and-forth argument. Some commentary on practical rationality and speculation about how the skeptic might have felt.

The Shouting Class by Noah Smith - The majority of comments come from a tiny minority of commentators. Social media is giving a bullhorn to the people who constantly complain. Negativity is contagious. The level of discord in society is getting genuinely dangerous. The French Revolution. The author criticizes shouters on the Left and Right.

Population By Country And Region 10K BCE to 2016 CE by Luke Muehlhauser - 204 countries, 27 regions. Links to the database used and a forthcoming explanatory paper. From 10K BCE to 0 CE, gaps are 1000 years. From 0 CE to 1700 CE, gaps are 100 years. After that they are 10 years long.

Regulatory Lags For New Technology 2013 Notes by gwern (lesswrong) - Gwern looks at the history of regulation for high frequency trading, self driving cars and hacking. The post is mostly comprised of long quotes from articles linked by gwern.

Two Economists Ask Teachers To Behave As Irrational Actors by Freddie deBoer - A response to Cowen's interview of Raj Chetty. Standard Education reform rhetoric implies that hundreds of thousands of teachers need to be fired. However teachers don't control most of the important inputs to student performance. You won't get more talented teachers unless you increase compensation.

Company Revenue Per Employee by Tyler Cowen - The energy sector has high revenue per employee. The highest score was attained by a pharmaceutical distributor. Hotels, restaurants and consumer discretionaries do the worst on this metric. Tech has a middling performance.


A Remark On Usury by Entirely Useless - "To take usury for money lent is unjust in itself, because this is to sell what does not exist, and this evidently leads to inequality which is contrary to justice." Thomas Aquinas is quoted at length explaining the preceding statement. EntirelyUseless argues that Aquinas mixes up the buyer and the seller.

Bike To Work Houston by Mr. Money Mustache - How a lawyer bikes to work in Houston. Bikes are surprisingly fast relative to cars in cities. Houston is massive.

Fuckers Vs Raisers by AellaGirl - Evolutionary psychology. The qualities that are attractive in a guy who sleeps around are also attractive in a guy who wants to settle down.

Reducers Transducers And Coreasync In Clojure by Eli Bendersky - "I find it fascinating how one good idea (reducers) morphed into another (transducers), and ended up mating with yet another, apparently unrelated concept (concurrent pipelines) to produce some really powerful coding abstractions."

Thingness And Thereness by Venkatesh Rao (ribbonfarm) - The relation between politics, home and frontier. Big Data, deep learning and the blockchain. Liminal spaces and conditions.

Create 2314 by protokol2020 - Find the shortest algorithm to create the number 2314 using a prescribed set of operations.

Text To Speech Speed by Jeff Kaufman - Text to speech has become a very efficient way to interact with computers. Questions about settings. Very short.

Hello World! Stan, Pymc3 and Edward by Bob Carpenter (Gelman's Blog) - Comparison of the three frameworks. Test case of Bayesian linear regression. Extendability and efficiency of the frameworks is discussed.

Computer Science Majors by Tyler Cowen - Tyler links to an article by Dan Wang. The author gives 11 reasons why CS majors are rare, none of which he finds convincing. Eventually the author seems to conclude that the 2001 bubble, the changing nature of the CS field, the power-law distribution in developer productivity, and the lack of job security are important causes.

Beespotting On I-5 by Eukaryote - A drive from San Francisco to Seattle. The vast agricultural importance of bees. Improving bee quality of life.


81 Leaving Islam by Waking Up with Sam Harris - "Sarah Haider. Her organization Ex-Muslims of North America, how the political Left is confused about Islam, "rape culture" under Islam, honesty without bigotry, stealth theocracy, immigration, the prospects of reforming Islam"

Newcomers by Venam - A transcript of a discussion about advice for new Unix users. Purpose. Communities. Learning by Yourself. Technical Tips. Venam linked tons of podcast transcripts today. Check them out.

Masha Gessen, Russian-American Journalist by The Ezra Klein Show - Trump and Russia, plausible and sinister explanation. Ways Trump is and isn't like Putin, studying autocracies, the psychology of Jared Kushner

Christy Ford by EconTalk - "A history of how America's health care system came to be dominated by insurance companies or government agencies paying doctors per procedure."

Nick Szabo by Tim Ferriss - "Computer scientist, legal scholar, and cryptographer best known for his pioneering research in digital contracts and cryptocurrency."

The Road To Tyranny by Waking Up with Sam Harris - Timothy Snyder. His book On Tyranny: Twenty Lessons from the Twentieth Century.

Hans Noel On The Role Of Ideology In Politics by Rational Speaking - "Why the Democrats became the party of liberalism and the Republicans the party of conservatism, whether voters are hypocrites in the way they apply their ostensible ideology, and whether politicians are motivated by ideals or just self-interest."

Stupid Questions June 2017

3 gilch 10 June 2017 06:32PM

This thread is for asking any questions that might seem obvious, tangential, silly or what-have-you. Don't be shy, everyone has holes in their knowledge, though the fewer and the smaller we can make them, the better.

Please be respectful of other people's admitting ignorance and don't mock them for it, as they're doing a noble thing.

To any future monthly posters of SQ threads, please remember to add the "stupid_questions" tag.

[Link] Two Major Obstacles for Logical Inductor Decision Theory

1 endoself 10 June 2017 05:48AM

Epistemology vs Critical Thinking

0 Onemorenickname 10 June 2017 02:41AM

Short vocabulary points:

  • By epistemy, I refer to the second meaning of epistemology in the wiktionary (ie, a particular theory of knowledge). Polysemy is bad and should be fixed when possible.
  • By episteme, I mean the knowledge and understanding of a given science at a given point in time.
  • By field, I mean a set of related thoughts. A science is a field with an epistemy.
  • Epistemy comes from "epistimi", meaning science. I like the identification of a science with its epistemy.

Epistemic ... :
  • Effort. Much more reasoning behind that post. I'm mostly trying to see if people are interested. If they are, much more writing will ensue.
  • Status. Field: Rationalist epistemology. Phase: pre-epistemy.

Epistemies work.

General approaches don't work.

Model-checking, validity and proof-search can be hard: NP-hard, PSPACE-hard, non-elementary, or even undecidable. In particular, validity of propositions in first-order logic is undecidable.

Our propositions about the world are more complex than what's described by first-order logic, making it impossible to prove their validity in the general case. As such, trying to find a general logic to deal with the world (ie, critical thinking) is energy badly spent.

Specific approaches work.

This problem has been answered in fields relying on logic, for instance: model checking, type theory, and non-statistical computational linguistics. The standard answer is finding specialized and efficiently computable logics.
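
The contrast above can be made concrete: validity in propositional logic, unlike in first-order logic, is decidable, for instance by brute force over truth assignments (exponential, but always terminating). A minimal sketch in Python; the formulas and names are illustrative, not from the post:

```python
from itertools import product

def is_valid(formula, num_vars):
    """Decide propositional validity by checking every truth assignment.
    This always terminates (in O(2^n) time) -- a specialized, computable
    logic, in contrast to first-order validity, which is undecidable."""
    return all(formula(*assignment)
               for assignment in product([False, True], repeat=num_vars))

# Law of excluded middle, "A or not A": valid.
print(is_valid(lambda a: a or not a, 1))       # True
# "A implies B", encoded as "(not A) or B": not valid.
print(is_valid(lambda a, b: (not a) or b, 2))  # False
```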

However, not every field can afford a full formalization. At least, as humans studying the world, we can't. Epistemies can be seen as detailed, informal, efficient logics. They give us a particular way to study some particular thing, just like logics do. They don't provide mathematical guarantees, but they can still offer practical ones.

Science, as the study of the world by humans, faced that problem. Critical thinking wasn't enough; that's partly why we moved from philosophy to sciences. The solution of science was to subdivide the world into several overlapping-but-independently-experimentable parts.

Thus, rather than by its object of study alone, a science is defined by a combination of its object of study and its epistemy. This explains why 3 different articles studying logic can be assigned to different sciences: Philosophy, Math, and Theoretical Computer Science.


Stopping redundancy.

Valuing critical thought led to a high amount of redundancy. Anyone can dump their ideas and have them judged by the community, provided a bit of critical thinking has been done. The core insight is that critical thinking should filter out most of the bad ideas.

However, if the subject of the idea relies even a bit on a non-epistemied technical field, obfuscating a lack of consistency or thorough thinking becomes very easy. As such, community time is spent finding obvious flaws in a reasoning that the author could have found alone, had there been an appropriate epistemy.

Epistemic effort.

As such, before suggesting a new model, one should confront it with the standard epistemy of the field the model belongs to. That epistemy can be as simple as some sanity checks, eg: "Does this model lead to X, Y or Z contradiction? If it does, it's a bad one. Else, it is a good one." If there is no standard epistemy in the given field, working on one is critical.

I agree Raemon's post about using "epistemic effort" instead of "epistemic status". Following the previous line of thought, I think "epistemic status" should refer to an epistemic status (and the field relative to which it is defined) instead of the epistemic effort. I see 3 kinds of epistemic status, which could be refined:

1. Pre-epistemy: Thoughts meant to gather comments. Models trying to see if modelling a particular subject is worth it or works well.

2. Epistemy building: Defining the epistemy's meta-assumptions. Defining the epistemy's logic. Defining the epistemy's facts (eg, which sources are relevant in that field? Which meta-facts are relevant in that field?).

3. Post-epistemy: Once the epistemy is defined, anything benefiting the science's episteme. Facts, models, questioning the epistemy (which might lead to forks, eg, math and computer science).


"Bayesian probabilities"

Initially, I thought that someone putting a probability in front of a belief had an objective meaning in mind. I asked around for an epistemy, and I was told that it was only a way to express a subjective feeling more precisely.

However, it looks like there might be a confusion between the map and the territory when I see things like bet-to-update. When I see "Bayesian rational agent", it feels like we are supposed to be Bayesian rational agents in the general case. (Which I think is an AGI-complete problem.)

Bayesian framework

Bayes' rule and its derivatives define the "proof rules" part of an agent's epistemy. But axioms are still required: a world, a way to gather facts, and such. It also relies on meta-assumptions for efficiency and relevancy. Bayes' rule is not enough to define an epistemy.
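
A toy illustration of the point: the update itself is mechanical, while everything that makes it meaningful -- the prior, the likelihoods, what counts as evidence -- has to be supplied from outside the rule. A sketch with assumed numbers (P(H) = 0.01, P(E|H) = 0.9, P(E|not H) = 0.1), not taken from the post:

```python
def bayes_update(prior, likelihood, marginal):
    """Posterior P(H|E) = P(E|H) * P(H) / P(E)."""
    return likelihood * prior / marginal

# Assumed inputs -- the "axioms" the rule itself cannot provide.
prior = 0.01                            # P(H)
p_e = 0.9 * prior + 0.1 * (1 - prior)   # P(E), by total probability
posterior = bayes_update(prior, 0.9, p_e)
print(round(posterior, 3))              # 0.083
```

Even a 90%-reliable observation leaves the posterior low here; the result is driven as much by the assumed prior as by the rule.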

Therefore, I am strongly prejudiced against someone self-describing as a bayesianist, not only because of the "I apply the same epistemy everywhere" approach, but also because it isn't a proper epistemy.

There are better ways to say "I know Bayes' rule, and how it might apply to real-life situations" than "I'm a bayesianist".

Maybe "bayesianist" solely means "I update my beliefs based on evidence", but I think "open-minded" is the right word for that.

Not even wrong

Showing not-even-wrong-ness is possible in sciences with an epistemy. (Well, it's possible to show it to people who know that epistemy. Showing someone who doesn't know maths that his "informal" maths aren't even maths is hard.)

In other fields, we are subject to too much not-even-wrong-ness. I'd like to link some LW posts to exemplify my point, but I think it might violate the local BNBR policy.


Do you think defining a meta-epistemy (ie, an epistemy to the Rationalist epistemology) is important ?

Do you think defining sub-epistemies is important ?

If you don't, why ?




 agree: The direct transitivity is meant. To agree something and to agree with/to something have different connotations.

Humans are not agents: short vs long term

4 Stuart_Armstrong 09 June 2017 11:16AM

Crossposted at the Intelligent Agents Forum.

This is an example of humans not being (idealised) agents.

Imagine a human who has a preference to not live beyond a hundred years. However, they want to live to next year, and it's predictable that every year they are alive, they will have the same desire to survive till the next year.

This human (not a completely implausible example, I hope!) has a contradiction between their long and short term preferences. So which is accurate? It seems we could resolve these preferences in favour of the short term ("live forever") or the long term ("die after a century") preferences.

Now, at this point, maybe we could appeal to meta-preferences - what would the human themselves want, if they could choose? But often these meta-preferences are un- or under-formed, and can be influenced by how the question or debate is framed.

Specifically, suppose we are scheduling this human's agenda. We have the choice of making them meet one of two philosophers (not meeting anyone is not an option). If they meet Professor R. T. Long, he will advise them to follow long term preferences. If instead, they meet Paul Kurtz, he will advise them to pay attention to their short term preferences. Whichever one they meet, they will argue for a while and will then settle on the recommended preference resolution. And then they will not change that, whoever they meet subsequently.

Since we are doing the scheduling, we effectively control the human's meta-preferences on this issue. What should we do? And what principles should we use to do so?

It's clear that this can apply to AIs: if they are simultaneously aiding humans as well as learning their preferences, they will have multiple opportunities to do this sort of preference-shaping.

Bring up Genius

45 Viliam 08 June 2017 05:44PM

(This is a "Pareto translation" of Bring up Genius by László Polgár, the book recently mentioned at Slate Star Codex. I hope that a selected 20% of the book text, translated approximately, can still convey 80% of its value, while taking an order of magnitude less time and work than a full and precise translation. The original book is written in an interview form, with questions and answers; to keep it short, I am rewriting it as a monologue. I am also taking the liberty of making many other changes in style, and skipping entire parts, because I am optimizing for my time. Instead of the Hungarian original, I am using an Esperanto translation Eduku geniulon as my source, because that is the language I am more fluent in.)


Genius = work + luck

This is my book written in 1989 about 15 years of pedagogic experiment with my daughters. It is neither a recipe, nor a challenge, just a demonstration that it is possible to bring up a genius intentionally.

The so-called miracle children are natural phenomena, created by their parents and society. Sadly, many potential geniuses disappear without anyone noticing the opportunity, including themselves.

Many people in history did a similar thing by accident; we only repeated it on purpose.

1. Secrets of the pedagogic experiment

1.1. The Polgár family

The Polgár sisters (Susan, Sofia, Judit) are internationally as famous as Rubik Ernő, the inventor of the Rubik Cube.

Are they merely their father's puppets, manipulated like chess figures? Hardly. This level of success requires agency and active cooperation. Puppets don't become geniuses. Contrariwise, I provided them opportunity, freedom, and support. They made most of the decisions.

You know what really creates puppets? The traditional school system. Watch how kids, eagerly entering school in September, mostly become burned out by Christmas.

Not all geniuses are happy. Some are rejected by their environment, or they fail to achieve their goals. But some geniuses are happy, accepted by their environment, succeed, and contribute positively to the society. I think geniuses have a greater chance to be happy in life, and luckily my daughters are an example of that.

I was a member of the Communist Party for over ten years, but I disagreed with many things; specifically the lack of democracy, and the opposition to elite education.

I have worked about 15 hours a day since I was a teenager. I am obsessed with high quality. Some people say I am stubborn, even aggressive. I am trying hard to achieve my goals, and I have experienced a lot of frustration; it seems to me that some people were trying to destroy us. We were threatened by high-ranking politicians. We were not allowed to travel abroad until 1985, when Susan was already the #1 in the international ranking of female chess players.

But I am happy that I have a great family, happy marriage, three successful children, and my creative work has an ongoing impact.

1.2 Nature or nurture?

I believe that any biologically healthy child can be brought up to be a genius. My wife and I have read tons of books and studies. Researching the childhoods of many famous people showed that they all specialized early, and each of them had a strongly supportive parent, teacher, or trainer. We concluded: geniuses are not born; they are made. We proved that experimentally. We hope that someone will build a coherent pedagogical system based on our hypothesis.

Most of what we know about genetics [as of 1989] is about diseases. Healthy brains are flexible. Education was considered important by Watson and Adler. But Watson never actually received the "dozen healthy infants" to bring up, so I was the first one to do this experiment. These are my five principles:

* Human personality is an outcome of the following three: the gifts of nature, the support of environment, and the work of one's own. Their relative importance depends on age: biology is strongest with the newborn, society with the ten years old, and later the importance of one's own actions grows.

* There are two aspects of social influence: the family, and the culture. Humans are naturally social, so education should treat the child as a co-author of themselves.

* I believe that any healthy child has sufficient general ability, and can specialize in any type of activity. Here I differ from the opinion of many teachers and parents who believe that the role of education is to find a hidden talent in the child. I believe that the child has a general ability, and achieves special skills by education.

* The development of the genius needs to be intentionally organized; it will not happen at random.

* People should strive for maximum possible self-realization; that brings happiness both to them and to the people around them. Pedagogy should not aim for average, but for excellence.

2. A different education

2.1. About contemporary schools

We homeschooled our children. Today's schools set a very low bar, and are intolerant towards people different from the average by their talent or otherwise. They don't prepare for real life; don't make kids love learning; don't instigate greater goals; bring up neither autonomous individuals nor collectives.

Which is an unsurprising outcome, if you only have one type of school, each school containing a few exceptional kids among many average ones and a few feeble ones. Even the average ones are closer to the feeble ones than to the exceptional ones. And the teacher, by necessity, adapts to the majority. There is not enough space for an individual approach, but there is a lot of mindless repetition. Sure, people talk a lot about teaching problem-solving skills, but that never happens. Both the teachers and the students suffer at school.

The gifted children are bored, and even tired, because boredom is more tedious than appropriate effort. The gifted children are disliked, just like everyone who differs from the norm. Many gifted children acquire psycho-somatic problems, such as insomnia, headache, stomach pain, neuroses. Famous people often had trouble at school; they were considered stupid and untalented. There is bullying, and general lack of kindness. There are schools for gifted children in USA and USSR, but somehow not in Hungary [as of 1989].

I had to fight a lot to have my first daughter home-schooled. I was afraid school would endanger the development of her abilities. We had support of many people, including pedagogues, but various bureaucrats repeatedly rejected us, sometimes with threats. Finally we received an exceptional permission by the government, but it only applied for one child. So with the second daughter we had to go through the same process again.

2.2. Each child is a promise

It is crucial to awaken and keep the child's interest, convince them that success is achievable, trust them, and praise them. When children like the work, they will work fruitfully for long periods of time. A profound interest develops personality and skills. A motivated child will achieve more, and get tired less.

I believe in positive motivation. Create a situation where many successes are possible. Successes make children confident; failures make them insecure. Experience of success and admiration by others motivates and accelerates learning. Failure, fear, and shyness decrease the desire to achieve. Successes in one field even increase confidence in other fields.

Too much praise can cause overconfidence, but it is generally safer to err on the side of praising more rather than less. However, the praise must be connected to a real outcome.

Discipline, especially internal psychological, also increases skills.

I believe the age between 3 and 6 years is very important, and very underestimated. No, those children are not too young to learn. Actually, that's when their brains are evolving the most. They should learn foreign languages. In multilingual environments children do that naturally.

Play is important for children, but play is not the opposite of work. Gathering information and solving problems is fun. Provide meaningful activities, instead of compartmentalized games. A game without learning is merely a surrogate activity. Gifted children prefer games that require mental activity. There is a continuum between learning and playing (just like between work and hobby for adults). Brains, just like muscles, become stronger through everyday activity.

My daughters used intense methods to learn languages; and chess; and table tennis. Is there a risk of damaging their personality by doing so? Maybe, but I believe the risks of damaging the personality by spending six childhood years without any effort are actually greater.

When my daughters were 15, 9, 8 years old, we participated in a 24-hour chess tournament, where you had to play 100 games in 24 hours. (Most participants were between age 25 and 30.) Susan won. The success rates during the second half of the tournament were similar to those during the first half of the tournament, for all three girls, which shows that children are capable of staying focused for long periods of time. But this was an exceptional load.

2.3. Genius - a gift or a curse?

I am not saying that we should bring up each child as a genius; only that bringing up children as geniuses is possible. I oppose uniform education, even a hypothetical one that would use my methods.

The public idea of geniuses usually falls into one of two extremes: either they are all supposed to be weird and half-insane, or they are all supposed to be CEOs and movie stars. Psychology has already moved beyond this. They examined Einstein's brain, but found no difference in weight or volume compared with an average person. For me, a genius is an average person who has achieved their full potential. Many famous geniuses attribute their success to hard work, discipline, attention, love of work, patience, time.

All healthy newborns are potential geniuses, but whether they become actual geniuses, depends on their environment, education, and their own effort. For example, in the 20th century more people became geniuses than in the 19th or 18th century, inter alia because of social changes. Geniuses need to be liberated. Hopefully in the future, more people will be free and fully developed, so being a genius will become a norm, not an exception. But for now, there are only a few people like that. As people grow up, they lose the potential to become geniuses. I estimate that an average person's chance to become a genius is about 80% at age 1; 60% at age 3; 50% at age 6; 40% at age 12; 30% at age 16; 20% at age 18; only 5% at age 20. Afterwards it drops to a fraction of percent.

A genius child can surpass their peers by 5 or 7 years. And if a "miracle child" doesn't become a "miracle adult", I am convinced that their environment did not allow them to. People say some children are faster and some are slower; I say they don't grow up in the same conditions. Good conditions allow one to progress faster. But some philosophers or writers became geniuses at old age.

People find it difficult to accept those who differ from the average. Even some scientists; for example Einstein's theory of relativity was opposed by many. My daughters are attacked not just by public opinion, but also by fellow chess players.

Some geniuses are unhappy about their situation. But many enjoy the creativity, perceived beauty, and success. Geniuses can harm themselves by having unrealistic expectations of their goals. But most of the harm comes from outside, as a dismissal of their work, or lack of material and moral support, baseless criticism. Nowadays, one demagogue can use the mass communication media to poison the whole population with rage against the representatives of national culture.

As the international communication and exchange of ideas grows, geniuses become more important than ever before. Education is necessary to overcome economical problems; new inventions create new jobs. But a genius provokes the anger of people, not by his behavior, but by his skills.

2.4. Should every child become a celebrity?

I believe in diversity in education. I am not criticizing teachers for not doing things my way. There are many other attempts to improve education. But I think it is now possible to aim even higher, to bring up geniuses. I can imagine the following environments where this could be done:

* Homeschooling, i.e. teaching your biological or adopted children. Multiple families could cooperate and share their skills.

* Specialized educational facility for geniuses; a college or a family-type institution.

Homeschooling, or private education with parental oversight, are the ancient methods for bringing up geniuses. Families should get more involved in education; you can't simply outsource everything to a school. We should support families willing to take an active role. Education works better in a loving environment.

Instead of trying to find a talent, develop one. Start specializing early, at the age of 3 or 4. One cannot become an expert on everything.

My daughters played chess 5 or 6 hours a day since the age of 4 or 5. Similarly, if you want to become a musician, spend 5 or 6 hours a day doing music; if a physicist, do physics; if a linguist, do languages. With such intense instruction, the child will soon feel the knowledge, experience success, and soon becomes able to use this knowledge independently. For example, after learning Esperanto 5 or 6 hours a day for a few months, the child can start corresponding with children from other countries, participate at international meet-ups, and experience the conversations in a foreign language. That is at the same time pleasant, useful for the child, and useful for the society. The next year, start with English, then German, etc. Now the child enjoys this, because it obviously makes sense. (Unlike at school, where most learning feels purposeless.) In chess, the first year makes you an average player, three years a great player, six years a master, fifteen years a grandmaster. When a 10-year-old child surpasses an average adult at some skill, it is highly motivating.

Gifted children need financial support, to cover the costs of books, education, and travel.

Some people express concern that early specialization may lead to ignorance of everything else. But it's the other way round; abilities formed in one area can transfer to other areas. One learns how to learn.

Also, the specialization is relative. If you want to become e.g. a computer programmer, you will learn maths, informatics, foreign languages; when you become famous, you will travel, meet interesting people, experience different cultures. My daughters, in addition to being chess geniuses, speak many foreign languages, travel, do sports, write books, etc. Having deep knowledge about something doesn't imply ignorance about everything else. On the other hand, a misguided attempt to become a universalist can result in knowing nothing, in mere pretend-knowledge of everything.

Emotional and moral education must go together with the early specialization, to develop a complex personality. We wanted our children to be enthusiastic, courageous, persistent, to be objective judges of things and people, to resist failure and avoid the temptations of success, to handle frustration and tolerate criticism even when it is wrong, to make plans, to manage their emotions. Also, to love and respect people, and to prefer creative work to physical pleasure or status symbols. We told them that they can achieve greatness, but that there can be only one world champion, so their goal should rather be to become good chess players, be good at sport, and be honest people.

Pedagogy puts great emphasis on being with children of the same age. I think that mental peers are more important than age peers. It would harm a gifted child to be forced to spend most of their time exclusively among children of the same age. On the other hand, spending most of the time with adults brings the risk that the child will learn to rely on them all the time, losing independence and initiative. You need to find a balance. I believe the best company would be of similar intellectual level, similar hobbies, and good relations.

For example, if Susan at 13 years old were forced to play chess exclusively with other 13-year-olds, it would harm both sides. She could not learn anything from them; they would resent losing constantly.

Originally, I hoped I could bring up each daughter as a genius in a different field (e.g. mathematics, chess, music). It would be more convincing evidence that you can bring up a genius of any kind. And I believe I would have succeeded, but I was constrained by money and time. We would need three private teachers, would have to go each day to three different places, and would have to buy books for maths and chess and music (and the musical instruments). By making them one team, things became easier, and the family has more things in common. Some psychologists worried that the children could be jealous of each other, and hate each other. But we brought them up properly, and this did not happen.

This is how I imagine a typical day at a school for geniuses:

* 4 hours studying the subject of specialization, e.g. chess;

* 1 hour studying a foreign language; Esperanto at the first year, English at the second, later choose freely; during the first three months this would increase to 3 hours a day (by reducing the subject of specialization temporarily); traveling abroad during the summer;

* 1 hour computer science;

* 1 hour ethics, psychology, pedagogy, social skills;

* 1 hour physical education, specific form chosen individually.

Would I like to teach at such school? In theory yes, but in practice I am already burned out from the endless debates with authorities, the press, opinionated pedagogues and psychologists. I am really tired of that. The teachers in such school need to be protected from all this, so they can fully focus on their work.

2.5. Esperanto: the first step in learning foreign languages

Our whole family speaks Esperanto. It is a part of our moral system, a tool for equality of people. There are many prejudices against it, but the same was true about all progressive ideas. Some people argue from the Bible that multiple languages are God's punishment we have to endure. Some people invested many resources into learning 2 or 3 or 4 foreign languages, and don't want to lose the gained position. Economically strong nations enforce their own languages as part of dominance, and the speakers of other languages are discriminated against. Using Esperanto as everyone's second language would make international communication easier and more egalitarian. But considering today's economical pressures, it makes sense to learn English or Russian or Chinese next.

Esperanto has a regular grammar with simple syntax. It also uses many Latin, Germanic, and Slavic roots, so as a European, even if you are not familiar with the language, you will probably recognize many words in a text. This is an advantage from pedagogical point of view: you can more easily learn its vocabulary and its grammar; you can learn the whole language about 10 times easier than other languages.

It makes a great example of the concept of a foreign language, which pays off when learning other languages later. It is known that having learned one foreign language makes learning another foreign language easier. So, if learning Esperanto takes 10 times less time than learning another language such as English, and already knowing one foreign language makes learning the next one at least 10% more efficient, then it makes sense to learn Esperanto first. Also, Esperanto would be a great first experience for students who have difficulty learning languages; they would achieve success faster.
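
The break-even in this argument can be checked with back-of-envelope arithmetic. All numbers below are the post's assumptions, not data: Esperanto costs one tenth of a "hard" language, and knowing one foreign language speeds up the next by a fraction s. A sketch:

```python
def total_hours(T, s, esperanto_first):
    """Total study time to acquire one hard language of base cost T hours,
    where s is the speedup from already knowing a foreign language."""
    if esperanto_first:
        return T / 10 + (1 - s) * T  # Esperanto first, then the discounted language
    return T

T = 1000.0  # assumed base cost, e.g. for English
# A 5% speedup is not enough; 20% pays off; break-even is exactly s = 0.10,
# since the saving s*T must cover the Esperanto cost T/10.
print(total_hours(T, 0.05, True) < total_hours(T, 0.05, False))  # False
print(total_hours(T, 0.20, True) < total_hours(T, 0.20, False))  # True
```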

3. Chess

3.1. Why chess?

Originally, we were deciding between mathematics, chess, and foreign languages. Finally we chose chess, because the results in that area are easy to measure, using a traditional and objective rating system, which makes it easier to prove whether the experiment succeeded or failed. This was a lucky choice in hindsight, because back then we had no idea how many obstacles we would have to face. If we had not been able to prove our results unambiguously, the attacks against us would have been much stronger.

Chess seemed sufficiently complex (it is a game, a science, an art, and a sport at the same time), so the risks of overspecialization were smaller; even if the children later decided they were tired of chess, they would keep some transferable skills. And the fact that our children were girls was a bonus: we were also able to prove that girls can be as intellectually able as boys; but for this purpose we needed indisputable proof. (Although people try to discount this proof anyway, saying things like: "Well, chess is simple, but try doing the same in languages, mathematics, or music!")

The scientific aspect of chess is that you have to follow the rules, analyze the situation, and apply your intuition. If you have a favorite hypothesis, for example a favorite opening, but you keep losing with it, you have to change your mind. There is an aesthetic dimension in chess; some games are published and enjoyed not just because of their impressive logic, but because they are beautiful in some sense, they do something unexpected. And here is something most people are not familiar with: chess requires great physical health. All the best chess players do some sport, and it is not a coincidence. Chess is also organized similarly to sports: it has tournaments, players, spectators; you have to deal with the pain of losing, you have to play fair, etc.

3.2. How did the Polgár sisters start learning chess?

I don't have a "one weird trick" for teaching children chess; it's just my general pedagogical approach, applied to chess. Teach chess with love, playfully. Don't push it too forcefully. Remember to let the child win most of the time. Explain to the child that things can be learned, and that this also applies to chess. Don't worry if the child keeps jumping around during the game; it may still be thinking about the game. Don't explain everything; give the child an opportunity to discover some things independently. Don't criticize failure; praise success.

Start with shorter lessons, only 30 minutes, and then have a break. Start by solving simple problems. Our girls loved the "checkmate in two/three moves" puzzles. Let the child play against equally skilled opponents often. For a child, it is better to play many quick games (e.g. with 5-minute timers) than a few long ones. Participate in tournaments appropriate for the child's current skill.

We have a large library of recorded games. They are indexed by strategy and by the names of players, so the girls can research their opponents' play before a tournament.

When a child loses a tournament, don't criticize them; the child is already sad. Offer support; help them analyze their mistakes.

When my girls write articles about chess, it makes them think deeply about the subject.

All three parts of the game (opening, middle game, ending) require the same amount of focus. Some people focus too much on the endings and neglect the rest. But at a tournament, a bad opening can ruin the whole game.

Susan had the most difficult situation of the three daughters. In hindsight, having her learn 7 or 8 foreign languages was probably too much; some of that time would have been better spent further improving her chess skills. As the oldest, she also faced the worst criticism from haters; as a consequence she became the most defensive player of the three. The two younger sisters had the advantage that they could face the same pressures together. But still, I am sure that without those pressures they too could have progressed even faster.

Politicians influenced the decisions of the Hungarian Chess Association; as a result my daughters were often forbidden from participating in international youth competitions, despite being the best national players. They wanted to prevent Susan from becoming the worldwide #1 female chess player. Once they even "donated" 100 points to her competitor, to keep Susan in 2nd place. Later they didn't allow her to participate in international male tournaments, although her results in Hungarian male tournaments qualified her for them. The government regularly refused to issue passports to us, claiming that "our foreign travels hurt the public order". It was also difficult to find a trainer for my daughters, despite them being at the top of the world rankings. Only recently did we receive foreign help; a patron from the Netherlands offered to pay for trainers and sparring partners for my daughters, and also bought Susan a personal computer. A German journalist gave us a program and a database, and taught the children how to use them.

The Hungarian press kept attacking us and published falsehoods. We filed a few lawsuits, and won them all, but it just distracted us from our work. The foreign press (whether writing from the chess, psychological, or pedagogical perspective) was fair to us; they wrote almost 40,000 articles about us, so finally even the Hungarian chess players, psychologists, and pedagogues could learn about us from them.

At the beginning, I was a father, a trainer, and a manager to my daughters. But I am completely underqualified to be their trainer these days, so I just manage their trainers.

Until recently no one believed women could play chess on a level comparable with men. Now the three girls together hold about 40 Guinness records; they have repeatedly outperformed their own former records. In a 1988 interview Karpov said: "Susan is extraordinarily strong, but Judit... at such an age, neither I nor Kasparov could play like Judit plays."

3.3. How can we make our children like chess?

Some tips for teaching chess to four- or five-year-old children. First, I made a blank square divided into 8x8 little squares, with named rows and columns. I named a square, and my daughter had to find it; then she named a square and I had to find it. Then we used the black-and-white version, and we guessed the color of a named square without looking.

Then we introduced kings, in a "king vs king" combat; the task was to reach the opposite row of the board with your king. Then we added a pawn; the goal remained to reach the opposite row. After a month of playing, we introduced the queen, and the concept of checkmate. Later we gradually added the remaining pieces (knights were the most difficult).

Then we solved about a thousand "checkmate in one move" puzzles. Then two moves, three moves, four moves. That took another 3 or 4 months. Only afterwards did we start really playing against each other.

To give the child an advantage, don't play with fewer pieces, because that changes the structure of the game. Instead, give yourself a very short time limit, or deliberately make a mistake, so the child can learn to notice mistakes.

Have patience if some phase takes a lot of time. On stronger fundamentals, you can later build better. This is where I think our educational system makes great mistakes. Schools don't teach intensively, so children keep forgetting most of what they learned during the long gaps between lessons. And then, despite not having fully mastered the first step, they move on to the second one, etc.

3.4. Chess and psychology

Competitive chess helps develop personality: will, emotion, perseverance, self-discipline, focus, self-control. It develops intellectual skills: memory, combinational skill, logic, proper use of intuition. Understanding your opponent's weaknesses will help you.

People overestimate how much IQ tests determine talent. Measurements of people talented in different areas show that their average IQ is only a bit above the average of the population.

3.5. Emancipation of women

Some people say, incorrectly, that my daughter won the male chess championship. But officially there is no such thing as a "male chess championship"; there is simply the chess championship, open to both men and women. (And then there is a separate female chess championship, only for women, but that is considered the second league.)

I prepared the plan for my children before they were born. I didn't know I would have all girls, so I did not expect this special problem: the discrimination against women. I wanted to bring up my daughter Susan exactly according to the plan, but many people tried to prevent it; they insisted that she could not compete with boys, that she should only compete with girls. Thus my original goal of proving that you can bring up a genius indirectly became a goal of proving that there are no essential intellectual differences between men and women, and therefore that one can't use that argument as an excuse for the subjugation of women.

People kept telling me that I could only bring up Susan to be a female champion, not to compete with men. But I knew that during elementary school, girls can compete with boys. Only later, when they start playing the female role (when they are taught to clean the house, do the laundry, cook, follow fashion, pay attention to details of clothing, and try to get married as soon as possible), when they are expected to do different things than boys are, does this have a negative impact on developing their skills. But family duties and bringing up children can be done by both parents together.

Women can achieve the same results if they get similar conditions. I tried to provide them for my daughters, but I couldn't convince the whole society to treat them the same.

We know about differences between adult men and women, but we don't know whether they are caused by biology or by upbringing. And we know that, e.g. in mathematics and languages, girls progress at the same pace as boys during elementary and high school, and only later do the differences appear. This is evidence in favor of equality. We do not know what children growing up without discrimination would be like.

On the other hand, the current system also provides some advantages for women; for example, female chess players don't need to work that hard to become the (female) elite, and some of them don't want to give that up. Such women are among the greatest opponents of my daughters.

4. The meaning of this whole affair

4.1. Family value

I am certain that without a good family background the success of my daughters would not have been possible. It is important, before people marry, to have a clear idea of what they expect from their marriage. When partners cooperate, the mutual help, the shared experiences, the education of children, good habits, etc. can deepen their love. Children need a family without conflicts to feel safe. But of course, if the situation becomes too bad, divorce might become the way to reduce conflicts.

To bring up a genius, it is desirable for one parent to stay at home and take care of children. But it can be the father, too.

[Klára Polgár says:] When I met László, my first impression was that he was an interesting person full of ideas, but one should not believe even half of them.

When Susan was three and a half, László said it was time for her to specialize. She was good at math; at the age of four she had already learned the material of the first four grades. Once she found chess pieces in a box, and started playing with them as toys. László was spending a lot of time with her, and one day I was surprised to see them playing chess. László loved chess, but I never learned it.

So we could have chosen math or foreign languages, but we felt that Susan was really happy playing chess, and she started being good at it. But our parents and neighbors shook their heads: "Chess? For a girl?" People told me: "What kind of a mother are you? Why do you allow your husband to play chess with Susan?" I had my doubts, but now I believe we made the right choice.

People are concerned whether my children had a real childhood. I think they are at least as happy as their peers, probably more.

I always wanted to have a good, peaceful family life, and I believe I have achieved that. [End of Klára's part.]

4.2. Being a minority

It is generally known that Jewish people have achieved many excellent results in intellectual fields. Some ask whether the cause of this is biological or social. I believe it is social.

First, Jewish families are usually traditional and stable, and care a lot about education. They knew that they would be discriminated against and would have to work twice as hard, and that at any moment they might be forced to leave their home, or even their country, so their knowledge might be the only thing they would always be able to keep. Jewish religion requires parents to educate their children from early childhood; the Talmud requires parents to become the child's first teachers.

4.3. Witnesses of the genius education: the happy children

I care about the happiness of my children. But not only do I want to make them happy, I also want to develop their ability to be happy. And I think that being a genius is the most certain way. The life of a genius may be difficult, but happy anyway. On the other hand, average people, despite seemingly playing it safe, often become alcoholics, drug addicts, neurotics, loners, etc.

Some geniuses become unhappy with their profession. But even then I believe it is easier for a genius to change professions.

Happiness = work + love + freedom + luck

People worry that child geniuses lose their childhood. But the average childhood is actually not as great as people describe it; many people do not have a happy childhood. Parents want to make their children happy, but they often go about it wrong: they buy them expensive toys, but they don't prepare them for life; they outsource that responsibility to the school, which generally does not have the right conditions.

And when parents try to fully develop the capabilities of their children, instead of social support they usually get criticism. People will blame them for being overly ambitious, for pushing their children to achieve things they themselves failed at. I personally know people who tried to educate their children similarly to how we did, but the press launched a full-scale attack against them, and they gave up.

My daughters' lives are full of variety. They have met famous people: presidents, prime ministers, ambassadors, Princess Diana, millionaires, mayors, UN delegates, famous artists, and other Olympiad winners. They have appeared on television, on the radio, and in newspapers. They have traveled around the whole world and visited dozens of famous places. They have hobbies. They have friends in many parts of the world. And our house is always open to guests.

4.4. Make your life an ethical model

People reading this text may be surprised that, while they expected a rational explanation, I mention emotions and morality a lot. But those are necessary for a good life. Everyone should try to improve themselves in these aspects. The reason why I did not give up, despite all the obstacles and malice, is that for me, living morally and creating good is an internal law. I couldn't do otherwise. I already know that even writing this very book will provoke more attacks, but I am doing it regardless.

Morality is also a thing we are not born with; it needs to be taught to us, preferably in infancy. And we need to think about it, instead of expecting it to just happen. The schools fail in this, too. I see it as an integral part of bringing up a genius.

One should aim to be a paragon; to live in a way that will make others want to follow you. Learn and work a lot; expect a lot from yourself and from others. Give love, and receive love. Live in peace with yourself and your neighbors. Work hard to be happy, and to make other people happy. Be a humanist; fight against prejudice. Protect the peace of the family; bring up your children towards perfection. Be honest. Respect your own freedom and that of others. Trust humanity; support communities small and large. Etc.

(The book finishes by listing the achievements of the Polgár sisters, and with various photos of them: playing chess, doing sports. I'll simply link their Wikipedia pages: Susan, Sofia, Judit. I hope you enjoyed reading this experimental translation; if you think I omitted something important, feel free to add the missing parts in the comments. Note: I do believe that this book is generally correct and useful, but that doesn't mean I necessarily agree with every single detail. The opinions expressed here belong to the author; unless, of course, some of them got distorted by my hasty translation.)