Less Wrong is a community blog devoted to refining the art of human rationality. Please visit our About page for more information.

16 types of useful predictions

77 Julia_Galef 10 April 2015 03:31AM

How often do you make predictions (either about future events, or about information that you don't yet have)? If you're a regular Less Wrong reader you're probably familiar with the idea that you should make your beliefs pay rent by saying, "Here's what I expect to see if my belief is correct, and here's how confident I am," and that you should then update your beliefs accordingly, depending on how your predictions turn out.

And yet… my impression is that few of us actually make predictions on a regular basis. Certainly, for me, there has always been a gap between how useful I think predictions are, in theory, and how often I make them.

I don't think this is just laziness. I think it's simply not a trivial task to find predictions to make that will help you improve your models of a domain you care about.

At this point I should clarify that there are two main goals predictions can help with:

  1. Improved Calibration (e.g., realizing that I'm only correct about Domain X 70% of the time, not 90% of the time as I had mistakenly thought).
  2. Improved Accuracy (e.g., going from being correct in Domain X 70% of the time to being correct 90% of the time).

If your goal is just to become better calibrated in general, it doesn't much matter what kinds of predictions you make. So calibration exercises typically grab questions with easily obtainable answers, like "How tall is Mount Everest?" or "Will Don Draper die before the end of Mad Men?" See, for example, the Credence Game, Prediction Book, and this recent post. And calibration training really does work.

But even though making predictions about trivia will improve my general calibration skill, it won't help me improve my models of the world. That is, it won't help me become more accurate, at least not in any domains I care about. If I answer a lot of questions about the heights of mountains, I might become more accurate about that topic, but that's not very helpful to me.

So I think the difficulty in prediction-making is this: The set {questions whose answers you can easily look up, or otherwise obtain} is a small subset of all possible questions. And the set {questions whose answers you care about} is also a small subset of all possible questions. And the intersection between those two subsets is much smaller still, and not easily identifiable. As a result, prediction-making tends to seem too effortful, or not fruitful enough to justify the effort it requires.

But the intersection's not empty. It just requires some strategic thought to determine which answerable questions have some bearing on issues you care about, or -- approaching the problem from the opposite direction -- how to take issues you care about and turn them into answerable questions.

I've been making a concerted effort to hunt for members of that intersection. Here are 16 types of predictions that I personally use to improve my judgment on issues I care about. (I'm sure there are plenty more, though, and hope you'll share your own as well.)

  1. Predict how long a task will take you. This one's a given, considering how common and impactful the planning fallacy is. 
    Examples: "How long will it take to write this blog post?" "How long until our company's profitable?"
  2. Predict how you'll feel in an upcoming situation. Affective forecasting – our ability to predict how we'll feel – has some well-known flaws. 
    Examples: "How much will I enjoy this party?" "Will I feel better if I leave the house?" "If I don't get this job, will I still feel bad about it two weeks later?"
  3. Predict your performance on a task or goal. 
    One thing this helps me notice is when I've been trying the same kind of approach repeatedly without success. Even just the act of making the prediction can spark the realization that I need a better game plan.
    Examples: "Will I stick to my workout plan for at least a month?" "How well will this event I'm organizing go?" "How much work will I get done today?" "Can I successfully convince Bob of my opinion on this issue?" 
  4. Predict how your audience will react to a particular social media post (on Facebook, Twitter, Tumblr, a blog, etc.).
    This is a good way to hone your judgment about how to create successful content, as well as your understanding of your friends' (or readers') personalities and worldviews.
    Examples: "Will this video get an unusually high number of likes?" "Will linking to this article spark a fight in the comments?" 
  5. When you try a new activity or technique, predict how much value you'll get out of it.
    I've noticed I tend to be inaccurate in both directions in this domain. There are certain kinds of life hacks I feel sure are going to solve all my problems (and they rarely do). Conversely, I am overly skeptical of activities that are outside my comfort zone, and often end up pleasantly surprised once I try them.
    Examples: "How much will Pomodoros boost my productivity?" "How much will I enjoy swing dancing?"
  6. When you make a purchase, predict how much value you'll get out of it.
    Research on money and happiness shows two main things: (1) as a general rule, money doesn't buy happiness, and (2) there are a bunch of exceptions to this rule. So there seems to be lots of potential to improve your prediction skill here, and spend your money more effectively than the average person.
    Examples: "How much will I wear these new shoes?" "How often will I use my club membership?" "In two months, will I think it was worth it to have repainted the kitchen?" "In two months, will I feel that I'm still getting pleasure from my new car?"
  7. Predict how someone will answer a question about themselves.
    I often notice assumptions I've been making about other people, and I like to check those assumptions when I can. Ideally I get interesting feedback both about the object-level question, and about my overall model of the person.
    Examples: "Does it bother you when our meetings run over the scheduled time?" "Did you consider yourself popular in high school?" "Do you think it's okay to lie in order to protect someone's feelings?"
  8. Predict how much progress you can make on a problem in five minutes.
    I often have the impression that a problem is intractable, or that I've already worked on it and considered all of the obvious solutions. But then when I decide (or when someone prompts me) to sit down and brainstorm for five minutes, I am often surprised to come away with a promising new approach to the problem.
    Example: "I feel like I've tried everything to fix my sleep, and nothing works. If I sit down now and spend five minutes thinking, will I be able to generate at least one new idea that's promising enough to try?"
  9. Predict whether the data in your memory supports your impression.
    Memory is awfully fallible, and I have been surprised at how often I am unable to generate specific examples to support a confident impression of mine (or how often the specific examples I generate actually contradict my impression).
    Examples: "I have the impression that people who leave academia tend to be glad they did. If I try to list a bunch of the people I know who left academia, and how happy they are, what will the approximate ratio of happy/unhappy people be?"
    "It feels like Bob never takes my advice. If I sit down and try to think of examples of Bob taking my advice, how many will I be able to come up with?" 
  10. Pick one expert source and predict how they will answer a question.
    This is a quick shortcut to testing a claim or settling a dispute.
    Examples: "Will Cochrane Medical support the claim that Vitamin D promotes hair growth?" "Will Bob, who has run several companies like ours, agree that our starting salary is too low?" 
  11. When you meet someone new, take note of your first impressions of him. Predict how likely it is that, once you've gotten to know him better, you will consider your first impressions of him to have been accurate.
    A variant of this one, suggested to me by CFAR alum Lauren Lee, is to make predictions about someone before you meet him, based on what you know about him ahead of time.
    Examples: "All I know about this guy I'm about to meet is that he's a banker; I'm moderately confident that he'll seem cocky." "Based on the one conversation I've had with Lisa, she seems really insightful – I predict that I'll still have that impression of her once I know her better."
  12. Predict how your Facebook friends will respond to a poll.
    Example: I often post social etiquette questions on Facebook. I recently did a poll asking, "If a conversation is going awkwardly, does it make things better or worse for the other person to comment on the awkwardness?" I confidently predicted most people would say "worse," and I was wrong.
  13. Predict how well you understand someone's position by trying to paraphrase it back to him.
    The illusion of transparency is pernicious.
    Examples: "You said you think running a workshop next month is a bad idea; I'm guessing you think that's because we don't have enough time to advertise, is that correct?"
    "I know you think eating meat is morally unproblematic; is that because you think that animals don't suffer?"
  14. When you have a disagreement with someone, predict how likely it is that a neutral third party will side with you after the issue is explained to her.
    For best results, don't reveal which of you is on which side when you're explaining the issue to your arbiter.
    Example: "So, at work today, Bob and I disagreed about whether it's appropriate for interns to attend hiring meetings; what do you think?"
  15. Predict whether a surprising piece of news will turn out to be true.
    This is a good way to hone your bullshit detector and improve your overall "common sense" models of the world.
    Examples: "This headline says some scientists uploaded a worm's brain -- after I read the article, will the headline seem like an accurate representation of what really happened?"
    "This viral video purports to show strangers being prompted to kiss; will it turn out to have been staged?"
  16. Predict whether a quick online search will turn up any credible sources supporting a particular claim.
    Example: "Bob says that watches always stop working shortly after he puts them on – if I spend a few minutes searching online, will I be able to find any credible sources saying that this is a real phenomenon?"

I have one additional, general thought on how to get the most out of predictions:

Rationalists tend to focus on the importance of objective metrics. And as you may have noticed, a lot of the examples I listed above fail that criterion. For example, "Predict whether a fight will break out in the comments? Well, there's no objective way to say whether something officially counts as a 'fight' or not…" Or, "Predict whether I'll be able to find credible sources supporting X? Well, who's to say what a credible source is, and what counts as 'supporting' X?"

And indeed, objective metrics are preferable, all else equal. But all else isn't equal. Subjective metrics are much easier to generate, and they're far from useless. Most of the time it will be clear enough, once you see the results, whether your prediction basically came true or not -- even if you haven't pinned down a precise, objectively measurable success criterion ahead of time. Usually the result will be a common sense "yes," or a common sense "no." And sometimes it'll be "um...sort of?", but that can be an interestingly surprising result too, if you had strongly predicted the results would point clearly one way or the other. 

Along similar lines, I usually don't assign numerical probabilities to my predictions. I just take note of where my confidence falls on a qualitative "very confident," "pretty confident," "weakly confident" scale (which might correspond to something like 90%/75%/60% probabilities, if I had to put numbers on it).

There's probably some additional value you can extract by writing down quantitative confidence levels, and by devising objective metrics that are impossible to game, rather than just relying on your subjective impressions. But in most cases I don't think that additional value is worth the cost you incur from turning predictions into an onerous task. In other words, don't let the perfect be the enemy of the good. Or in other other words: the biggest problem with your predictions right now is that they don't exist.
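If you do want that extra value without making prediction-tracking onerous, the bookkeeping can stay very cheap. Here's a minimal sketch in Python of logging predictions on a qualitative scale and checking your hit rates; the label-to-probability mapping below is just the rough 90%/75%/60% correspondence suggested above, not anything canonical:

    # Minimal sketch: track predictions on a qualitative confidence scale.
    # The label-to-probability mapping is an assumption, echoing the rough
    # 90%/75%/60% correspondence mentioned in the text.
    from collections import defaultdict

    NOMINAL = {"very confident": 0.90, "pretty confident": 0.75,
               "weakly confident": 0.60}

    predictions = []  # resolved predictions, as (confidence_label, came_true)

    def record(label, came_true):
        """Log one resolved prediction, e.g. record("pretty confident", True)."""
        predictions.append((label, came_true))

    def report():
        """Compare each label's actual hit rate to its nominal probability."""
        buckets = defaultdict(list)
        for label, came_true in predictions:
            buckets[label].append(came_true)
        for label, outcomes in buckets.items():
            rate = sum(outcomes) / len(outcomes)
            print(f"{label}: nominal {NOMINAL[label]:.0%}, actual {rate:.0%} "
                  f"over {len(outcomes)} predictions")

A plain text file with one line per prediction gets you most of the same value; the point is only that the habit requires very little structure.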

Rationality: From AI to Zombies

74 RobbBB 13 March 2015 03:11PM

 

Eliezer Yudkowsky's original Sequences have been edited, reordered, and converted into an ebook!

Rationality: From AI to Zombies is now available in PDF, EPUB, and MOBI versions on intelligence.org. You can choose your own price to pay for it (minimum $0.00), or buy it for $4.99 from Amazon. The contents are:

  • 333 essays from Eliezer's 2006-2009 writings on Overcoming Bias and Less Wrong, including 58 posts that were not originally included in a named sequence.
  • 5 supplemental essays from yudkowsky.net, written between 2003 and 2008.
  • 6 new introductions by me, spaced throughout the book, plus a short preface by Eliezer.

The ebook's release has been timed to coincide with the end of Eliezer's other well-known introduction to rationality, Harry Potter and the Methods of Rationality. The two share many similar themes, and although Rationality: From AI to Zombies is (mostly) nonfiction, it is decidedly unconventional nonfiction, freely drifting in style from cryptic allegory to personal vignette to impassioned manifesto.

The 333 posts have been reorganized into twenty-six sequences, lettered A through Z. In order, these are titled:

  • A — Predictably Wrong
  • B — Fake Beliefs
  • C — Noticing Confusion
  • D — Mysterious Answers
  • E — Overly Convenient Excuses
  • F — Politics and Rationality
  • G — Against Rationalization
  • H — Against Doublethink
  • I — Seeing with Fresh Eyes
  • J — Death Spirals
  • K — Letting Go
  • L — The Simple Math of Evolution
  • M — Fragile Purposes
  • N — A Human's Guide to Words
  • O — Lawful Truth
  • P — Reductionism 101
  • Q — Joy in the Merely Real
  • R — Physicalism 201
  • S — Quantum Physics and Many Worlds
  • T — Science and Rationality
  • U — Fake Preferences
  • V — Value Theory
  • W — Quantified Humanism
  • X — Yudkowsky's Coming of Age
  • Y — Challenging the Difficult
  • Z — The Craft and the Community

Several sequences and posts have been renamed, so you'll need to consult the ebook's table of contents to spot all the correspondences. Four of these sequences (marked in bold) are almost completely new. They were written at the same time as Eliezer's other Overcoming Bias posts, but were never ordered or grouped together. Some of the others (A, C, L, S, V, Y, Z) have been substantially expanded, shrunk, or rearranged, but are still based largely on old content from the Sequences.

One of the most common complaints about the old Sequences was that there was no canonical default order, especially for people who didn't want to read the entire blog archive chronologically. Despite being called "sequences," their structure looked more like a complicated, looping web than like a line. With Rationality: From AI to Zombies, it will still be possible to hop back and forth between different parts of the book, but this will no longer be required for basic comprehension. The contents have been reviewed for consistency and in-context continuity, so that they can genuinely be read in sequence. You can simply read the book as a book.

I have also created a community-edited Glossary for Rationality: From AI to Zombies. You're invited to improve on the definitions and explanations there, and add new ones if you think of any while reading. When we release print versions of the ebook (as a six-volume set), a future version of the Glossary will probably be included.

Announcing the Complice Less Wrong Study Hall

49 malcolmocean 02 March 2015 11:37PM

(If you're familiar with the backstory of the LWSH, you can skip to paragraph 5. If you just want the link to the chat, click here: LWSH on Complice)

The Less Wrong Study Hall was created as a tinychat room in March 2013, following Mqrius and ShannonFriedman's desire to create a virtual context for productivity. In retrospect, I think it's hilarious that a bunch of the comments ended up being a discussion of whether LW had the numbers to get a room that consistently had someone in it. The funny part is that those doubts were based on the assumption that people would spend about an hour a day in it.

Once it was created, it was so effective that people started spending their entire day doing pomodoros (32 minutes work + 8 minutes break) in the LWSH, and now often even stay logged in while doing chores away from their computers, just for the cadence of focus and the sense of company. So there's almost always someone there, and often 5-10 people.

A week in, a call was put out for volunteers to program a replacement for the much-maligned tinychat. As it turns out though, video chat is a hard problem.

So nearly 2 years later, people are still using the tinychat.

But a few weeks ago, I discovered that you can embed the tinychat applet into an arbitrary page. I immediately set out to integrate LWSH into Complice, the productivity app I've been building for over a year, which counts many rationalists among its alpha & beta users.

The focal point of Complice is its today page, which consists of a list of everything you're planning to accomplish that day, colorized by goal. Plus a pomodoro timer. My habit for a long time has been to have this open next to LWSH. So what I basically did was integrate these two pages. On the left, you have a list of your own tasks. On the right, a list of other users in the room, with whatever task they're doing next. Then below all of that, the chatroom.

(Something important to note: I'm not planning to point existing Complice users, who may not be LWers, at the LW Study Hall. Any Complice user can create their own coworking room by going to complice.co/createroom)

With this integration, I've solved many of the core problems that people wanted addressed for the study hall:

  • an actual ding sound beyond people typing in the chat
  • synchronized pomodoro time visibility
  • pomos that automatically start, so breaks don't run over
  • Intentions — what am I working on this pomo?
  • a list of what other users are working on
  • the ability to show off how many pomos you've done
  • better welcoming & explanation of group norms

There are a couple other requested features that I can definitely solve but decided could come after this launch:

  • rooms with different pomodoro durations
  • member profiles
  • the ability to precommit to showing up at a certain time (maybe through Beeminder?!)

The following points were brought up in the Programming the LW Study Hall post or on the List of desired features on the github/nnmm/lwsh wiki, but can't be fixed without replacing tinychat:

  • efficient with respect to bandwidth and CPU
  • page layout with videos lined up down the left for use on the side of monitors
  • chat history
  • encryption
  • everything else that generally sucks about tinychat

It's also worth noting that if you were to think of the entirety of Complice as an addition to LWSH... well, it would definitely look like feature creep, but at any rate there would be several other notable improvements:

  • daily emails prompting you to decide what you're going to do that day
  • a historical record of what you've done, with guided weekly, monthly, and yearly reviews
  • optional accountability partner who gets emails with what you've done every day (the LWSH might be a great place to find partners!)

So, if you haven't clicked the link already, check out: complice.co/room/lesswrong

(This article posted to Main because that's where the rest of the LWSH posts are, and this represents a substantial update.)

Don't Be Afraid of Asking Personally Important Questions of Less Wrong

46 Evan_Gaensbauer 17 March 2015 06:54AM

Related: LessWrong as a social catalyst

I primarily used my prior user profile to ask questions of Less Wrong. When I had an inkling for a query but didn't yet have a fully formed hypothesis, I often didn't know how to search for answers on the Internet myself, so I asked on Less Wrong.

The reception I have received has been mostly positive. Here are some examples:

  • Back when I was trying to figure out which college major to pursue, I queried Less Wrong about which one was worth my effort. I followed this up with a discussion about whether it was worthwhile for me personally, and for someone in general, to pursue graduate studies.


Other student users of Less Wrong benefit from the insight of peers who are further along in their careers:

  • A friend of mine was considering pursuing medicine to earn to give. In the same vein as my own discussion, I suggested he pose the question to Less Wrong. He didn't feel like it at first, so I posed the query on his behalf. In a few days, he received feedback which returned the conclusion that pursuing medical school through the avenues he was aiming for wasn't his best option relative to his other considerations. He showed up in the thread and expressed his gratitude. Everyone in the online rationalist community who was willing to respond provided valuable information for an important question. It might have taken him lots of time, attention, and effort to look for the answers to this question by himself.

In engaging with Less Wrong, with the rest of you, my experience has been that Less Wrong isn't just useful as an archive of blog posts, but is actively useful as a community of people. As weird as it may seem, you can generate positive externalities that improve the lives of others merely by writing a blog post. This extends to responding in the comments section too. Stupid Questions Threads are a great example of this; you can ask questions about your procedural knowledge gaps without fear of reprisal. People have gotten great responses on everything from getting more value out of conversations, to being more socially successful, to learning and appreciating music as an adult. Less Wrong may be one of few online communities for which even the comments sections are useful by default.

Even though the above examples weren't the most popular discussions ever started, and likely didn't get as much traffic, the feedback they received made them more personally valuable to one individual than many better-trafficked threads.

At the CFAR workshop I attended, I was taught two relevant skills:

* Value of Information Calculations: formulating a question well, and performing a Fermi estimate, or back-of-the-envelope calculation, in an attempt to answer it, generates quantified insight you wouldn't have otherwise anticipated.

* Social Comfort Zone Expansion: humans tend to be more averse to trying new things socially than is optimal, and one way of viscerally teaching System 1 this lesson is through trial and error at taking small risks. Posting on Less Wrong, especially, e.g., in a special thread, is really a low-risk action. The pang of losing karma can feel real, but losing karma is a valuable signal that one should try again differently. Also, it's not as bad as failing at taking risks in meatspace.

When I've received downvotes for a comment, I interpret that as useful information, try to model what I did wrong, and thank others for correcting my confused thinking. If you're worried about writing something embarrassing, that's understandable, but realize it's a fact about your untested anticipations, not a fact about everyone else using Less Wrong. There are dozens of brilliant people with valuable insights at the ready, reading Less Wrong for fun, and who like helping us answer our own personal questions. Users shminux and Carl Shulman are exemplars of this.

This isn't an issue for all users, but I feel as if not enough users are taking advantage of the personal value they can get by asking more questions. This post is intended to encourage them. User Gunnar Zarnacke suggested that if enough examples of experiences like this were accrued, they could be transformed into some sort of repository of personal value from Less Wrong.

Calibration Test with database of 150,000+ questions

34 Nanashi 14 March 2015 11:22AM

Hi all, 

I put this calibration test together this morning. It pulls from a trivia API of over 150,000 questions so you should be able to take this many, many times before you start seeing repeats.

http://www.2pih.com/caltest.php

A few notes:

1. The questions are "Jeopardy"-style questions, so the wording may be strange, and some of them might be impossible to answer without further context. On these, just assign 0% confidence.

2. As the questions are open-ended, there is no answer-checking mechanism. You have to be honest with yourself as to whether or not you got the right answer. Because what would be the point of cheating at a calibration test?
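For the curious, the calibration curve a test like this plots is easy to compute from those self-scored answers. Here's a minimal sketch in Python, as an illustration of the general technique rather than this site's actual code:

    # Minimal sketch of a calibration curve: for each stated confidence
    # level, what fraction of answers were actually correct? A perfectly
    # calibrated player's points lie on the diagonal y = x.
    from collections import defaultdict

    def calibration_curve(results):
        """results: list of (stated_confidence, was_correct) pairs."""
        by_level = defaultdict(list)
        for confidence, was_correct in results:
            by_level[confidence].append(was_correct)
        return {level: sum(v) / len(v)
                for level, v in sorted(by_level.items())}

    # Example: right half the time at 70% confidence means overconfidence
    # at that level.
    print(calibration_curve([(0.7, True), (0.7, False),
                             (0.9, True), (0.9, True)]))
    # {0.7: 0.5, 0.9: 1.0}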

I can't think of anything else. Please let me know if there are any features you would want to see added, or if there are any bugs, issues, etc. 

 

EDIT:

As per suggestion, I have moved this to Main. Here are the changes I'll be making soon:

  • Label the axes and include an explanation of calibration curves.
  • Make it so you can reverse your last selection in the event of a misclick.

Here are changes I'll make eventually:

  • Create an account system so you can store your results online.
  • Move trivia DB over to my own server to allow for flagging of bad/unanswerable questions.

 

Here are the changes that are done:

  • Changed 0% to 0.1% and 99% to 99.9%.
  • Added a second graph showing the frequency of your confidence selections.
  • Color-coded the "right" and "wrong" buttons and moved them farther apart to prevent misclicks.
  • Added local storage of your results so that you can see your calibration over time.
  • Added a check that detects blank questions and skips them.

New forum for MIRI research: Intelligent Agent Foundations Forum

33 orthonormal 20 March 2015 12:35AM

Today, the Machine Intelligence Research Institute is launching a new forum for research discussion: the Intelligent Agent Foundations Forum! It's already been seeded with a bunch of new work on MIRI topics from the last few months.

We've covered most of the (what, why, how) subjects on the forum's new welcome post and the How to Contribute page, but this post is an easy place to comment if you have further questions (or if, maths forbid, there are technical issues with the forum instead of on it).

But before that, go ahead and check it out!

(Major thanks to Benja Fallenstein, Alice Monday, and Elliott Jin for their work on the forum code, and to all the contributors so far!)

EDIT 3/22: Jessica Taylor, Benja Fallenstein, and I wrote forum digest posts summarizing and linking to recent work (on the IAFF and elsewhere) on reflective oracle machines, on corrigibility, utility indifference, and related control ideas, and on updateless decision theory and the logic of provability, respectively! These are pretty excellent resources for reading up on those topics, in my biased opinion.

Twenty basic rules for intelligent money management

30 James_Miller 19 March 2015 05:57PM

1.  Start investing early in life.
 

The power of compound interest means you will have much more money at retirement if you start investing early in your career.  For example, imagine that at age eighteen you invest $1,000 and earn an 8% return per year.  At age seventy you will have $54,706.  In contrast, if you make the same investment at age fifty you will have a paltry $4,661 when you turn seventy.
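The arithmetic behind those figures is just the future-value formula for annual compounding, FV = PV * (1 + r)^n. A few lines of Python confirm both numbers:

    # Future value under annual compounding: FV = PV * (1 + r) ** years.
    def future_value(principal, rate, years):
        return principal * (1 + rate) ** years

    print(round(future_value(1000, 0.08, 70 - 18)))  # 54706: invested at 18
    print(round(future_value(1000, 0.08, 70 - 50)))  # 4661: invested at 50

Doubling the time horizon does far more than double the outcome, which is the whole argument for starting early.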
 
Many people who haven't saved for retirement panic upon reaching middle age. So if you are young, don't think that saving today will help you only when you retire, but know that such savings will give you greater peace of mind when you turn forty.
 
When evaluating potential marriage partners, give bonus points to those who have a history of saving. Do this not because you want to marry into wealth, but because you should want to marry someone who has discipline, intelligence, and foresight.
 
 

continue reading »

Is Scott Alexander bad at math?

28 JonahSinick 04 May 2015 05:11AM

This post is the third installment in the sequence that I started with The Truth About Mathematical Ability and Innate Mathematical Ability. In it, I begin to discuss the role of aesthetics in math.

There was strong interest in the first two posts in my sequence, and I apologize for the long delay. The reason for it is that I've accumulated hundreds of pages of relevant material in draft form, and have struggled with how to organize such a large body of material. I still don't know what's best, but since people have been asking, I decided to continue posting on the subject, even if I don't have my thoughts as organized as I'd like. I'd greatly welcome and appreciate any comments, but I won't have time to respond to them individually, because I already have my hands full with putting my hundreds of pages of writing in public form.

continue reading »

Desire is the direction, rationality is the magnitude

24 So8res 05 April 2015 05:27PM

What follows is a series of four short essays that say explicitly some things that I would tell an intrigued proto-rationalist before pointing them towards Rationality: From AI to Zombies (and, by extension, most of LessWrong). For most people here, these essays will be very old news, as they talk about the insights that come even before the sequences. However, I've noticed recently that a number of fledgling rationalists haven't actually been exposed to all of these ideas, and there is power in saying the obvious.

This essay is cross-posted on MindingOurWay.


A brief note on "rationality:"

It's a common trope that thinking can be divided up into "hot, emotional thinking" and "cold, rational thinking" (with Kirk and Spock being the stereotypical offenders, respectively). The tropes say that the hot decisions are often stupid (and inconsiderate of consequences), while the cold decisions are often smart (but made by the sort of disconnected nerd that wears a lab coat and makes wacky technology). Of course (the trope goes) there are Deep Human Truths available to the hot reasoners that the cold reasoners know not.

Many people, upon encountering one who says they study the art of human rationality, jump to the conclusion that these "rationalists" are people who reject the hot reasoning entirely, attempting to disconnect themselves from their emotions once and for all, in order to avoid the rash mistakes of "hot reasoning." Many think that these aspiring rationalists are attempting some sort of dark ritual to sacrifice emotion once and for all, while failing to notice that the emotions they wish to sacrifice are the very things which give them their humanity. "Love is hot and rash and irrational," they say, "but you sure wouldn't want to sacrifice it." Understandably, many people find the prospect of "becoming more rational" rather uncomfortable.

So heads up: this sort of emotional sacrifice has little to do with the word "rationality" as it is used in Rationality: From AI to Zombies.

When Rationality: From AI to Zombies talks about "rationality," it's not talking about the "cold" part of hot vs. cold reasoning; it's talking about the reasoning part.

One way or another, we humans are reasoning creatures. Sometimes, when time pressure is bearing down on us, we make quick decisions and follow our split-second intuitions. Sometimes, when the stakes are incredibly high and we have time available, we deploy the machinery of logic, in places where we trust it more than our impulses. But in both cases, we are reasoning. Whether our reasoning be hot or cold or otherwise, there are better and worse ways to reason.

(And, trust me, brains have found a whole lot of the bad ones. What do you expect, when you run programs that screwed themselves into existence on computers made of meat?)

The rationality of Rationality: From AI to Zombies isn't about using cold logic to choose what to care about. Reasoning well has little to do with what you're reasoning towards. If your goal is to enjoy life to the fullest and love without restraint, then better reasoning (whether hot or cold, rushed or relaxed) will help you do so. But if your goal is to annihilate as many puppies as possible, then this-kind-of-rationality will also help you annihilate more puppies.

(Unfortunately, this usage of the word "rationality" does not match the colloquial usage. I wish we had a better word for the study of how to improve one's reasoning in all its forms that didn't also evoke images of people sacrificing their emotions on the altar of cold logic. But alas, that ship has sailed.)

If you are considering walking the path towards rationality-as-better-reasoning, then please, do not sacrifice your warmth. Your deepest desires are not a burden, but a compass. Rationality of this kind is not about changing where you're going, it's about changing how far you can go.

People often label their deepest desires "irrational." They say things like "I know it's irrational, but I love my partner, and if they were taken from me, I'd move heaven and earth to get them back." To which I say: when I point towards "rationality," I point not towards that which would rob you of your desires, but rather towards that which would make you better able to achieve them.

That is the sort of rationality that I suggest studying, when I recommend reading Rationality: From AI to Zombies.

Sapiens

23 Vaniver 08 April 2015 02:56AM

 

This is a section-by-section summary and review of Sapiens: A Brief History of Humankind by Yuval Noah Harari. It's come up on Less Wrong before in the context of Death is Optional, a conversation the author had with Daniel Kahneman about the book, and seems like an accessible introduction to many of the concepts underlying the LW perspective on history and the future. Anyone who's thought about Moloch will find many of the same issues discussed here, and so I'll scatter links to Yvain throughout. I'll discuss several of the points that I thought were interesting and novel, or at least had a novel perspective and good presentation.

A history as expansive as this one necessarily involves operating on higher levels of abstraction. The first section expresses this concisely enough to quote in full:

About 13.5 billion years ago, matter, energy, time and space came into being in what is known as the Big Bang. The story of these fundamental features of our universe is called physics.

About 300,000 years after their appearance, matter and energy started to coalesce into complex structures, called atoms, which then combined into molecules. The story of atoms, molecules and their interactions is called chemistry.

About 3.8 billion years ago, on a planet called Earth, certain molecules combined to form particularly large and intricate structures called organisms. The story of organisms is called biology.

About 70,000 years ago, organisms belonging to the species Homo sapiens started to form even more elaborate structures called cultures. The subsequent development of these human cultures is called history.

Three important revolutions shaped the course of history: the Cognitive Revolution kick-started history about 70,000 years ago. The Agricultural Revolution sped it up about 12,000 years ago. The Scientific Revolution, which got under way only 500 years ago, may well end history and start something completely different. This book tells the story of how these three revolutions have affected humans and their fellow organisms.

continue reading »
