Meetup : First Meetup- Cleveland and Akron, Ohio

3 [deleted] 28 October 2012 11:03PM

Discussion article for the meetup : First Meetup- Cleveland and Akron, Ohio

WHEN: 17 November 2012 08:03:00PM (-0400)

WHERE: Cleveland, OH

I'm posting on behalf of someone who got in touch with me on the Ohio listserv. He is looking for people active in the Cleveland or Akron area. Right now, OHLW is only active in Cincinnati, Dayton, and Columbus.

If you are in Cleveland or Akron, or attend school in the area, please leave a comment here, or PM me. We will set a date and location once we figure out who is interested.

Thank you!


The Useful Idea of Truth

77 Eliezer_Yudkowsky 02 October 2012 06:16PM

(This is the first post of a new Sequence, Highly Advanced Epistemology 101 for Beginners, setting up the Sequence Open Problems in Friendly AI.  For experienced readers, this first post may seem somewhat elementary; but it serves as a basis for what follows.  And though it may be conventional in standard philosophy, the world at large does not know it, and it is useful to know a compact explanation.  Kudos to Alex Altair for helping in the production and editing of this post and Sequence!)


I remember this paper I wrote on existentialism. My teacher gave it back with an F. She’d underlined true and truth wherever it appeared in the essay, probably about twenty times, with a question mark beside each. She wanted to know what I meant by truth.
-- Danielle Egan

I understand what it means for a hypothesis to be elegant, or falsifiable, or compatible with the evidence. It sounds to me like calling a belief ‘true’ or ‘real’ or ‘actual’ is merely the difference between saying you believe something, and saying you really really believe something.
-- Dale Carrico

What then is truth? A movable host of metaphors, metonymies, and anthropomorphisms: in short, a sum of human relations which have been poetically and rhetorically intensified, transferred, and embellished, and which, after long usage, seem to a people to be fixed, canonical, and binding.
-- Friedrich Nietzsche


The Sally-Anne False-Belief task is an experiment used to tell whether a child understands the difference between belief and reality. It goes as follows:

  1. The child sees Sally hide a marble inside a covered basket, as Anne looks on.

  2. Sally leaves the room, and Anne takes the marble out of the basket and hides it inside a lidded box.

  3. Anne leaves the room, and Sally returns.

  4. The experimenter asks the child where Sally will look for her marble.

Children under the age of four say that Sally will look for her marble inside the box. Children over the age of four say that Sally will look for her marble inside the basket.


Biased Pandemic

56 freyley 13 March 2012 11:32PM

Recently, Portland LessWrong played a game that was a perfect trifecta: a difficult mental exercise, fun, and an opportunity to learn about biases and recognize them in yourself and others. We're still perfecting it, and we'd welcome feedback, especially from people who try it.

The Short Version

The game is a combination of Pandemic, a cognitively demanding cooperative board game, and the idea of roleplaying cognitive biases. In our favorite way of playing it (so far), everyone selects a bias at random and then attempts to exaggerate that bias in their arguments and decisions during the game. Everyone attempts to identify the biases of the other players, and when a bias is guessed, the guessed player selects a new bias and begins again.


Meetup : Ohio Monthly

2 [deleted] 23 February 2012 11:14PM

Discussion article for the meetup : Ohio Monthly

WHEN: 17 March 2012 03:00:00PM (-0500)

WHERE: 123 Gano Road, Wilmington, OH 45177

Time for the third Ohio monthly meetup! As always, it will be the third Sunday (March 18), from 4p-8p. This month, we MAY have a different location, for Rolf to present his thesis work. Otherwise, we will be in the back room of Max and Erma's like usual.

We have picked five Sequences posts to focus on:

  1. Making Beliefs Pay Rent

  2. Belief in Belief

  3. Bayesian Judo

  4. Professing and Cheering

  5. Belief as Attire

It's a good idea to at least skim these, even if you've already read them. We will also probably continue conversation on: attracting and retaining new people, meta discussion, and how to make entrance to the LessWrong memeplex as painless as possible. As always, there are more frequent, but irregularly scheduled, LessWrong meetups in both Columbus and Cincinnati. Please join our mailing list to keep updated on those!


Longevity Insurance

20 canadaduane 20 February 2012 12:30AM

Let's say we (as a country) ban life insurance and health insurance as separate packages [1] and require them to be combined in something I'll call "Longevity Insurance".  The idea is that as a person/consumer, you can buy a "life expectancy" of 75 years, or 90 years, or whatever. In addition, you specify a maximum dollar amount that the longevity insurance will ever pay out--say, $2 million. If you have any medical issues throughout your life, up to the life expectancy threshold, the insurance plan will pay for your expenses. If it fails to keep you consciously alive for the duration of your "life expectancy", then upon your death, the policy guarantees that the company will pay the full remaining amount to your next of kin.

As an example, suppose you (let's say you're a woman) had purchased a 75-year policy, but you had a car accident. The paramedics tried to save you, and the hospital bill came to $100k, but even after that noble effort, you still died. As a result, your husband and children get $1.9M. Alternatively, if in our hypothetical situation the paramedics succeed in resuscitating you, the company keeps the $1.9M for future medical bills, and, if it fulfills its promise of life expectancy, it pockets the remainder as profit on your 75th birthday.
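The payout rule in the example above can be sketched in a few lines of code. This is a hypothetical illustration of the scheme as described, not a real insurance product; the function name and the $2M/$100k figures are taken from the article's own example.

```python
# Hypothetical sketch of the Longevity Insurance death payout described above.
# The policy has a lifetime dollar cap; medical expenses already paid by the
# insurer are deducted from what the next of kin receive on early death.

def death_payout(cap, medical_expenses_paid):
    """If the insured dies before the promised life expectancy, the policy
    pays next of kin the cap minus medical expenses already paid out."""
    return max(cap - medical_expenses_paid, 0)

# The article's example: a $2M cap, $100k spent on the failed rescue attempt.
print(death_payout(cap=2_000_000, medical_expenses_paid=100_000))
# 1900000, i.e. the $1.9M paid to the family
```

Note that the `max(..., 0)` floor encodes the natural assumption that a policy whose medical payouts have already exhausted the cap owes nothing further at death.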

It seems like this arrangement would put all of the right incentives [2] in place for both companies and individuals. Most individuals would want to avoid trivial medical expenses in order to maximize payout to family in case of accidental death. Companies would want to maximize health and longevity in order to profit from the end-of-life payout. And our society would have a way to rationally consider the value of life without resorting to arguments that essentially conclude "life is of infinite value," and in doing so, prevent sensible gerontological triage. To put it into perspective, it makes little sense that we spend $1M (as a society) trying to save a 92-year-old when that same amount could have saved 10 teenagers.

Longevity Insurance companies would be incentivized to become heavily involved in medical research that prevents disease, prolongs life, and keeps people healthy. I can imagine a whole array of things that make sense in this context. For example, it would be the right place to fund studies on genetics, it could be the right vehicle for getting 'free' immunizations, and it could even make public funding for "health insurance" easier to pass--simply set the bar low enough that everyone can agree on an age that society will extend a policy for. Do we all agree that everyone in our society should live to age 50? Super! The government will cover Longevity Insurance up to age 50.

[1] We could also just allow Longevity Insurance as a free-market alternative, but for the sake of argument, let's ban its competitors.

[2] The one incentive that Longevity Insurance does not seem to address well is the possibility of next-of-kin killing their loved one just prior to the end of an insurance policy. One option would be to require a one-year moratorium in the case where someone dies within a year of their policy ending. This would give time for an investigation before awarding large sums of money.

* crosspost from my blog, http://halfcupofsugar.com/longevity-insurance

 

My Algorithm for Beating Procrastination

81 lukeprog 10 February 2012 02:48AM

Part of the sequence: The Science of Winning at Life

After three months of practice, I now use a single algorithm to beat procrastination most of the times I face it.1 It probably won't work for you quite like it did for me, but it's the best advice on motivation I've got, and it's a major reason I'm known for having the "gets shit done" property. There are reasons to hope that we can eventually break the chain of akrasia; maybe this post is one baby step in the right direction.

How to Beat Procrastination explained our best current general theory of procrastination, called "temporal motivation theory" (TMT). As an exercise in practical advice backed by deep theories, this post explains the process I use to beat procrastination — a process implied by TMT.

As a reminder, here's a rough sketch of how motivation works according to TMT:

The procrastination equation: Motivation = (Expectancy × Value) / (Impulsiveness × Delay)

Or, as Piers Steel summarizes:

Decrease the certainty or the size of a task's reward — its expectancy or its value — and you are unlikely to pursue its completion with any vigor. Increase the delay for the task's reward and our susceptibility to delay — impulsiveness — and motivation also dips.
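Steel's summary can be made concrete with a small numerical sketch of the TMT formula. The variable names and the specific numbers below are illustrative assumptions, not part of the theory itself:

```python
# Illustrative sketch of temporal motivation theory (TMT).
# Motivation = (Expectancy * Value) / (Impulsiveness * Delay)

def motivation(expectancy, value, impulsiveness, delay):
    """Piers Steel's procrastination equation: motivation rises with the
    expected value of the reward, and falls with impulsiveness and delay."""
    return (expectancy * value) / (impulsiveness * delay)

# A task you're confident about, value highly, and that is due soon:
near = motivation(expectancy=0.9, value=10, impulsiveness=1.0, delay=1)

# The same task with a deadline ten times further away:
far = motivation(expectancy=0.9, value=10, impulsiveness=1.0, delay=10)

print(near, far)  # near > far: distant rewards motivate less
```

As the quote says, shrinking expectancy or value lowers the numerator, while raising impulsiveness or delay grows the denominator; either change drags motivation down.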

Of course, my motivation system is more complex than that. P.J. Eby likens TMT (as a guide for beating procrastination) to the "fuel, air, ignition, and compression" plan for starting your car: it might be true, but a more useful theory would include details and mechanism.

That's a fair criticism. Just as an fMRI captures the "big picture" of brain function at low resolution, TMT captures the big picture of motivation. This big picture helps us see where we need to work at the gears-and-circuits level, so we can become the goal-directed consequentialists we'd like to be.

So, I'll share my four-step algorithm below, and tackle the gears-and-circuits level in later posts.


Elevator pitches/responses for rationality / AI

17 lukeprog 02 February 2012 08:35PM

I'm trying to develop a large set of elevator pitches / elevator responses for the two major topics of LW: rationality and AI.

An elevator pitch lasts 20-60 seconds, and is not necessarily prompted by anything, or at most is prompted by something very vague like "So, I heard you talking about 'rationality'. What's that about?"

An elevator response is a 20-60 second, highly optimized response to a commonly heard sentence or idea, for example, "Science doesn't know everything."

 

Examples (but I hope you can improve upon them):

 

"So, I hear you care about rationality. What's that about?"

Well, we all have beliefs about the world, and we use those beliefs to make decisions that we think will bring us the most of what we want. What most people don't realize is that there is a mathematically optimal way to update your beliefs in response to evidence, and a mathematically optimal way to figure out which decision is most likely to bring you the most of what you want, and these methods are defined by probability theory and decision theory. Moreover, cognitive science has discovered a long list of predictable mistakes our brains make when forming beliefs and making decisions, and there are particular things we can do to improve our beliefs and decisions. [This is the abstract version; probably better to open with a concrete and vivid example.]

"Science doesn't know everything."

As the comedian Dara O'Briain once said, science knows it doesn't know everything, or else it'd stop. But just because science doesn't know everything doesn't mean you can use whatever theory most appeals to you. Anybody can do that, and use whatever crazy theory they want.

"But you can't expect people to act rationally. We are emotional creatures."

But of course. Expecting people to be rational is irrational. If you expect people to usually be rational, you're ignoring an enormous amount of evidence about how humans work.

"But sometimes you can't wait until you have all the information you need. Sometimes you need to act right away."

But of course. You have to weigh the cost of new information with the expected value of that new information. Sometimes it's best to just act on the best of what you know right now.

"But we have to use intuition sometimes. And sometimes, my intuitions are pretty good!"

But of course. We even have lots of data on which situations are conducive to intuitive judgment, and which ones are not. And sometimes, it's rational to use your intuition because it's the best you've got and you don't have time to write out a bunch of probability calculations.

"But I'm not sure an AI can ever be conscious."

That won't keep it from being "intelligent" in the sense of being very good at optimizing the world according to its preferences. A chess computer is great at optimizing the chess board according to its preferences, and it doesn't need to be conscious to do so.

 

Please post your own elevator pitches and responses in the comments, and vote for your favorites!

 

Making Beliefs Pay Rent (in Anticipated Experiences)

110 Eliezer_Yudkowsky 28 July 2007 10:59PM

Thus begins the ancient parable:

If a tree falls in a forest and no one hears it, does it make a sound? One says, "Yes it does, for it makes vibrations in the air." Another says, "No it does not, for there is no auditory processing in any brain."

Suppose that, after the tree falls, the two walk into the forest together. Will one expect to see the tree fallen to the right, and the other expect to see the tree fallen to the left? Suppose that before the tree falls, the two leave a sound recorder next to the tree. Would one, playing back the recorder, expect to hear something different from the other? Suppose they attach an electroencephalograph to any brain in the world; would one expect to see a different trace than the other? Though the two argue, one saying "No," and the other saying "Yes," they do not anticipate any different experiences.  The two think they have different models of the world, but they have no difference with respect to what they expect will happen to them.


How I Ended Up Non-Ambitious

113 Swimmer963 23 January 2012 11:50PM

I have a confession to make. My life hasn’t changed all that much since I started reading Less Wrong. Hindsight bias makes it hard to tell, I guess, but I feel like pretty much the same person, or at least the person I would have evolved towards anyway, whether or not I spent those years reading about the Art of rationality.

But I can’t claim to be upset about it either. I can’t say that rationality has undershot my expectations. I didn’t come to Less Wrong expecting, or even wanting, to become the next Bill Gates; I came because I enjoyed reading it, just like I’ve enjoyed reading hundreds of books and websites. 

In fact, I can’t claim that I would want my life to be any different. I have goals and I’m meeting them: my grades are good, my social skills are slowly but steadily improving, I get along well with my family, my friends, and my boyfriend. I’m in good shape financially despite making $12 an hour as a lifeguard, and in a year and a half I’ll be making over $50,000 a year as a registered nurse. I write stories, I sing in church, I teach kids how to swim. Compared to many people my age, I'm pretty successful. In general, I’m pretty happy.

Yvain suggested akrasia as a major limiting factor for why rationalists fail to have extraordinarily successful lives. Maybe that's true for some people; maybe there are some readers and posters on LW who have big, exciting, challenging goals that they consistently fail to reach because they lack motivation and procrastinate. But that isn't true for me. Though I can't claim to be totally free of akrasia, it hasn't gotten much in the way of my goals. 

However, there are some assumptions that go too deep to be accessed by introspection, or even by LW meetup discussions. Sometimes you don't even realize they’re assumptions until you meet someone who assumes the opposite, and try to figure out why they make you so defensive. At the community meetup I described in my last post, a number of people asked me why I wasn’t studying physics, since I was obviously passionate about it. Trust me, I had plenty of good justifications for them–it’s a question I’ve been asked many times–but the question itself shouldn’t have made me feel attacked, and it did.

Aside from people in my life, there are some posts on Less Wrong that cause the same reaction of defensiveness. Eliezer’s Mandatory Secret Identities is a good example; my automatic reaction was “well, why do you assume everyone here wants to have a super cool, interesting life? In fact, why do you assume everyone wants to be a rationality instructor? I don’t. I want to be a nurse.”

After a bit of thought, I’ve concluded that there’s a simple reason why I’ve achieved all my life goals so far (and why learning about rationality failed to affect my achievements): they’re not hard goals. I’m not ambitious. As far as I can tell, not being ambitious is such a deep part of my identity that I never even noticed it, though I’ve used the underlying assumptions as arguments for why my goals and life decisions were the right ones.


Extreme Rationality: It's Not That Great

140 Yvain 09 April 2009 02:44AM

Related to: Individual Rationality is a Matter of Life and Death, The Benefits of Rationality, Rationality is Systematized Winning
But I finally snapped after reading: Mandatory Secret Identities

Okay, the title was for shock value. Rationality is pretty great. Just not quite as great as everyone here seems to think it is.

For this post, I will be using "extreme rationality" or "x-rationality" in the sense of "techniques and theories from Overcoming Bias, Less Wrong, or similar deliberate formal rationality study programs, above and beyond the standard level of rationality possessed by an intelligent science-literate person without formal rationalist training." It seems pretty uncontroversial that there are massive benefits from going from a completely irrational moron to the average intelligent person's level. I'm coining this new term so there's no temptation to confuse x-rationality with normal, lower-level rationality.

And for this post, I use "benefits" or "practical benefits" to mean anything not relating to philosophy, truth, winning debates, or a sense of personal satisfaction from understanding things better. Money, status, popularity, and scientific discovery all count.

So, what are these "benefits" of "x-rationality"?

A while back, Vladimir Nesov asked exactly that, and made a thread for people to list all of the positive effects x-rationality had on their lives. Only a handful responded, and most responses weren't very practical. Anna Salamon, one of the few people to give a really impressive list of benefits, wrote:

I'm surprised there are so few apparent gains listed. Are most people who benefited just being silent? We should expect a certain number of headache-cures, etc., just by placebo effects or coincidences of timing.

There have since been a few more people claiming practical benefits from x-rationality, but we should generally expect more people to claim benefits than to actually experience them. Anna mentions the placebo effect, and to that I would add cognitive dissonance - people spent all this time learning x-rationality, so it MUST have helped them! - and the same sort of confirmation bias that makes Christians swear that their prayers really work.

I find my personal experience in accord with the evidence from Vladimir's thread. I've gotten countless clarity-of-mind benefits from Overcoming Bias' x-rationality, but practical benefits? Aside from some peripheral disciplines1, I can't think of any.

Looking over history, I do not find any tendency for successful people to have made a formal study of x-rationality. This isn't entirely fair, because the discipline has expanded vastly over the past fifty years, but the basics - syllogisms, fallacies, and the like - have been around much longer. The few groups who made a concerted effort to study x-rationality didn't shoot off an unusual number of geniuses - the Korzybskians are a good example. In fact, as far as I know, the only follower of Korzybski to turn his ideas into a vast personal empire of fame and fortune was (ironically!) L. Ron Hubbard, who took the basic concept of techniques to purge confusions from the mind, replaced the substance with a bunch of attractive flim-flam, and founded Scientology. And like Hubbard's superstar followers, many of this century's most successful people have been notably irrational.

There seems to me to be approximately zero empirical evidence that x-rationality has a large effect on your practical success, and some anecdotal empirical evidence against it. The evidence in favor of the proposition right now seems to be its sheer obviousness. Rationality is the study of knowing the truth and making good decisions. How the heck could knowing more than everyone else and making better decisions than them not make you more successful?!?

This is a difficult question, but I think it has an answer. A complex, multifactorial answer, but an answer.

