Open Thread, August 2010

4 Post author: NancyLebovitz 01 August 2010 01:27PM

This thread is for the discussion of Less Wrong topics that have not appeared in recent posts. If a discussion gets unwieldy, celebrate by turning it into a top-level post.

Comments (676)

Comment author: XFrequentist 01 August 2010 07:46:57PM *  21 points [-]

I'm intrigued by the idea of trying to start something like a PUA community that is explicitly NOT focussed on securing romantic partners, but rather the deliberate practice of general social skills.

It seems like there's a fair bit of real knowledge in the PUA world, that some of it is quite a good example of applied rationality, and that much of it could be extremely useful for purposes unrelated to mating.

I'm wondering:

  • whether this is an interesting idea to LWers;
  • whether this is the right venue to talk about it;
  • whether something similar already exists.

I'm aware that there was some previous conversation around similar topics and their appropriateness to LW, but if there was final consensus I missed it. Please let me know if these matters have been deemed inappropriate.

Comment author: Violet 03 August 2010 06:34:15AM *  5 points [-]

If you want non-PC approaches, there are two communities you could look into: salespeople and con artists. The second one actually has most of the how-to-hack-people's-minds material. If you want a kinder version, look for it under the title "social engineering".

Comment author: cousin_it 01 August 2010 08:02:44PM *  4 points [-]

Toastmasters?

General social skills are needed in business; a lot of places teach them, and they seem to be quite successful.

Comment author: SilasBarta 01 August 2010 08:08:59PM 5 points [-]

From my limited experience with Toastmasters, it's very PC and targeted at people of median intelligence -- not the thing people here would be looking for. Calling it "PUA"-like implies XFrequentist is considering something that is willing to teach the harsh, condemned truths.

Comment author: XFrequentist 01 August 2010 08:30:25PM *  5 points [-]

I went to a Toastmasters session, and was... underwhelmed. Even for public speaking skills, the program seemed kind of trite. It was more geared toward learning the formalities of meetings. You'd probably be a better committee chair after following their program, but I'm not sure you could give a great TED talk or wow potential investors.

Carnegie's program seems closer to what I had in mind, but I want to replicate both the community aspect and the focus on "field" practice of the PUAs, which I suspect is a big part of what makes them so formidable.

Comment author: D_Alex 02 August 2010 01:33:54AM 2 points [-]

The clubs vary in quality. I recommend you try a few in your area (big cities should have a bunch). For two years I used to commute an hour each way to attend Victoria Quay Toastmasters in Fremantle; it was that good. It was the third club I tried after moving.

Comment author: hegemonicon 02 August 2010 12:44:21PM 20 points [-]

The game of Moral High Ground (reproduced completely below):

At last it is time to reveal to an unwitting world the great game of Moral High Ground. Moral High Ground is a long-playing game for two players. The following original rules are for one M and one F, but feel free to modify them to suit your player setup:

  1. The object of Moral High Ground is to win.

  2. Players proceed towards victory by scoring MHGPs (Moral High Ground Points). MHGPs are scored by taking the conspicuously and/or passive-aggressively virtuous course of action in any situation where culpability is in dispute.

(For example, if player M arrives late for a date with player F and player F sweetly accepts player M's apology and says no more about it, player F receives the MHGPs. If player F gets angry and player M bears it humbly, player M receives the MHGPs.)

  3. Point values are not fixed, vary from situation to situation and are usually set by the person claiming them. So, in the above example, forgiving player F might collect +20 MHGPs, whereas penitent player M might collect only +10.

  4. Men's MHG scores reset every night at midnight; women's roll over every day for all time. Therefore, it is statistically highly improbable that a man can ever beat a woman at MHG, as the game ends only when the relationship does.

  5. Having a baby gives a woman +10,000 MHG points over the man involved and both parents +5,000 MHG points over anyone without children.

My ex-bf and I developed Moral High Ground during our relationship, and it has given us years of hilarity. Straight coupledom involves so much petty point-scoring anyway that we both found we were already experts.

By making a private joke out of incredibly destructive gender programming, MHG releases a great deal of relationship stress and encourages good behavior in otherwise trying situations, as when he once cycled all the way home and back to retrieve some forgotten concert tickets "because I couldn't let you have the Moral High Ground points". We are still the best of friends.

Play and enjoy!

From Metafilter

Comment author: NancyLebovitz 02 August 2010 03:19:17PM 4 points [-]

The whole thread is about relationship hacks-- it's fascinating.

Comment author: sketerpot 02 August 2010 06:59:35PM 4 points [-]

One of the first comments is something I've been saying for a while, about how to admit that you were wrong about something, instead of clinging to a broken opinion out of stubborn pride:

Try to make it a personal policy to prove yourself WRONG on occasion. And get excited about it. Realizing you've been wrong about something is a sure sign of growth, and growth is exciting.

The key is to actually enjoy becoming less wrong, and to take pride in admitting mistakes. That way it doesn't take willpower, which makes everything so much easier.

Comment author: XiXiDu 08 August 2010 07:38:22PM 13 points [-]

LW database download?

I was wondering if it would be a good idea to offer a download of LW, or at least the sequences and the wiki, in the manner that Wikipedia provides its database dumps.

The idea behind it is to have a redundant backup in case of some catastrophe, for example if the same thing happens to EY that happened to John C. Wright. It could also provide the option to read LW offline.

Comment author: ciphergoth 08 August 2010 09:11:41PM 13 points [-]

That's incredibly sad.

Every so often, people derisively say to me "Oh, and you assume you'd never convert to religion then?" I always reply "I absolutely do not assume that, it might happen to me; no-one is immune to mental illness."

Comment author: Eliezer_Yudkowsky 08 August 2010 09:22:40PM 12 points [-]

Tricycle has the data. Also if an event of JCW magnitude happened to me I'm pretty sure I could beat it. I know at least one rationalist with intense religious experiences who successfully managed to ask questions like "So how come the divine spirit can't tell me the twentieth digit of pi?" and discount them.

Comment author: Unknowns 09 August 2010 06:56:07AM 3 points [-]

Actually, you have to be sure that you wouldn't convert if you had John Wright's experiences; otherwise Aumann's agreement theorem should cause you to convert already, simply because John Wright had the experiences himself -- assuming you wouldn't say he's lying. I actually know someone who converted to religion on account of a supposed miracle, who said afterward that since they in fact knew before converting that other people had seen such things happen, they should have converted in the first place.

Although I have to admit I don't see why the divine spirit would want to tell you the 20th digit of pi anyway, so hopefully there would be a better argument than that.

Comment author: Unknowns 09 August 2010 06:48:39AM 2 points [-]

However, if EY converted to religion, he would (in that condition) assert that he had had good reasons for doing it, i.e. that it was rational. So he would have no reason to take down this website anyway.

Comment author: nawitus 08 August 2010 09:15:43PM 2 points [-]

You can use the wget program like this: 'wget -m lesswrong.com'. A database download would be easier on the servers though.

Comment author: Matt_Simpson 03 August 2010 11:11:35PM *  13 points [-]

In his bio over at Overcoming Bias, Robin Hanson writes:

I am addicted to “viewquakes”, insights which dramatically change my world view.

So am I. I suspect you are too, dear reader. I asked Robin how many viewquakes he had and what caused them, but haven't gotten a response yet. But I must know! I need more viewquakes. So I propose we share our own viewquakes with each other so that we all know where to look for more.

I'll start. I've had four major viewquakes, in roughly chronological order:

  • (micro)Economics - Starting with a simple approximation of how humans behave yields a startlingly effective theory in a wide range of contexts.
  • Bayesianism - I learned how to think.
  • Yudkowskyan/Humean Metaethics - Making the move from Objective theories of morality to Subjectively Objective theories of morality cleared up a large degree of confusion in my map.
  • Evolution - This is a two-part quake: evolutionary biology and evolutionary psychology. The latter is extremely helpful for explaining some of the behavior that economic theory misses and for understanding the inputs into economic theory (i.e., preferences).

Comment author: ABranco 05 August 2010 04:02:56AM *  5 points [-]

I've had some dozens of viewquakes, most of them minor, although it's hard to evaluate them in hindsight now that I take them for granted.

Some are somewhat commonplace here: Bayesianism, map–territory relations, evolution, etc.

One that I always feel should make people shout Eureka — and when they are not impressed I assume that it is old news to them (though it often isn't, as I don't see it reflected in their actions) — is the Curse of Knowledge: it's hard to be a tapper. I feel that being aware of it dramatically improved my perceptions in conversation. I also feel that if more people were aware of it, misunderstandings would be far less common.

Maybe worth a post someday.

Comment author: byrnema 05 August 2010 04:30:14AM *  8 points [-]

I can see how the Curse of Knowledge could be a powerful idea. I will dwell on it for a while -- especially the example given about JFK, as an example of a type of application that would be useful in my own life. (To remember to describe things using broad strokes that are universally clear, rather than technical and accurate, in contexts where persuasion and fueling interest are most important.)

For me, one of the main viewquakes of my life was a line I read from a little book of Kahlil Gibran poems:

Your pain is the breaking of the shell that encloses your understanding.

It seemed to be a hammer that could be applied to everything. Whenever I was unhappy about something, I thought about the problem a while until I identified a misconception. I fixed the misconception ("I'm not the smartest person in graduate school"; "I'm not as kind as I thought I was"; "That person won't be there for me when I need them") by assimilating the truth the pain pointed me towards, and the pain would dissipate. (Why should I expect graduate school to be easy? I'll just work harder. Kindness is what you actually do, not how you expect you'll feel. That person is fun to hang out with, but I'll need to find some closer friends.) After each disappointment, I felt stronger and the problem just bounced off me, without my being in denial about anything.

The "technique" failed me when a good friend of mine died. There was a lot of pain, and I tried to identify the truth that was cutting though, but I couldn't find one. Where did my friend go? There is a part of my brain, I realized, that simply cannot except on an emotional level that people are material. I believe that they are (I don't believe in a soul or an afterlife) but I simply couldn't connect the essence of my friend with 'gone'. If there was a truth there, it couldn't find a place in my mind.

This seems like a tangent... but it does demonstrate that the technique is not all-powerful.

Comment author: ABranco 05 August 2010 10:04:42AM *  4 points [-]

Remarkable quote, thank you.

Reminded me of the Anorexic Hermit Crab Syndrome:

The key to pursuing excellence is to embrace an organic, long-term learning process, and not to live in a shell of static, safe mediocrity. Usually, growth comes at the expense of previous comfort or safety. The hermit crab is a colorful example of a creature that lives by this aspect of the growth process (albeit without our psychological baggage). As the crab gets bigger, it needs to find a more spacious shell. So the slow, lumbering creature goes on a quest for a new home. If an appropriate new shell is not found quickly, a terribly delicate moment of truth arises. A soft creature that is used to the protection of built-in armor must now go out into the world, exposed to predators in all its mushy vulnerability. That learning phase in between shells is where our growth can spring from. Someone stuck with an entity theory of intelligence is like an anorexic hermit crab, starving itself so it doesn't grow to have to find a new shell. —Josh Waitzkin, The Art of Learning

Comment author: fiddlemath 05 August 2010 06:00:49AM 4 points [-]

Sounds like the illusion of transparency. We've got that post around. ;)

On the other hand, the tapper/listener game is a very evocative instance.

Comment author: Johnicholas 01 August 2010 07:52:17PM 12 points [-]

Cryonics Lottery.

Would it be easier to sign up for cryonics if there was a lottery system? A winner of the lottery could say "Well, I'm not a die-hard cryo-head, but I thought it was interesting so I bought a ticket (which was only $X) and I happened to win, and it's pretty valuable, so I might as well use it."

It's a sort of "plausible deniability" that might reduce the social barriers to cryo. The lottery structure might also be able to reduce the conscientousness barriers - once you've won, then the lottery administrators (possibly volunteers, possibly funded by a fraction of the lottery) walk you through a "greased path".

Comment author: NihilCredo 02 August 2010 07:14:24PM 7 points [-]

On a completely serious, if not totally related, note: it would be a lot easier to convince people to sign up for cryonics if the Cryonics Institute's and/or KrioRus's websites looked more professional.

Comment author: Alicorn 02 August 2010 08:43:19PM 6 points [-]

I'm not sure if it would help get uninterested people interested; but I think it would help get interested people signed up if there were a really clear set of individually actionable instructions - perhaps a flowchart, so the instructions can branch on individual circumstances - all found in one place.

Comment author: katydee 02 August 2010 09:01:18PM 2 points [-]

And Rudi Hoffman's page.

Comment author: gwern 02 August 2010 04:31:39AM 4 points [-]

I doubt it. Signing up for a lottery for cryonics is still suspicious. There is only one payoff, and it is the suspicious thing itself. No one objects to the end of ordinary lotteries, because we all like money; what is objected to is the lottery as an efficient means of obtaining money (or entertainment).

Suppose that the object were something you and I regard with equal revulsion as many regard cryonics. Child molestation, perhaps. Would you really regard someone buying a ticket as not being quite evil and condoning and supporting the eventual rape?

Comment author: AlexM 02 August 2010 10:23:00AM 5 points [-]

Who regards cryonics as evil like child molestation? The general public sees cryonics as fraud - something like buying real estate on the moon or waiting for the mothership - and someone paying for it as a gullible fool.

For example, look at discussions when Britney Spears wanted to be frozen: http://www.freerepublic.com/focus/f-chat/2520762/posts

Lots of derision, no hatred.

Comment author: NihilCredo 02 August 2010 07:00:41PM 2 points [-]

Bad example. People want to make fun of celebrities (especially a community as caustic and "anti-elitist" as the Freepers). She could have announced that she was enrolling in college, or something else similarly common-sensible, and you would still have got a threadful of nothing but cheap jokes.

A discussion about "My neighbour / brother-in-law / old friend from high school told me he has decided to get frozen" would be more enlightening.

Comment author: NancyLebovitz 01 August 2010 02:13:33PM 12 points [-]

Letting Go by Atul Gawande is a description of typical end-of-life care in the US, and how it can and should be done better.

Typical care defaults to taking drastic measures to extend life, even if the odds of success are low and the process is painful.

Hospice care, which focuses on quality of life, not only results in more comfort, but also either no loss of lifespan or a somewhat longer life, depending on the disease. And it's a lot cheaper.

The article also describes the long careful process needed to find out what people really want for the end of their life-- in particular, what the bottom line is for them to want to go on living.

This is of interest for Less Wrong, not just because Gawande is a solidly rationalist writer, but because a lot of the utilitarian talk here goes in the direction of restraining empathic impulses.

Here we have a case where empathy leads to big utilitarian wins, and where treating people as having a unified consciousness, if you give it a chance to operate, works out well.

As good as hospices sound, I'm concerned that if they get a better reputation, less competent organizations calling themselves hospices will spring up.

From a utilitarian angle, I wonder if those drastic treatments sometimes lead to effective methods, and if so, whether the information could be gotten more humanely.

Comment author: Rain 01 August 2010 02:28:57PM 6 points [-]

End-of-life regulation is one reason cryonics is suffering, as well: without the ability to ensure preservation while the brain is still relatively healthy, the chances diminish significantly. I think it'd be interesting to see cryonics organizations put field offices in countries or states where assisted suicide is legal. Here's a Frontline special on suicide tourists.

Comment author: daedalus2u 01 August 2010 03:49:36PM 3 points [-]

The framing of the end-of-life issue as a gain or a loss, as in the monkey token-exchange experiment, probably makes a gigantic difference in the choices made.

http://lesswrong.com/lw/2d9/open_thread_june_2010_part_4/2cnn?c=1

When you feel you are in a desperate situation, you will do desperate things and clutch at straws, even when you know those choices are irrational. I think this is the mindset behind the clutching at straws that quacks exploit with CAM, as in the Gonzalez Protocol for pancreatic cancer.

http://www.sciencebasedmedicine.org/?p=1545

It is actually worse than doing nothing, and worse than doing what mainstream medicine recommends, but because there is the promise of complete recovery (even if it is a false promise), that is what people choose, based on their irrational aversion to risk.

Comment author: Eneasz 02 August 2010 05:21:48PM *  8 points [-]

George Thompson, an ex-English professor and ex-cop, now teaches a method he calls "Verbal Judo". Very reminiscent of Eliezer's Bayesian Dojo, this is a primer on rationalist communication techniques, focusing on defensive and redirection tactics: http://fora.tv/2009/04/10/Verbal_Judo_Diffusing_Conflict_Through_Conversation

Comment author: sketerpot 07 August 2010 12:10:12AM 14 points [-]

I wrote up some notes on this, because there's no transcript and it's good information. Let's see if I can get the comment syntax to cooperate here.

How to win in conversations, in general.

Never get angry. Stay calm, and use communication tactically to achieve your goals. Don't communicate naturally; communicate tactically. If you get upset, you are weakened.

How to deflect.

To get past an unproductive and possibly angry conversation, you need to deflect the unproductive bluster and get down to the heart of things: goals, and how to achieve them. Use a sentence of the form:

"[Acknowledge what the other guy said], but/however/and [insert polite, goal-centered language here]."

You spring past what the other person said, and then recast the conversation in your own terms. Did he say something angry, meant to upset you? Let it run off you like water, and move on to what you want the conversation to be about. This disempowers him and puts you in charge.

How to motivate people.

There's a secret to motivating people, whether they're students, co-workers, whatever. To motivate someone, raise his expectations of himself. Don't put people down; raise them up. When you want to reprimand someone for not living up to your expectations, mention the positive first. Raise his expectations of himself.

Empathy

To calm somebody down, or get him to do what you want, empathy is the key. Empathy, the ability to see through the eyes of another, is one of the greatest powers that humans have. It gives you power over people, of a kind that they won't get mad about. Understand the other guy, and then think for him as he ought to think. The speaker worked as a police officer, so most of the people he dealt with were under the influence of something. Maybe they were drugged, or drunk; maybe they were frightened, or outraged. Whatever it is, it clouds their judgement; be the levelheaded one and help them think clearly. Empathy is what you need for this.

How to interrupt someone.

Use the most powerful sentence in the English language: "Let me see if I understand what you just said." It shuts anybody up, without pissing them off, and they'll listen. Even if they're hopping mad and were screaming their lungs out at you a minute ago, they'll listen. Use this sentence, and then paraphrase what you understand them as saying. When you paraphrase, that lets you control the conversation. You get to put their point of view in your own words, and in doing so, you calm them down and seize control of the conversation.

How to be a good boss.

This was a talk at Columbia University's business school; people came to learn how to be good bosses. And the secret is that if you're a boss, don't focus directly on your own career; focus on lifting up the people under you. Do this, and they will lift you up with them. To be powerful in a group setting, you must disappear. Put your own ego aside, don't worry about who gets the credit, and focus on your goals.

How to discipline effectively.

This is his biggest point. The secret of good discipline is to use language disinterestedly. You can show anger, condescension, irritation, etc., OR you can discipline somebody. You can't do both at the same time. If you show anger when disciplining someone, you give them an excuse to be angry, and you destroy your own effectiveness. Conversely, if you want to express anger, then don't let punishment even enter the conversation. Keep these separate.

How to deal with someone who says no.

There are five stages to this. Try the first one; if it fails, go to the next one, and so on. Usually you won't have to go past the first one or two.

  1. Ask. Be polite. Interrogative tone. "Sir, will you please step out of the car?" This usually works, and the conversation ends here.

  2. Tell him why. Declarative tone. This gives you authority, it's a sign of respect, and it gives the other guy a way of saving face. It builds a context for what you're asking. If asking failed, explaining usually works. "I see an open liquor bottle in your cup-holder, and I'm required by law to search your vehicle. For our safety, I need you to step out of the car."

  3. Create and present options. There are four secrets for this:

    • Voice: friendly and respectful.

    • Always list good options first ("You can go home tonight, have dinner with your family, sleep in your own bed."). Then the bad options ("If you don't get out of this car, the law says you're going to jail overnight, and you'll get your car towed, and they'll charge you like 300 bucks."). Then remind him of the good options, to get the conversation back to what you want him to do. ("I just need you to get out of your car, let me have a look around, and we'll be done in a few minutes.")

    • Be specific. Paint a mental picture for people. Vivid imagery. WIIFM: What's In It For Me? Appeal to the other guy's self-interest. It's not about you; it's about him.

  4. Confirm noncompliance. "Is there anything I can say to get you to cooperate, and step out of the car for me, so you don't go to jail?" Give them a way to save face.

  5. Act -- Disengage or escalate. This is the part where you either give up or get serious. In the "get out of the car" example, this is the part where you arrest him. Very seldom does it get to this stage, if you did the previous stages right.

If you want more on verbal judo, watch the video; he's a good speaker.

Comment author: NancyLebovitz 07 August 2010 12:19:20AM 3 points [-]

Thank you for writing this up.

The one thing I wondered about was whether the techniques for getting compliance interfere with getting information. For example, what if someone who isn't consenting to a search is actually right about the law?

Comment author: mattnewport 07 August 2010 12:14:34AM 2 points [-]

Does the talk provide any evidence for the efficacy of the tactics?

Comment author: JenniferRM 02 August 2010 10:51:23PM 4 points [-]

Thanks. That was a compact and helpful 90 minutes. The first 30 minutes were OK, but the 2nd 30 were better, and the 3rd was the best. Towards the end I got the impression that he was explaining lessons that were the kind of thing people spend 5 years learning the hard way and that lots of people never learn for various reasons.

Comment author: Blueberry 02 August 2010 11:31:06PM 2 points [-]

That sounds really interesting. I wish there were a transcript available!

Comment author: Matt_Simpson 02 August 2010 05:02:37PM *  8 points [-]

Was Kant implicitly using UDT?

Consider Kant's categorical imperative. It says, roughly, that you should act such that you could will your action as a universal law without undermining the intent of the action. For example, suppose you want to obtain a loan for a new car and never pay it back - you want to break a promise. In a world where everyone broke promises, the social practice of promise keeping wouldn't exist and thus neither would the practice of giving out loans. So you would undermine your own ends and thus, according to the categorical imperative, you shouldn't get a loan without the intent to pay it back.

Another way to put Kant's position would be that you should choose such that you are choosing for all other rational agents. What does UDT tell you to do? It says (among other things) that you should choose such that you are choosing for every agent running the same decision algorithm as yourself. It wouldn't be a stretch to call UDT agents rational. So Kant thinks we should be using UDT! Of course, Kant can't draw the conclusions he wants to draw because no human is actually using UDT. But that doesn't change the decision algorithm Kant is endorsing.

Except... Kant isn't a consequentialist. If the categorical imperative demands something, it demands it no matter the circumstances. Kant famously argued that lying is wrong, period. Even if the fate of the world depends on it.

So Kant isn't really endorsing UDT, but I thought the surface similarity was pretty funny.

Comment author: Emile 03 August 2010 08:04:27AM *  2 points [-]

Kant famously argued that lying is wrong, period. Even if the fate of the world depends on it.

I remember Eliezer saying something similar, though I can't find it right now (the closest I could find was this). It was something about the benefits of being the kind of person that doesn't lie, even if the fate of the world is at stake. Because if you aren't, the minute the fate of the world is at stake is the minute your word becomes worthless.

Comment author: SilasBarta 02 August 2010 05:26:23PM *  2 points [-]

Drescher has some important things to say about this distinction in Good and Real. What I got out of it, is that the CI is justifiable on consequentialist or self-serving grounds, so long as you relax the constraint that you can only consider the causal consequences (or "means-end links") of your decisions, i.e., things that happen "futureward" of your decision.

Drescher argues that specifically ethical behavior is distinguished by its recognition of these "acausal means-end links", in which you act for the sake of what would be the case if-counterfactually you would make that decision, even though you may already know the result. (Though I may be butchering it -- it's tough to get my head around the arguments.)

And I saw a parallel between Drescher's reasoning and UDT, as the former argues that your decisions set the output of all similar processes to the extent that they are similar.

Comment author: Sniffnoy 05 August 2010 11:13:39PM 7 points [-]

I found TobyBartels's recent explanation of why he doesn't want to sign up for cryonics a useful lesson in how different people's goals in living a long time (or not) can be from mine. Now I am wondering if maybe it would be a good idea to state some of the reasons people would want to wake up 100 years later if hit by a bus. Can't say I've been around here very long but it seems to me it's been assumed as some sort of "common sense" - is that accurate? I was wondering if other people's reasons for signing up / intending to sign up (I am not currently signed up and probably will not get around to such for several years) also differed interestingly from mine. Or is this too off topic?

As for me, I would think the obvious reason is what Hilbert said: "If I were to awaken after having slept for a thousand years, my first question would be: Has the Riemann hypothesis been proven?" Finding yourself in the future means you now have the answers to a lot of previously open problems! As well as getting to learn the history of what happened after you were frozen. I have for a long time found not getting to learn the future history of the world to be the most troubling aspect of dying.

(Posting this here as it seems a bit off-topic under The Threat of Cryonics.)

Comment author: steven0461 05 August 2010 11:48:46PM *  7 points [-]

It sure seems like a lot of people could feed their will to live by reading just the first half of an exciting fiction book.

Comment author: Sewing-Machine 06 August 2010 12:01:42AM 4 points [-]

We would need to drastically strengthen norms against spoilers.

Comment author: NancyLebovitz 05 August 2010 11:28:45PM 6 points [-]

One thought is that it's tempting to think of yourself as being the only one (presumably with help from natives) trying to deal with the changed world.

Actually I think it's more likely that there will be many people from your era, and there will be immigrants' clubs, with people who've been in the future for a while helping the greenhorns. I find this makes the future seem more comfortable.

The two major reasons I can think of for wanting to be in the future are that I rather like being me, and that the future should be interesting.

Comment author: sketerpot 02 August 2010 08:09:44AM 7 points [-]

I've been on a Wikipedia binge, reading about people pushing various New Age silliness. The tragic part is that a lot of these guys actually do sound fairly smart, and they don't seem to be afflicted with biological forms of mental illness. They just happen to be memetically crazy in a profound and crippling way.

Take Ervin Laszlo, for instance. He has a theory of everything, which involves saying the word "quantum" a lot and talking about a mystical "Akashic Field" which I would describe in more detail except that none of the explanations of it really say much. Here's a representative snippet from Wikipedia:

László describes how such an informational field can explain why our universe appears to be fine-tuned as to form galaxies and conscious lifeforms; and why evolution is an informed, not random, process. He believes that the hypothesis solves several problems that emerge from quantum physics, especially nonlocality and quantum entanglement.

Then we have pages like this one, talking more about the Akashic Records (because apparently it's a quantum field thingy and also an infinite library or something). The very first sentence sums it up: "The Akashic Records refer to the frequency gird programs that create our reality." Okay, actually that didn't sum up crap; but it sounded cool, didn't it? That page is full of references to the works of various people, cited very nicely, and the spelling and grammar suggest someone with education. There are a lot of pages like this floating around. The thing they all have in common is that they don't seem to consider evidence to be important. It's not even on their radar.

Scholarly writings from New Age people are a pretty breathtaking example of dark side epistemology, if anybody wants a case study in exactly what not to do. It's pretty intense.

Comment author: XiXiDu 06 August 2010 08:29:33AM 6 points [-]

Interesting SF by Robert Charles Wilson!

I normally stay away from posting news to lesswrong.com - although I think an Open Thread for relevant news items would be a good idea - but this one sounds especially good and might be of interest for people visiting this site...

Many-Worlds in Fiction: "Divided by Infinity"

In the year after Lorraine's death I contemplated suicide six times. Contemplated it seriously, I mean: six times sat with the fat bottle of Clonazepam within reaching distance, six times failed to reach for it, betrayed by some instinct for life or disgusted by my own weakness.

I can't say I wish I had succeeded, because in all likelihood I did succeed, on each and every occasion. Six deaths. No, not just six. An infinite number.

Times six.

There are greater and lesser infinities.

But I didn't know that then.

Comment author: humpolec 08 August 2010 01:38:04PM 5 points [-]

Thank you.

The idea reminded me of Moravec's thoughts on death:

When we die, the rules surely change. As our brains and bodies cease to function in the normal way, it takes greater and greater contrivances and coincidences to explain continuing consciousness by their operation. We lose our ties to physical reality, but, in the space of all possible worlds, that cannot be the end. Our consciousness continues to exist in some of those, and we will always find ourselves in worlds where we exist and never in ones where we don't. The nature of the next simplest world that can host us, after we abandon physical law, I cannot guess. Does physical reality simply loosen just enough to allow our consciousness to continue? Do we find ourselves in a new body, or no body? It probably depends more on the details of our own consciousness than did the original physical life. Perhaps we are most likely to find ourselves reconstituted in the minds of superintelligent successors, or perhaps in dreamlike worlds (or AI programs) where psychological rather than physical rules dominate. Our mind children will probably be able to navigate the alternatives with increasing facility. For us, now, barely conscious, it remains a leap in the dark.

Comment author: Eliezer_Yudkowsky 08 August 2010 05:50:27PM 3 points [-]

I already wrote this fic ("The Grand Finale of the Ultimate Meta Mega Crossover").

Comment author: XiXiDu 08 August 2010 06:38:26PM *  2 points [-]

I wouldn't be surprised to find out that many people who know about you and the SIAI are oblivious to your fiction. At least I myself only found out about it some time after learning about you and SIAI.

It is generally awesome stuff and would be reason enough in itself to donate to SIAI. Spreading such fiction might actually attract more people to dig deeper and find out about SIAI than being thrown in at the deep end.

Edit: I myself came to know about SIAI due to SF, especially Orion's Arm.

Comment author: SilasBarta 01 August 2010 07:25:02PM *  6 points [-]

I thought I'd pose an informal poll, possibly to become a top-level, in preparation for my article about How to Explain.

The question: on all the topics you consider yourself an "expert" or "very knowledgeable about", do you believe you understand them at least at Level 2? That is, do you believe you are aware of the inferential connections between your expertise and layperson-level knowledge?

Or, to put it another way, do you think that, given enough time, but using only your present knowledge, you could teach a reasonably-intelligent layperson, one-on-one, to understand complex topics in your expertise, teaching them every intermediate topic necessary for grounding the hardest level?

Edit: Per DanArmak's query, anything you can re-derive or infer from your present knowledge counts as part of your present knowledge for purposes of answering this question.

I'll save my answer for later -- though I suspect many of you already know it!

Comment author: Oscar_Cunningham 02 August 2010 03:10:47PM *  5 points [-]

I have a (I suspect unusual) tendency to look at basic concepts and try to see them in as many ways as possible. For example, here are seven equations, all of which could be referred to as Bayes' Theorem:

However, each one is different, and forces a different intuitive understanding of Bayes' Theorem. The fourth one down is my favourite, as it makes obvious that the update depends only on the ratio of likelihoods. It also gives us our motivation for taking odds, since this clears up the 1/(1+x)ness of the equation.
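
The seven equations appeared as an image in the original comment and are not preserved here. Purely as an illustration of the kind of list meant — not a reconstruction of the original — here are a few standard equivalent forms of Bayes' Theorem, including an odds form and a form with the "1/(1+x)" shape mentioned above:

    \[ P(H \mid E) = \frac{P(E \mid H)\,P(H)}{P(E)} \]
    \[ P(H \mid E) = \frac{P(E \mid H)\,P(H)}{P(E \mid H)\,P(H) + P(E \mid \neg H)\,P(\neg H)} \]
    \[ \frac{P(H \mid E)}{P(\neg H \mid E)} = \frac{P(E \mid H)}{P(E \mid \neg H)} \cdot \frac{P(H)}{P(\neg H)} \]
    \[ P(H \mid E) = \frac{1}{1 + \dfrac{P(E \mid \neg H)\,P(\neg H)}{P(E \mid H)\,P(H)}} \]

Of these, the odds form shows that the update depends only on the likelihood ratio, and the last form makes the 1/(1+x) structure explicit.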

Because of this way of understanding things, I find explanations easy, because if one method isn't working, another one will.

ETA: I'd love to see more versions of Bayes' Theorem, if anyone has any more to post.

Comment author: DanArmak 01 August 2010 07:48:51PM 3 points [-]

using only your present knowledge

This strikes me as an un-lifelike assumption. If I had to explain things in this way, I would expect to encounter some things that I don't explicitly know (and other that I knew and have forgotten), and to have to (re)derive them. But I expect that I would be able to rederive almost all of them.

Refining my own understanding is a natural part of building a complex explanation-story to tell to others, and will happen unless I've already built this precise story before and remember it.

Comment author: SilasBarta 01 August 2010 07:53:08PM 3 points [-]

For purposes of this question, things you can rederive from your present knowledge count as part of your present knowledge.

Comment author: JanetK 02 August 2010 08:08:06AM 2 points [-]

I think I have level 2 understanding of many areas of Biology but of course not all of it. It is too large a field. But there are gray areas around my high points of understanding where I am not sure how deep my understanding would go unless it was put to the test. And around the gray areas surrounding the level 2 areas there is a sea of superficial understanding. I have some small areas of computer science at level 2 but they are fewer and smaller, ditto chemistry and geology.

I think your question overlooks the nature of teaching skills. I am pretty good at teaching (verbally and one/few to one) and did it often for years. There is a real knack in finding the right place to start and the right analogies to use with a particular person. Someone could have more understanding than me and not be able to transfer that understanding to someone else. And others could have less understanding and transfer it better.

Finally I like your use of the word 'understanding' rather than 'knowledge'. It implies the connectedness with other areas required to relate to lay people.

Comment author: zero_call 02 August 2010 12:48:50AM *  2 points [-]

I will reply to this in the sense of

"do you believe you are aware of the inferential connections between your expertise and layperson-level knowledge?",

since I am not so familiar with the formalism of a "Level 2" understanding.

My uninteresting, simple answer is: yes.

My philosophical answer is that I find the entire question to be very interesting and strange. That is, the relationship between teaching and understanding is quite strange IMO. There are many people who are poor teachers but who excel in their discipline. It seems to be a contradiction because high-level teaching skill seems to be a sufficient, and possibly necessary condition for masterful understanding.

Personally, I resolve this contradiction in the following way. I feel like my own limitations force me to learn a subject by progressing through it in very simplistic strokes. By the time I have reached mastery, I feel very capable of teaching it to others, since I have been forced to understand it myself in the most simplistic way possible.

Other people, who are possibly quite brilliant, are able to master some subjects without having to transmute the information into a simpler level. Consequently, they are unable to make the sort of connections that you describe as being necessary for teaching.

Personally I feel that the latter category of people must be missing something, but I am unable to make a convincing argument for this point.

Comment author: SilasBarta 02 August 2010 01:15:29AM *  3 points [-]

A lot of the questions you pose, including the definition of the Level 2 formalism, are addressed in the article I linked (and wrote).

I classify those who can do something well, but cannot explain or understand the connections from the inputs and outputs to the rest of the world, as having a Level 1 understanding. It's certainly an accomplishment, but I agree with you that it's missing something: the ability to recognize where it fits in with the rest of reality (Level 2) and the command of a reliable truth-detecting procedure that can "repair" gaps in knowledge as they arise (Level 3).

"Level 1 savants" are certainly doing something very well, but that something is not a deep understanding. Rather, they are in the position of a computer that can transform inputs into the right outputs, but do nothing more with them. Or a cat, which can fall from great heights without injury, but not know why its method works.

(Yes, this comment seems a bit internally repetitive.)

Comment author: NancyLebovitz 01 August 2010 08:41:53PM 2 points [-]

I think I know a fair amount about doing calligraphy, but I'm dubious that someone could get a comparable level of knowledge without doing a good bit of calligraphy themselves.

If I were doing a serious job of teaching, I would be learning more about how to teach as I was doing it.

I consider myself to be a good but not expert explainer.

Possibly of interest: The 10-Minute Rejuvenation Plan: T5T: The Revolutionary Exercise Program That Restores Your Body and Mind: a book about an exercise system which involves 5 yoga moves. It's by a woman who has taught 700 people how to do the system, and it shows an extensive knowledge of the possible mistakes students can make and the adaptations needed to make the moves feasible for a wide variety of people.

My point is that explanation isn't an abstract perfectible process existing simply in the mind of a teacher.

Comment author: KrisC 01 August 2010 09:35:47PM 3 points [-]

But in some limited areas explanation is completely adequate.

I taught a co-worker how to do sudoku puzzles. After teaching him the human-accessible algorithms and allowing time for practice, I was still consistently beating his time. I knew why, and he didn't. After I explained the difference in the mental state I was using, he began beating my time on a regular basis. {Instead of checking the list of 1-9 for each box or line, allow your brain to subconsciously spot the missing number and then verify its absence.} He is more motivated and has more focus, while I do puzzles to kill time when waiting.

In another job where I believe I had a thorough understanding of the subject, I was never able to teach any of my (~20) trainees to produce vector graphic maps with the speed and accuracy I obtained because I was unable to impart a mathematical intuition for the approximation of curves. I let them go home with full pay when they completed their work, so they definitely had motivation. But they also had editors who were highly detail oriented.

I mean to suggest that there is a continuum of subjective ability comparing different skills. Sudoku is highly procedural; once familiar, all that is required is concentration. Yoga, in the sense mentioned above, is also procedural and prescriptive; the joints allow a limited number of degrees of freedom. Calligraphy strives for an ideal, but depending on the tradition, there is a degree of interpretation allowed for aesthetic considerations. Mapping, particularly in vector graphics, has many ways to be adequate and no way to be perfect.

The number of acceptable outcomes and the degree of variation in useful paths determine the teachability of a skillset. The procedural skills can be taught more easily than the subjective ones, and practice is useful for accomplishing mastery of procedural skills. Deeper understanding of a field allows more of the skill's domain to be expressed procedurally rather than subjectively.

Comment author: fiddlemath 01 August 2010 07:57:27PM 2 points [-]

I think that the "teaching" benchmark you claim here is actually a bit weaker than a Level 2 understanding. To successfully teach a topic, you don't need to know lots of connections between your topic and everything else; you only need to know enough such connections to convey the idea. I really think this lies somewhere between Level 1 and Level 2.

I'll claim to have Level 2 understanding on the core topics of my graduate research, some mathematics, and some core algorithmic reasoning. I'm sure I don't have all of the connections between these things and the rest of my world model, but I do have many, and they pervade my understanding.

Comment author: SilasBarta 01 August 2010 08:05:03PM *  2 points [-]

I think that the "teaching" benchmark you claim here is actually a bit weaker than a Level 2 understanding. To successfully teach a topic, you don't need to know lots of connections between your topic and everything else; you only need to know enough such connections to convey the idea. I really think this lies somewhere between Level 1 and Level 2.

I agree in the sense that full completion of Level 2 isn't necessary to do what I've described, as that implies a very deeply-connected set of models, truly pervading everything you know about.

But at the same time, I don't think you appreciate some of the hurdles to the teaching task I described: remember, the only assumption is that the student has lay knowledge and is reasonably intelligent. Therefore, you do not get to assume that they find any particular chain of inference easy, or that they already know any particular domain above the lay level. This means you would have to be able to generate alternate inferential paths, and fall back to more basic levels "on the fly", which requires healthy progress into Level 2 in order to achieve -- enough that it's fair to say you "round to" Level 2.

I'll claim to have Level 2 understanding on the core topics of my graduate research, some mathematics, and some core algorithmic reasoning. I'm sure I don't have all of the connections between these things and the rest of my world model, but I do have many, and they pervade my understanding.

If so, I deeply respect you and find that you are the exception and not the rule. Do you find yourself critical of how people in the field (e.g. through textbooks) present it to newcomers (who have undergrad prerequisites), present it to laypeople, and use excessive or unintuitive jargon?

Comment author: fiddlemath 01 August 2010 08:20:15PM 3 points [-]

Therefore, you do not get to assume that they find any particular chain of inference easy, or that they already know any particular domain above the lay level. This means you would have to be able to generate alternate inferential paths, and fall back to more basic levels "on the fly", which requires healthy progress into Level 2 in order to achieve -- enough that it's fair to say you "round to" Level 2.

I agree that the teaching task does require a thick bundle of connections, and not just a single chain of inferences. So much so, actually, that I've found that teaching, and preparing to teach, is a pretty good way to learn new connections between my Level 1 knowledge and my world model. That this "rounds" to Level 2 depends, I suppose, on how intelligent you assume the student is.

If so, I deeply respect you and find that you are the exception and not the rule. Do you find yourself critical of how people in the field (i.e. through textbooks) present it to newcomers (who have undergrad prerequisites), present it to laypeople, and use excessive or unintuitive jargon?

Yes, constantly. Frequently, I'm frustrated by such presentations to the point of anger at the author's apparent disregard for the reader, even when I understand what they're saying.

Comment author: [deleted] 30 August 2010 11:41:25PM *  5 points [-]

PZ Myers' comments on Kurzweil generated some controversy here recently on LW -- see here. Apparently PZ doesn't agree with some of Kurzweil's assumptions about the human mind. But that's beside the point -- what I want to discuss is this: according to another blog, Kurzweil has been selling bogus nutritional supplements. What does everyone think of this?

Comment author: jimrandomh 30 August 2010 11:48:21PM 2 points [-]

I would like a better source than a blog comment for the claim that Kurzweil has been selling bogus nutritional supplements. The obvious alternative possibility is that someone else, with less of a reputation to worry about, attached Kurzweil's name to their product without his knowledge.

Comment author: [deleted] 31 August 2010 12:05:34AM 3 points [-]

Ok, I've found some better sources. See the first three links.

Comment author: jimrandomh 31 August 2010 06:01:30AM 5 points [-]

I would have preferred a more specific link than that, to save me the time of doing a detailed investigation of Kurzweil's company myself. But I ended up doing one anyways, so here are the results.

That "Ray and Terry's Longevity Products" company's front page screams low-credibility. It displays three things: an ad for a book, which I can't judge as I don't have a copy, an ad for snack bars, and a news box. Neutral, silly, and, ah, something amenable to a quality test!

The current top headline in their Healthy Headlines box looked to me like an obvious falsehood ("Dirty Electricity May Cause Type 3 Diabetes"), and on a topic important to me, so I followed it up. It links to a blog I don't recognize, which dug it out of a two-year-old study, which I found on PubMed. And I personally verified that the study was wrong - by the most generous interpretation, assuming no placebo effect or publication bias (both of which were obviously present), the study contains exactly 4 bits of evidence (4 case studies in which the observed outcome had a 50% chance of happening assuming the null hypothesis, and a 100% chance of happening assuming the conclusion). A review article confirmed that it was flawed.
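
To spell out the arithmetic behind that "4 bits" figure (my reading of the comment's reasoning, not a quote from the review):

    \[ \frac{P(\text{outcome} \mid \text{conclusion})}{P(\text{outcome} \mid \text{null})} = \frac{1.0}{0.5} = 2 \text{ per case}, \qquad 2^4 = 16 \text{ overall}, \qquad \log_2 16 = 4 \text{ bits} \]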

That said, he probably just figured the news box was unimportant and delegated the job to someone who wasn't smart enough to keep the lies out. But it means I can't take anything else on the site seriously without a very time-consuming investigation, which is bad enough.

The bit about Kurzweil taking 250 nutritional supplements per day jumps out, too, since it's an obviously wrong thing to do; the risks associated with taking a supplement (adverse reaction, contamination, mislabeling) scale linearly with the number taken, while the upside has diminishing returns. You take the most valuable thing first, then the second-most; by the time you get to the 250th thing, it's a duplicate or worthless. Which leads me to believe that he just fudged the number, by counting things that are properly considered duplicates, like split doses of the same thing.

Comment author: utilitymonster 03 August 2010 07:09:27PM *  5 points [-]

If you want to reduce hindsight bias, write down reasons why each of the possible outcomes might be correct.

Those who consider the likelihood of an event after it has occurred exaggerate their likelihood of having been able to predict that event in advance. We attempted to eliminate this hindsight bias among 194 neuropsychologists. Foresight subjects read a case history and were asked to estimate the probability of three different diagnoses. Subjects in each of the three hindsight groups were told that one of the three diagnoses was correct and were asked to state what probability they would have assigned to each diagnosis if they were making the original diagnosis. Foresight-reasons and hindsight-reasons subjects performed the same task as their foresight and hindsight counterparts, except they had to list one reason why each of the possible diagnoses might be correct. The frequency of subjects succumbing to the hindsight bias was lower in the hindsight-reasons groups than in the hindsight groups not asked to list reasons.

Arkes, H. R., et al. (1988). Eliminating the hindsight bias. Journal of Applied Psychology.

Comment author: gwern 09 August 2010 05:47:04AM 4 points [-]

"The differences are dramatic. After tracking thousands of civil servants for decades, Marmot was able to demonstrate that between the ages of 40 and 64, workers at the bottom of the hierarchy had a mortality rate four times higher than that of people at the top. Even after accounting for genetic risks and behaviors like smoking and binge drinking, civil servants at the bottom of the pecking order still had nearly double the mortality rate of those at the top."

"Under Pressure: The Search for a Stress Vaccine" http://www.wired.com/magazine/2010/07/ff_stress_cure/all/1

Comment author: sketerpot 08 August 2010 02:14:48AM *  4 points [-]

What simple rationality techniques give the most bang for the buck? I'm talking about techniques you might be able to explain to a reasonably smart person in five minutes or less: really the basics. If part of the goal here is to raise the sanity waterline in the general populace, not just among scientists, then it would be nice to have some rationality techniques that someone can use without much study.

Carl Sagan had a slogan: "Extraordinary claims require extraordinary evidence." He would say this phrase and then explain how, when someone claims something extraordinary (i.e. something for which we have a very low probability estimate), they need correspondingly stronger evidence than if they'd made a higher-likelihood claim, like "I had a sandwich for lunch." Now, I'm sure everybody here can talk about this very precisely, in terms of Bayesian updating and odds ratios, but Sagan was able to get a lot of this across to random laypeople in about a minute. Maybe two minutes.
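
A minimal worked version of Sagan's point in odds form (the numbers are invented purely for illustration, not taken from Sagan):

    \[ \text{posterior odds} = \text{likelihood ratio} \times \text{prior odds} \]

If "I had a sandwich for lunch" has prior odds around 1:1, then evidence with a likelihood ratio of 10 pushes it to 10:1 — believe it. If an extraordinary claim has prior odds of 1:1,000,000, the same evidence only moves it to 1:100,000, still almost certainly false; the evidence needs a likelihood ratio comparable to the prior improbability before the claim becomes credible.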

What techniques for rationality can be explained to a normal person in under five minutes? I'm looking for small and simple memes that will make people more rational, on average. I'll try a few candidates, to get the discussion started.

Candidate 1: Carl Sagan's concise explanation of how evidence works, as mentioned above.

Candidate 2: Everything that has an effect in the real world is part of the domain of science (and, more broadly, rationality). A lot of people have the truly bizarre idea that some theories are special, immune to whatever standards of evidence they may apply to any other theory. My favorite example is people who believe that prayers for healing actually make people who are prayed for more likely to recover, but that this cannot be scientifically tested. This is an obvious contradiction: they're claiming a measurable effect on the world and then pretending that it can't possibly be measured. I think that if you pointed out a few examples of this kind of special pleading to people, they might start to realize when they're doing it.

Candidate 3: Admitting that you were wrong is a way of winning an argument. There's a saying that "It takes a big man to admit he's wrong," and when people say this, they don't seem to realize that it's a huge problem! It shouldn't be hard to admit that you were wrong about something! It shouldn't feel like defeat; it should feel like victory. When you lose an argument with someone, it should be time for high fives and mutual jubilation, not shame and anger. I know that it's possible to retrain yourself to feel this way, because I've done it. This wasn't even too difficult; it was more a matter of just realizing that feeling good about conceding an argument was even an option.

Anti-candidate: "Just because something feels good doesn't make it true." I call this an anti-candidate because, while it's true, it's seldom helpful. People trot out this line as an argument against other people's ideas, but rarely apply it to their own. I want memes that will make people actually be more rational, instead of just feeling that way.

Any ideas? I know that the main goal of this community is to strive for rationality far beyond such low-hanging fruit, but if we can come up with simple and easy techniques that actually help people be more rational, there's a lot of value in that. You could use it as rationalist propaganda, or something.

EDIT: I've expanded this into a top-level post.

Comment author: DuncanS 09 August 2010 12:58:34AM *  4 points [-]

I think some of the statistical fallacies that most people fall for are quite high up the list.

One such is the "What a coincidence!" fallacy. People notice that some unlikely event has occurred, and wonder at how many millions to one against it the odds must have been - and yet it actually happened! Surely this means that my life is influenced by some supernatural force!

The typical mistake is to calculate only the likelihood of the particular event that occurred. Nothing wrong with that, but one should also compare that number against the whole basket of other possible unlikely events that you would have noticed if they'd happened (of which there are surely millions), and all the possible occasions where these unlikely events could also have occurred. When you do that, you discover that the likelihood of some unlikely thing happening is quite high - which is in accordance with our experience that unlikely events do actually happen.

Another way of looking at it is that non-notable unlikely events happen all the time. Look, that particular car just passed me at exactly 2pm! Most are not noticeable. But sometimes we notice that a particular unlikely event just occurred, and of course it causes us to sit up and take notice. The question is how many other unlikely events you would also have noticed.

The key rational skill here is noticing the actual size of the set of unlikely things that might have happened, and would have caught our attention if they had.
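
A back-of-the-envelope version of this, with made-up numbers, shows how quickly "millions to one" becomes near-certain once you count the occasions:

    # A toy calculation of the "What a coincidence!" effect. All numbers are
    # invented; the point is only how fast the opportunities add up.
    p_single = 1e-6            # chance of any one "millions to one" event on a given occasion
    occasions_per_day = 1000   # events per day you would have noticed, had they been freakish
    days = 365 * 10            # a decade of paying attention

    n_trials = occasions_per_day * days
    p_at_least_one = 1 - (1 - p_single) ** n_trials
    print(p_at_least_one)      # roughly 0.97: a "miraculous" coincidence is nearly certain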

Comment author: RobinZ 08 August 2010 04:42:46PM *  3 points [-]

The concept of inferential distance is good. You wouldn't want to introduce it in the context of explaining something complicated - you'd just sound self-serving - but it'd be a good thing to crack out when people complain about how they just can't understand how anyone could believe $CLAIM.

Edit: It's also a useful concept when you are thinking about teaching.

Comment author: Larks 09 August 2010 10:12:03PM 2 points [-]

I'm going to be running a series of Rationality & AI seminars with Alex Flint in the Autumn, where we'll introduce aspiring rationalists to new concepts in both fields; standard cognitive biases, a bit of Bayesianism, some of the basic problems with both AI and Friendliness. As such, this could be a very helpful thread.

We were thinking of introducing Overconfidence Bias; ask people to give 90% confidence intervals, and then reveal (surprise surprise!) that they're wrong half the time.
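
If it helps with preparation, the scoring step is simple enough to sketch in a few lines (the intervals and answers here are placeholders):

    # A minimal sketch of scoring the calibration exercise; the intervals and
    # "true" answers below are placeholders for whatever trivia questions get used.
    def hit_rate(intervals, truths):
        """Fraction of (low, high) intervals that actually contain the true value."""
        hits = sum(low <= t <= high for (low, high), t in zip(intervals, truths))
        return hits / float(len(truths))

    intervals = [(1000, 5000), (1800, 1900), (10, 50), (100, 300), (2, 8),
                 (1e6, 5e6), (50, 150), (0, 10), (300, 700), (1900, 1950)]
    truths = [6378, 1869, 37, 221, 11, 3.8e6, 212, 14, 450, 1912]

    print(hit_rate(intervals, truths))  # well-calibrated 90% intervals should score near 0.9;
                                        # most people score far lower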

Comment author: sketerpot 10 August 2010 02:31:41AM 2 points [-]

Since it seemed like this could be helpful, I expanded this into a top-level post.

That 90% confidence interval thing sounds like one hell of a dirty trick. A good one, though.

Comment author: RobinZ 08 August 2010 02:41:20AM *  2 points [-]

#3 is a favorite of mine, but I like #1 too.

How about "Your intuitions are not magic"? Granting intuitions the force of authority seems to be a common failure mode of philosophy.

Comment author: Alexandros 06 August 2010 08:31:31AM 4 points [-]
Comment author: gwern 05 August 2010 10:08:51AM *  4 points [-]

One little anti-akrasia thing I'm trying is editing my crontab to periodically pop up an xmessage with a memento mori phrase. It checks that my laptop lid is open, gets a random integer, and occasionally pops up the number of minutes to my actuarial death (gotten from Death Clock; accurate enough, I figure):

 1,16,31,46 * * * * if grep open /proc/acpi/button/lid/LID0/state; then if [ $((`date \+\%\s` % 6)) = 1 ]; then xmessage "$(((`date --date="9 August 2074" \+\%\s` - `date \+\%\s`) / 60)) minutes left to live. Is what you are doing important?"; fi; fi

(I figure it's stupid enough a tactic and cheap enough to be worth trying. This shell stuff works in both bash and dash/sh, however, you probably want to edit the first conditional, since I'm not sure Linux puts the lid data at the same place in /proc/acpi in every system.)
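
For anyone who finds the one-liner hard to read, here is roughly the same idea, as a sketch only, written as a standalone script that cron could call; the death date, lid-state path, and one-in-six trigger odds are placeholders to adjust:

    #!/usr/bin/env python
    # Roughly the same memento-mori idea as a standalone script to be invoked from
    # cron. The death date, lid-state path, and one-in-six trigger odds are
    # placeholders; xmessage must be available for the popup.
    import random
    import subprocess
    from datetime import datetime

    DEATH_DATE = datetime(2074, 8, 9)                   # placeholder actuarial estimate
    LID_STATE = "/proc/acpi/button/lid/LID0/state"      # location varies between systems

    def lid_is_open():
        try:
            return "open" in open(LID_STATE).read()
        except IOError:
            return True  # if we can't tell, assume the machine is in use

    if lid_is_open() and random.randrange(6) == 0:
        minutes_left = int((DEATH_DATE - datetime.now()).total_seconds() // 60)
        subprocess.call(["xmessage",
                         "%d minutes left to live. Is what you are doing important?"
                         % minutes_left])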

Comment author: simplicio 05 August 2010 12:30:06AM *  4 points [-]

An amusing case of rationality failure: Stockwell Day, a longstanding albatross around Canada's neck, says that more prisons need to be built because of an 'increase in unreported crime.'

As my brother-in-law amusingly noted on FB, quite apart from whether the actual claim is true (no evidence is forthcoming), unless these unreported crimes are leading to unreported trials and unreported incarcerations, it's not clear why we would need more prisons.

Comment author: NancyLebovitz 04 August 2010 07:37:53AM 4 points [-]

I think one of the other reasons many people are uncomfortable with cryonics is that they imagine their souls being stuck-- they aren't getting the advantages of being alive or of heaven.

Comment author: Nisan 06 August 2010 06:00:55PM 4 points [-]

In all honesty, I suspect another reason people are uncomfortable with cryonics is that they don't like being cold.

Comment author: [deleted] 02 August 2010 10:32:40AM *  4 points [-]

I’m not yet good enough at writing posts to post something properly, but I hoped that if I wrote something here, people might be able to help me improve. So obviously people can comment however they normally would, but it would be great if people would be willing to give me the sort of advice that would help me write a better post next time. I know that normal comments do this to some extent, but I’m also just looking for the basics: is this a good enough topic to write a post on but not well enough executed (in which case I should work on my writing)? Is it not a good enough topic? Why not? Is it not in-depth enough? And so on.

Is your graph complete?

The red gnomes are known to be the best arguers in the world. If you asked them whether the only creature that lived in the Graph Mountains was a Dwongle, they would say, “No, because Dwongles never live in mountains.”

And this is true, Dwongles never live in mountains.

But if you want to know the truth, you don’t talk to the red gnomes, you talk to the green gnomes who are the second best arguers in the world.

And they would say, “No, because Dwongles never live in mountains.”

But then they would say, “Both we and the red gnomes are so good at arguing that we can convince people that false things are true. Even worse though, we’re so good that we can convince ourselves that false things are true. So we always ask if we can argue for the opposite side just as convincingly.”

And then, after thinking, they would say, “We were wrong, it must be a Dwongle, for only Dwongles ever live in places where no other creatures live. So we have a paradox, and paradoxes can never be resolved by giving counterexamples to one or the other claim. Instead of countering, you must invalidate one of the arguments.”

Eventually, they would say, “Ah. My magical fairy mushroom has informed me that Graph Mountain is in fact a hill, ironically named, and Dwongles often live in hills. So yes, the creature is a Dwongle.”

The point of all of that is best discussed after introducing a method of diagramming the reasoning made by the green gnomes. The following series of diagrams should be reasonably self-explanatory. A is a proposition that we want to know the truth of (the creature in the Graph Mountains is a Dwongle) and not-A is its negation (the creature in the Graph Mountains is not a Dwongle). If a path is drawn between a proposition and the “Truth” box, then the proposition is true. Paths are not direct but go through a proof (in this case P1 stands in for “Dwongles never live in mountains” and P2 stands in for “Only Dwongles live in a place where no other creatures live”). The diagrams connect to the argument made above by the green gnomes. First, we have the argument that it mustn’t be a Dwongle because of P1. The second diagram shows the green gnomes realising that they have an argument that it must be a Dwongle too, due to P2. This middle type of diagram could be called a “Paradox Diagram.”

[Figure 1. The green gnomes’ process of argument.]

In his book Good and Real, Gary Drescher notes that paradoxes can’t be resolved by making more counterarguments (that approach is shown in figure 2 below, and considered graphically it is obviously not helpful - we still have both propositions being shown to be true) but rather by invalidating one of the arguments. That’s what the green gnomes did when they realised that Graph Mountain was actually a hill, and that’s what the final diagram in figure 1 shows the result of (when you remove a vertex, like P1, you remove all the lines connected to it as well).

[Figure 2. Attempting to resolve a paradox via counterarguments rather than invalidation.]

The interesting thing in all of this is that the first and third diagrams in figure 1 look very similar. In fact, they’re the same but simply with different propositions proven. And this raises something: It can be very difficult to tell the difference between an incomplete paradox diagram and a completed proof diagram. The difference between the two is whether you’ve tried to find an argument for the opposite of the proposition proven and, if you do find one, whether you’ve managed to invalidate that argument.

What this means is, if you’re not confident that your proof for a proposition is true, you can’t be sure that you’ve taken all of the appropriate steps to establish its truth until you’ve asked: Is my graph complete?
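
One way to make the “Is my graph complete?” question concrete is to represent the diagrams as ordinary graphs and check whether both A and not-A end up connected to Truth. A minimal sketch, with the Dwongle propositions as plain labels:

    # A minimal sketch of the proof diagrams as a plain graph. Nodes are
    # propositions, proofs, and "Truth"; an undirected edge means "supports".
    # A diagram is a paradox diagram if both A and not-A are connected to Truth.

    def connected(edges, start, goal):
        """Simple graph search: is there a path from start to goal?"""
        frontier, seen = [start], {start}
        while frontier:
            node = frontier.pop()
            if node == goal:
                return True
            for a, b in edges:
                for nxt in ((b,) if a == node else (a,) if b == node else ()):
                    if nxt not in seen:
                        seen.add(nxt)
                        frontier.append(nxt)
        return False

    # P1 = "Dwongles never live in mountains", P2 = "only Dwongles live where nothing else does"
    edges = {("not-A", "P1"), ("P1", "Truth"), ("A", "P2"), ("P2", "Truth")}
    print(connected(edges, "A", "Truth"), connected(edges, "not-A", "Truth"))   # True True: a paradox

    # Invalidating P1 (Graph Mountain is really a hill) removes P1 and its edges:
    edges = {e for e in edges if "P1" not in e}
    print(connected(edges, "A", "Truth"), connected(edges, "not-A", "Truth"))   # True False: resolved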

Comment author: Yoreth 02 August 2010 06:33:00AM 4 points [-]

Suppose you know from good sources that there is going to be a huge catastrophe in the very near future, which will result in the near-extermination of humanity (but the natural environment will recover more easily). You and a small group of ordinary men and women will have to restart from scratch.

You have a limited time to compile a compendium of knowledge to preserve for the new era. What is the most important knowledge to preserve?

I am humbled by how poorly my own personal knowledge would fare.

Comment author: JoshuaZ 02 August 2010 12:52:46PM *  8 points [-]

I suspect that people are overestimating in their replies how much could be done with Wikipedia. People in general underestimate a) how much technology requires bootstrapping (metallurgy is a great example of this) b) how much many technologies, even primitive ones, require large populations so that specialization, locational advantages and comparative advantage can kick in (People even in not very technologically advanced cultures have had tech levels regress when they settle large islands or when their locations get cut off from the mainland. Tasmania is the classical example of this. The inability to trade with the mainland caused large drops in tech level). So while Wikipedia makes sense, it would also be helpful to have a lot of details on do-it-yourself projects that could use pre-existing remnants of existing technology. There are a lot of websites and books devoted to that topic, so that shouldn't be too hard.

If we are reducing to a small population, we may need also to focus on getting through the first one or two generations with an intact population. That means that a handful of practical books on field surgery, midwifing, and similar basic medical issues may become very necessary.

Also, when you specify "ordinary men and women", do you mean people who all speak the same language? And do you mean by "ordinary" roughly people from developed-world countries? That's what many people seem to mean when questions like this are posed, and those assumptions could alter things considerably. For example, if it really is a random sample, then inter-language dictionaries will be very important. But if the sample involves some people from the developing world, they are more likely to have some of the knowledge base for working in a less technologically advanced situation that people in the developed world will lack (even this may only be true to a very limited extent, because the tech level of the developing world is in many respects very high compared to the tech level of humans for most of human history. Many countries described as developing world are in better shape than, for example, much of Europe in the Middle Ages.)

Comment author: arundelo 02 August 2010 02:55:02PM 3 points [-]

how much technology requires bootstrapping (metallurgy is a great example of this)

I would love to see a reality TV show about a metallurgy expert making a knife or other metal tool from scratch. The expert would be provided food and shelter but would have no equipment or materials for making metal, and so would have to find and dig up the ore themselves, build their own oven, and whatever else you would have to do to make metal if you were transported to the stone age.

Comment author: RobinZ 02 August 2010 05:42:12PM 2 points [-]

One problem you would face with such a show is if the easily-available ore is gone.

Comment author: arundelo 13 June 2011 05:19:58AM 1 point [-]
Comment author: KrisC 02 August 2010 08:23:33PM 4 points [-]

Maps.

Locations of pre-disaster settlements to be used as supply caches. Locations of structures to be used for defense. Locations of physical resources for ongoing exploitation: water, fisheries, quarries. Locations of no travel zones to avoid pathogens.

Comment author: RobinZ 02 August 2010 11:47:30AM *  3 points [-]

In rough order of addition to the corpus of knowledge:

  1. The scientific method.

  2. Basic survival skills (e.g. navigation).

  3. Edit: Basic agriculture (e.g. animal husbandry, crop cultivation).

  4. Calculus.

  5. Classical mechanics.

  6. Basic chemistry.

  7. Basic medicine.

  8. Basic political science.

Comment author: NancyLebovitz 02 August 2010 03:20:25PM 6 points [-]

Basic sanitation!

Comment author: jimrandomh 02 August 2010 06:14:48PM 2 points [-]

Presupposing that only a limited amount of knowledge could be saved seems wrong. You could bury petabytes of data in digital form, then print out a few books' worth of hints for getting back to the technology level necessary to read it.

Comment author: [deleted] 02 August 2010 06:40:46AM 2 points [-]

A dead tree copy of Wikipedia. A history book about ancient handmade tools and techniques from prehistory to now. A bunch of K-12 school books about math and science. Also as many various undergraduate and postgraduate level textbooks as possible.

Comment author: JanetK 02 August 2010 11:30:32AM 4 points [-]

Wikipedia is a great answer because we know that most but not all of the information is good. Some is nonsense. This will force the future generations to question and maybe develop their own 'science' rather than worship the great authority of 'the old and holy books'.

Comment author: JoshuaZ 02 August 2010 12:56:00PM 2 points [-]

The knowledge about science issues generally tracks our current understanding very well. The historical knowledge that is wrong will be extremely difficult for people to check after an apocalyptic event, and even then it is largely correct. In fact, if Wikipedia's science content really were bad enough to matter, it would be an awful thing to bring into this situation, since having correct knowledge or not could alter whether or not humanity survives at all.

Comment author: Oscar_Cunningham 02 August 2010 11:43:05AM 3 points [-]

Wikipedia would also contain a lot of info about current people and places, which would no longer be remotely useful.

Comment author: sketerpot 02 August 2010 07:18:19AM 2 points [-]

A dead-tree copy of Wikipedia has been estimated at around 1,420 volumes. Here's an illustration, with a human for scale. It's big. You might as well go for broke and hole up in a library when the Big Catastrophe happens.

Comment author: mstevens 02 August 2010 11:03:25AM 2 points [-]

One of these http://thewikireader.com/ with rechargeable batteries and a solar charger could work.

Comment author: NihilCredo 02 August 2010 06:52:01PM 3 points [-]

Until some critical part oxidizes or otherwise breaks. Which will likely be a long time before the new society is able to build a replacement.

Comment author: andreas 01 August 2010 10:35:55PM 4 points [-]
Comment author: Pavitra 25 August 2010 02:11:16AM *  3 points [-]

There's an idea I've seen around here on occasion to the effect that creating and then killing people is bad, so that for example you should be careful that when modeling human behavior your models don't become people in their own right.

I think this is bunk. Consider the following:

--

Suppose you have an uploaded human, and fork the process. If I understand the meme correctly, this creates an additional person, such that killing the second process counts as murder.

Does this still hold if the two processes are not made to diverge; that is, if they are deterministic (or use the same pseudorandom seed) and are never given differing inputs?

Suppose that instead of forking the process in software, we constructed an additional identical computer, set it on the table next to the first one, and copied the program state over. Suppose further that the computers were cued up to each other so that they were not only performing the same computation, but executing the steps at the same time as each other. (We won't readjust the sync on an ongoing basis; it's just part of the initial conditions, and the deterministic nature of the algorithm ensures that they stay in step after that.)

Suppose that the computers were not electronic, but insanely complex mechanical arrays of gears and pulleys performing the same computation -- emulating the electronic computers at reduced speed, perhaps. Let us further specify that the computers occupy one fewer spatial dimension than the space they're embedded in, such as flat computers in 3-space, and that the computers are pressed flush up against each other, corresponding gears moving together in unison.

What if the corresponding parts (which must be staying in synch with each other anyway) are superglued together? What if we simply build a single computer twice as thick? Do we still have two people?

--

No, of course not. And, on reflection, it's obvious that we never did: redundant computation is not additional computation.

So what if we cause the ems to diverge slightly? Let us stipulate that we give them some trivial differences, such as the millisecond timing of when they receive their emails. If they are not actively trying to diverge, I anticipate that this would not have much difference to them in the long term -- the ems would still be, for the most part, the same person. Do we have two distinct people, or two mostly redundant people -- perhaps one and a tiny fraction, on aggregate? I think a lot of people will be tempted to answer that we have two.

But consider, for a moment, if we were not talking about people but -- say -- works of literature. Two very similar stories, even if by a raw diff they share almost no words, are of not much more value than only one of them.

The attitude I've seen seems to treat people as a special case -- as a separate magisterium.

--

I wish to assert that this value system is best modeled as a belief in souls. Not immortal souls with an afterlife, you understand, but mortal souls, that are created and destroyed. And the world simply does not work that way.

If you really believed that, you'd try to cause global thermonuclear war, in order to prevent the birth of billions or more of people who will inevitably be killed. It might take the heat death of the universe, but they will die.

Comment author: ata 25 August 2010 03:09:19AM *  2 points [-]

You make good points. I do think that multiple independent identical copies have the same moral status as one. Anything else is going to lead to absurdities like those you mentioned, like the idea of cutting a mechanical computer in half and doubling its moral worth.

I have for a while had a feeling that the moral value of a being's existence has something to do with the amount of unique information generated by its mind, resulting from its inner emotional and intellectual experience. (Where "has something to do with" = it's somewhere in the formula, but not the whole formula.) If you have 100 identical copies of a mind, and you delete 99 of them, you have not lost any information. If you have two slightly divergent copies of a mind, and you delete one of them, then that's bad, but only as bad as destroying whatever information exists in it and not the other copy. Abortion doesn't seem to be a bad thing (apart from any pain caused; that should still be minimized) because a fetus's brain contains almost no information not compressible to its DNA and environmental noise, neither of which seems to be morally valuable. Similar with animals; it appears many animals have some inner emotional and intellectual experience (to varying degrees), so I consider deleting animal minds and causing them pain to have terminal negative value, but not nearly as great as doing the same to humans. (I also suspect that a being's value has something to do with the degree to which its mind's unique information is entangled with and modeled (in lower resolution) by other minds, à la I Am A Strange Loop.)

Comment author: Sewing-Machine 24 August 2010 04:06:24AM 3 points [-]

Some hobby Bayesianism. A typical challenge for a rationalist is that there is some claim X to be evaluated, it seems preposterous, but many people believe it. How should you take account of this when considering how likely X is to be true? I'm going to propose a mathematical model of this situation and discuss two of it's features.

This is based on a continuing discussion with Unknowns, who I think disagrees with what I'm going to present, or with its relevance to the "typical challenge."

Summary: If you learn that a preposterous hypothesis X is believed by many people, you should not correct your prior probability P(X) by a factor larger than the reciprocal of P(Y), your prior probability for the hypothesis Y = "X is believed by many people." One can deduce an estimate of P(Y) from an estimate of the quantity "if I already knew that at least n people believed X, how likely it would be that n+1 people believed X" as a function of n. It is not clear how useful this method of estimating P(Y) is.

The right way to unpack "X seems preposterous, but many believe it" mathematically is as follows. We have a very low prior probability P(X), and then we have new evidence Y = "many people believe X". The problem is to evaluate P(X|Y).

One way to phrase the typical challenge is "How much larger than P(X) should P(X|Y) be?" In other words, how large is the ratio P(X|Y)/P(X)? Bayes formula immediately says something interesting about this:

P(X|Y)/P(X) = P(Y|X)/P(Y)

Moreover, since P(Y|X) < 1, the right-hand side of that equation is less than 1/P(Y). My interpretation of this: if you want to know how seriously to take the fact that many people believe something, you should consider how likely you find it that many people would believe it absent any evidence. Or a little more precisely, how likely you find it that many people would believe it if the amount of evidence available to them was unknown to you. You should not correct your prior for X by more than the reciprocal of this probability.

Comment: how much less than 1 P(Y|X) is depends on the nature of X. For instance, if X is the claim "the Riemann hypothesis is false" then it is unclear to me how to estimate P(Y|X), but (since it is conceivable to me that RH is false, but still it is widely believed) it might be quite small. If X is an everyday claim like "it's a full moon tomorrow", or a spectacular claim like "Jesus rose from the dead", it seems like P(Y|X) is very close to 1. So sometimes 1/P(Y) is a good approximation to P(X|Y)/P(X), but maybe sometimes it is a big overestimation.

What about P(Y)? Is there a way to estimate it, or at least approach its estimation? Let's give ourselves a little more to work with, by quantifying "many people" in "many people believe X". Let Y(n) be the assertion "at least n people believe X." Note that this model doesn't specify what "believe" means -- in particular it does not specify how strongly n people believe X, nor how smart or expert those n people are, nor where in the world they are located... if there is a serious weakness in this model it might be found here.

Another application of Bayes theorem gives us

P(Y(n+1))/P(Y(n)) = P(Y(n+1)|Y(n))

(Since P(Y(n)|Y(n+1)) = 1, i.e. if we know n+1 people believe X, then of course n people believe X). Squinting a little, this gives us a formula for the derivative of the logarithm of P(Y(n)). Yudkowsky has suggested naming the log of a probability an "absurdity," let's write A(Y(n)) for the absurdity of Y(n).

d/dn A(Y(n)) = A(Y(n+1)|Y(n))

So up to an additive constant, A(Y(n)) is the integral from 1 to n of A(Y(m+1)|Y(m))dm. So an ansatz for P(Y(n+1)|Y(n)) = exp(A(Y(n+1)|Y(n))) will allow us to say something about P(Y(n)), up to a multiplicative constant.

The shape of P(Y(n+1)|Y(n)) seems like it could have a lot to do with what kind of statement X is, but there is one thing that seems likely to be true no matter what X is: if N is the total population of the world and n/N is close to zero, then P(Y(n+1)|Y(n)) is also close to zero, and if n/N is close to one then P(Y(n+1)|Y(n)) is also close to one. I might work out an example ansatz like this in a future comment, if this one stands up to scrutiny.
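
For concreteness, a toy version of this in code; the logistic ansatz and the population size are invented, so only the shape of the calculation is meant seriously:

    # A toy version of the model above. The logistic ansatz for P(Y(n+1)|Y(n)) and
    # the population size are invented for illustration; the only structural
    # constraint respected is the one stated above (near 0 for small n/N, near 1
    # for n/N close to 1).
    import math

    N = 1000.0   # size of the relevant population (a placeholder)

    def p_next_given_n(m):
        """Ansatz for P(Y(m+1) | Y(m)): a logistic curve in m/N, nothing more."""
        return 1.0 / (1.0 + math.exp(-10.0 * (m / N - 0.5)))

    def absurdity_Y(n, p_Y1=0.5):
        """A(Y(n)) = log P(Y(n)) = log P(Y(1)) + sum over m of log P(Y(m+1)|Y(m))."""
        return math.log(p_Y1) + sum(math.log(p_next_given_n(m)) for m in range(1, n))

    for n in (10, 100, 500, 900):
        a = absurdity_Y(n)
        # Bayes bound from above: P(X|Y(n))/P(X) <= 1/P(Y(n)), so the log of the
        # maximum update factor is -A(Y(n)).
        print(n, round(a, 1), round(-a, 1))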

Comment author: [deleted] 11 August 2010 06:38:26AM *  3 points [-]

Where should the line be drawn regarding the status of animals as moral objects/entities? E.g., do you think it is ethical to boil lobsters alive? It seems to me there is a full spectrum of possible answers: at one extreme only humans are valued, or only primates, only mammals, only vertebrates, or at the other extreme, any organism with even a rudimentary nervous system (or any computational, digital isomorphism thereof) could be seen as a moral object/entity.

Now this is not necessarily a binary distinction; if shrimp have intrinsic moral value, it does not follow that they must have an equal value to humans or other 'higher' animals. As I see it, there are two possibilities: either we come to a point where the moral value drops to zero, or else we decide that the moral value of simpler entities merely approaches zero in the limit: e.g. a c. elegans roundworm with its 300 neurons might have a 'hedonic coefficient' of 3x10^-9. I personally favor the former; the latter just seems absurd to me, but I am open to arguments or any comments/criticisms.

Comment author: Johnicholas 09 August 2010 11:08:26AM 3 points [-]

Say a "catalytic pattern" is something like scaffolding, an entity that makes it easier to create (or otherwise obtain) another entity. An "autocatalytic pattern" is a sort of circular version of that, where the existence of an instance of the pattern acts as scaffolding for creating or otherwise obtaining another entity.

Autocatalysis is normally mentioned in the "origin of life" scientific field, but it also applies to cultural ratchets. An autocatalytic social structure will catalyze a few more instances of itself (frequently not expanding without end - rather, a niche is filled), and then the population has some redundancy and recoverability, acting as a ratchet.

For example, driving on the right(left) in one region catalyzes driving on the right(left) in an adjacent region.

Designing circular or self-applicable entities is kind of tricky, but it's not as tricky as it might be - often there's an attraction basin around a hypothesized circular entity, where X catalyzes Y which is very similar to X, and Y catalyzes Z which is very similar to Y, and so focusing your search sufficiently and then iterating, or iterating-and-tweaking, can often get you through the last, trickiest steps.

Douglas Hofstadter catalyzed the creation (by Lee Sallows) of a "Pangram Machine" that exploits this attraction basin to create a self-describing sentence that starts "This Pangram contains four as, [...]" - see http://en.wikipedia.org/wiki/Pangram
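
The iterate-and-tweak idea is small enough to sketch in code; this toy counts only two letters and is merely in the spirit of Sallows's machine, which did far more work:

    # A toy version of the iterate-and-recount idea behind the Pangram Machine.
    # This only counts two letters, and it may fall into a cycle rather than a
    # fixed point; Sallows's actual search was far more elaborate.

    NUMBER_WORDS = ["zero", "one", "two", "three", "four", "five", "six", "seven",
                    "eight", "nine", "ten", "eleven", "twelve", "thirteen",
                    "fourteen", "fifteen", "sixteen", "seventeen", "eighteen",
                    "nineteen", "twenty"]

    def sentence(e_count, t_count):
        return "this sentence contains %s e's and %s t's" % (
            NUMBER_WORDS[e_count], NUMBER_WORDS[t_count])

    guess, seen = (5, 5), set()
    while guess not in seen:
        seen.add(guess)
        s = sentence(*guess)
        recount = (s.count("e"), s.count("t"))   # X catalyzes Y, which is close to X
        if recount == guess:
            print("fixed point:", s)
            break
        guess = recount
    else:
        print("fell into a cycle after", len(seen), "guesses")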

Has there been any work on measuring, studying attraction basins around autocatalytic entities?

Comment author: NancyLebovitz 08 August 2010 10:44:38PM 3 points [-]

Would people be interested in a place on LW for collecting book recommendations?

I'm reading The Logic of Failure and enjoying it quite a bit. I wasn't sure whether I'd heard of it here, and I found Great Books of Failure, an article which hadn't crossed my path before.

There's a recent thread about books for a gifted young tween which might or might not get found by someone looking for good books... and so on.

Would it make more sense to have a top level article for book recommendations or put it in the wiki? Or both?

Comment author: Yoreth 08 August 2010 05:09:32PM 3 points [-]

I think I may have artificially induced an Ugh Field in myself.

A little over a week ago it occurred to me that perhaps I was thinking too much about X, and that this was distracting me from more important things. So I resolved to not think about X for the next week.

Of course, I could not stop X from crossing my mind, but as soon as I noticed it, I would sternly think to myself, "No. Shut up. Think about something else."

Now that the week's over, I don't even want to think about X any more. It just feels too weird.

And maybe that's a good thing.

Comment author: Cyan 08 August 2010 05:48:00PM *  3 points [-]

I have also artificially induced an Ugh Field in myself. A few months ago, I was having a horrible problem with websurfing procrastination. I started using Firefox for browsing and LeechBlock to limit (but not eliminate) my opportunities for websurfing instead of doing work. I'm on a Windows box, and for the first three days I disabled IE, but doing so caused knock-on effects, so I had to re-enable it. However, I knew that resorting to IE to surf would simply recreate my procrastination problem, so... I just didn't. Now, when the thought occurs to me to do so, it auto-squelches.

Comment author: Unknowns 08 August 2010 06:35:53PM *  5 points [-]

I predict with 95% confidence that within six months you will have recreated your procrastination problem with some other means.

Comment author: Cyan 09 August 2010 08:04:26PM 5 points [-]

Your lack of confidence in me has raised my ire. I will prove you wrong!

Comment author: Unknowns 09 August 2010 08:07:17PM 3 points [-]

To be settled by February 8, 2011!

Comment author: Unknowns 08 February 2011 03:24:58PM 2 points [-]

Did you start procrastinating again?

Comment author: Cyan 09 February 2011 03:36:08PM *  1 point [-]

Yep. Eventually I sought medical treatment.

Comment author: knb 06 August 2010 03:43:10AM 3 points [-]

Does anyone have any book recommendations for a gifted young teen? My nephew is 13, and he recently blew the lid off of a school-administered IQ test.

For his birthday, I want to give him some books that will inspire him to achieve great things and live a happy life full of hard work. At the very least, I want to give him some good math and science books. He has already taken algebra, geometry, and introductory calculus, so he knows some math already.

Comment author: cousin_it 06 August 2010 07:02:38PM *  17 points [-]

Books are not enough. Smart kids are lonely. Get him into a good school (or other community) where he won't be the smartest one. That happened to me at 11 when I was accepted into Russia's best math school and for the first time in my life I met other people worth talking to, people who actually thought before saying words. Suddenly, to regain my usual position of the smart kid, I had to actually work hard. It was very very important. I still go to school reunions every year, even though I finished it 12 years ago.

Comment author: Wei_Dai 06 August 2010 08:32:43PM 5 points [-]

Alternatively, not having any equally smart kids to talk to will force him to read books and/or go online for interesting ideas and conversation. I don't think I had any really interesting real-life conversations until college, when I did an internship at Microsoft Research, and I'd like to think that I turned out fine.

My favorite book, BTW, is A Fire Upon the Deep. But one of the reasons I like it so much is that I was heavily into Usenet when I first read it, and I'm not sure that aspect of the book will resonate as much today. (I was determined to become a one-man Sandor Arbitration Intelligence. :)

Comment author: orthonormal 06 August 2010 07:16:48PM 2 points [-]

Seconded. Whether he's exposed to a group of people who think ideas can be cool could be the biggest influence on him for the rest of his life.

Comment author: Risto_Saarelma 06 August 2010 08:05:24AM 9 points [-]

Forum favorite Good and Real looks reasonably accessible to me, and covers a lot of ground. Also seconding Gödel, Escher, Bach.

The Mathematical Experience has essays about doing mathematics, written by actual mathematicians. It seems like very good reading for someone who might be considering studying math.

The Road to Reality has Roger Penrose trying to explain all of modern physics and the required mathematics without pulling any punches and starting from grade school math in a single book. Will probably cause a brain meltdown at some point on anyone who doesn't already know the stuff, but just having a popular science style book that nevertheless goes on to explain the general theory of relativity without handwaving is pretty impressive. Doesn't include any of Penrose's less fortunate forays into cognitive science and AI.

Darwin's Dangerous Idea by Daniel Dennett explains how evolution isn't just something that happens in biology, but how it turns up in all sorts of systems.

The Armchair Universe, an old book about "computer recreations"; probably the most famous chapter is the one introducing the Core War game. The other topics are similar, setting up an environment with a simple program that has elaborate emergent behavior coming out of it. Assumes the reader might actually program the recreations themselves, and provides appropriate detail.

Surely You're Joking, Mr. Feynman is pretty much entertainment, but still very good. Feynman is still the requisite trickster-god patron saint of math and science.

Code: The Hidden Language of Computer Hardware and Software explains how computers are put together, starting from really concrete first principles (flashing Morse code with flashlights, mechanical relay circuits) and getting up to microprocessors, RAM and executable program code.

Comment author: orthonormal 06 August 2010 07:12:46PM 2 points [-]

Good and Real is superb, but really too dry for a 13-year-old. I'd wait on that one.

Surely You're Joking is also fantastic, but get it read and approved by your nephew's parents first; there's a few sexual stories with a hint of a PUA worldview.

Comment author: Kevin 06 August 2010 03:59:10AM 7 points [-]

Godel Escher Bach!

Comment author: Soki 07 August 2010 05:07:15AM *  4 points [-]

knb, does your nephew know about lesswrong, rationality and the Singularity? I guess I would have enjoyed reading such a website when I was a teenager.

When it comes to a physical book, Engines of Creation by Drexler can be a good way to introduce him to nanotechnology and what science can make happen. (I know that nanotech is far less important than FAI, but I think it is more "visual": you can imagine those nanobots manufacturing stuff or curing diseases, while you cannot imagine a hard takeoff.)
Teenagers need dreams.

Comment author: knb 07 August 2010 06:13:42AM *  2 points [-]

My sister and brother-in-law are both semi-religious theists, so I'm a bit reluctant to introduce him to anything as hardcore-atheist as Less Wrong, at least right now. Going through that huge theist-to-atheist identity transition can be really traumatic. I think it would be better if he was a bit older before he had confront those ideas.

I was 16 before I really allowed myself to accept that I didn't believe in God, and that was still a major crisis for me. If he starts getting into hardcore rationality material this early, I'm afraid it could force a choice between rationality and wishful thinking that he may not be ready to make.

Comment author: Interpolate 07 August 2010 06:58:08AM *  2 points [-]

If he is gifted and interested in science, introducing him to lesswrong, rationality and the Singularity could have a substantial positive impact on his academic development. What would be the worst that could happen?

Comment author: knb 07 August 2010 08:19:17AM 5 points [-]

My concern is not just that it would be traumatic, but that it will be so traumatic that he'll rationalize himself into a "belief in belief" situation. I had my crisis of faith when I was close to his age (14) and I wasn't ready to accept something that would alienate me from my family yet, so I simply told myself that I believed, and tried not to think about the issue. (I suspect this is why most people don't come out as atheists until after they've established separate identities from their parents and families.)

A lot of people never escape from these traps. I think waiting a bit--until he's somewhat older and more mature--will make him more likely to come to the right conclusions in the end.

Comment author: MartinB 09 August 2010 09:20:11PM 2 points [-]

The Heinlein juveniles. 'Have Space Suit - Will Travel' and others have the whole self-reliance, work-hard-and-achieve-things ethos strongly ingrained. I cannot judge how well they integrate with your current culture, but in the 50s they sold well, and still do. But those are not specific to über-bright kids, more for the normal bright types. If he hasn't done so yet, just introducing him to the nearest big library might help a lot.

Comment author: RobinZ 06 August 2010 03:53:45AM 2 points [-]

My dad's been trying to get me to read the Feynman Lectures for ages - the man's a good writer if your nephew would be interested by physics.

Comment author: jimmy 06 August 2010 12:18:00AM 3 points [-]

Does anyone know where the page that used to live here can be found?

It was an experiment where two economists were asked to play 100 turn asymmetric prisoners dilemma with communication on each turn to the experimenters, but not each other.

It was quite amusing in that even though they were both economists and should have known better, the guy on the 'disadvantaged' side was attempting to have the other guy let him defect once in a while to make it "fair".

Comment author: Douglas_Knight 06 August 2010 04:07:36AM *  2 points [-]
Comment author: gwern 06 August 2010 04:02:42AM 2 points [-]
Comment author: gwern 05 August 2010 05:12:10AM 3 points [-]

"CIA Software Developer Goes Open Source, Instead":

"Burton, for example, spent years on what should’ve been a straightforward project. Some CIA analysts work with a tool, “Analysis of Competing Hypotheses,” to tease out what evidence supports (or, mostly, disproves) their theories. But the Java-based software is single-user — so there’s no ability to share theories, or add in dissenting views. Burton, working on behalf of a Washington-area consulting firm with deep ties to the CIA, helped build on spec a collaborative version of ACH. He tried it out, using the JonBenet Ramsey murder case as a test. Burton tested 51 clues — the lack of a scream, evidence of bed-wetting — against five possible culprits. “I went in, totally convinced it all pointed to the mom,” Burton says. “Turns out, that wasn’t right at all.”"

Comment author: Rain 09 August 2010 08:04:33PM *  3 points [-]

Far more interesting than the software is the chapter in the CIA book Psychology of Intelligence Analysis where they describe the method:

Analysis of competing hypotheses, sometimes abbreviated ACH, is a tool to aid judgment on important issues requiring careful weighing of alternative explanations or conclusions. It helps an analyst overcome, or at least minimize, some of the cognitive limitations that make prescient intelligence analysis so difficult to achieve.

ACH is an eight-step procedure grounded in basic insights from cognitive psychology, decision analysis, and the scientific method. It is a surprisingly effective, proven process that helps analysts avoid common analytic pitfalls. Because of its thoroughness, it is particularly appropriate for controversial issues when analysts want to leave an audit trail to show what they considered and how they arrived at their judgment.

Summary and conclusions:

Three key elements distinguish analysis of competing hypotheses from conventional intuitive analysis.

  • Analysis starts with a full set of alternative possibilities, rather than with a most likely alternative for which the analyst seeks confirmation. This ensures that alternative hypotheses receive equal treatment and a fair shake.
  • Analysis identifies and emphasizes the few items of evidence or assumptions that have the greatest diagnostic value in judging the relative likelihood of the alternative hypotheses. In conventional intuitive analysis, the fact that key evidence may also be consistent with alternative hypotheses is rarely considered explicitly and often ignored.
  • Analysis of competing hypotheses involves seeking evidence to refute hypotheses. The most probable hypothesis is usually the one with the least evidence against it, not the one with the most evidence for it. Conventional analysis generally entails looking for evidence to confirm a favored hypothesis.
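
The scoring idea at the heart of this is easy to sketch; the matrix below is invented and is only a crude stand-in for the full eight-step procedure:

    # A bare-bones sketch of the ACH scoring step: rate each piece of evidence as
    # consistent ("C"), inconsistent ("I"), or neutral ("N") with each hypothesis,
    # then rank hypotheses by how much evidence argues against them. The matrix is
    # invented and the clue labels are generic.

    hypotheses = ["H1", "H2", "H3"]
    evidence = {
        "clue 1": {"H1": "C", "H2": "I", "H3": "N"},
        "clue 2": {"H1": "N", "H2": "C", "H3": "I"},
        "clue 3": {"H1": "C", "H2": "C", "H3": "I"},
    }

    def evidence_against(h):
        return sum(1 for ratings in evidence.values() if ratings[h] == "I")

    # The most probable hypothesis is the one with the least evidence against it.
    for h in sorted(hypotheses, key=evidence_against):
        print(h, "inconsistent items:", evidence_against(h))
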
Comment author: Alex_Altair 03 August 2010 08:53:48PM 3 points [-]

What's the policy on User pages in the wiki? Can I write my own for the sake of people having a reference when they reply to my posts, or are they only for somewhat accomplished contributors?

Comment author: Blueberry 04 August 2010 01:09:41AM 3 points [-]

I can't imagine any reason why it would be a problem to make a User page. Go ahead.

Comment author: WrongBot 04 August 2010 12:53:55AM 2 points [-]

I haven't seen any sort of policy articulated. I just sort of went for it, and haven't gotten any complaints yet. Personally, I'd love to see more people with wiki user pages, since the LW site itself doesn't have much in the way of profile features.

Comment author: timtyler 02 August 2010 06:16:31PM *  3 points [-]

I made some comments on the recently-deleted threads that got orphaned when the whole topic was banned and the associated posts were taken down. Currently no-one can reply to the comments. They don't relate directly to the banned subject matter - and some of my messages survive despite the context being lost.

Some of the comments were SIAI-critical - and it didn't seem quite right to me at the time for the moderator to crush any discussion about them. So, I am reposting some of them as children of this comment in an attempt to rectify things - so I can refer back to them, and so others can comment - if they feel so inclined:

Comment author: timtyler 02 August 2010 06:17:15PM 6 points [-]

[In the context of SIAI folks thinking an unpleasant AI was likely]

The SIAI derives its funding from convincing people that the end is probably nigh - and that they are working on a potential solution. This is not the type of organisation you should trust to be objective on such an issue - they have obvious vested interests.

Comment author: Johnicholas 02 August 2010 07:42:03PM 2 points [-]

I've noticed this structural vulnerability to bias too - Can you think of any structural changes that might reduce or eliminate this bias?

Maybe SIAI ought to be offering a prize for substantially justified criticism of some important positional documents, as judged by some disinterested agent?

Comment author: timtyler 02 August 2010 08:20:25PM *  3 points [-]

They are already getting some critical feedback.

I think I made much the same points in my DOOM! video. DOOM mongers:

  • tend to do things like write books about THE END OF THE WORLD - which gives them a stake in promoting the topic ...and...

  • are a self-selected sample of those who think DOOM is very important (and so, often, highly likely) - so naturally they hold extreme views - and represent a sample from the far end of the spectrum;

  • clump together, cite each others papers, and enjoy a sense of community based around their unusual views.

It seems tricky for the SIAI to avoid the criticism that they have a stake in promoting the idea of DOOM - while they are funded the way they are.

Similarly, I don't see an easy way of avoiding the criticism that they are a self-selected sample from the extreme end of a spectrum of DOOM beliefs either.

If we could independently establish p(DOOM), that would help - but measuring it seems pretty challenging.

IMO, a prize wouldn't help much - but I don't know for sure. Many people behave irrationally around prizes - so it is hard to be very confident here.

I gather they are working on publishing some positional documents. It seems to be a not-unreasonable move. If there is something concrete to criticise, critics will have something to get their teeth into.

Comment author: timtyler 02 August 2010 06:16:58PM *  3 points [-]

They used to have a "commitment" that:

"Technology developed by SIAI will not be used to harm human life."

...on their web site. I probably missed the memo about that being taken down.

Comment author: timtyler 02 August 2010 06:17:25PM *  2 points [-]

[In the context of SIAI folks thinking an unpleasant AI was likely]

Re: "The justification is that uFAI is a lot easier to make."

That seems like naive reasoning. It is a lot easier to make a random mess of ASCII that crashes or loops - and yet software companies still manage to ship working products.

Comment author: WrongBot 02 August 2010 06:26:03PM 3 points [-]

Those software companies test their products for crashes and loops. There is a word for testing an AI of unknown Friendliness and that word is "suicide".

Comment author: timtyler 02 August 2010 06:39:14PM *  4 points [-]

That just seems to be another confusion to me :-(

The argument - to the extent that I can make sense of it - is that you can't restrain an super-intelligent machine - since it will simply use its superior brainpower to escape from the constraints.

We successfully restrain intelligent agents all the time - in prisons. The prisoners may be smarter than the guards, and they often outnumber them - and yet still the restraints are usually successful.

Some of the key observations to my mind are:

  • You can often restrain one agent with many stupider agents;
  • The restraining agents do not need to be humans - they can be other machines;
  • You can often restrain one agent with a totally dumb cage;
  • Complex systems can often be tested in small pieces (unit testing);
  • Large systems can often be tested on a smaller scale before deployment;
  • Systems can often be tested in virtual environments, reducing the cost of failure.

Discarding the standard testing-based methodology would be very silly, IMO.

Indeed, it would sabotage your project to the point that it would almost inevitably be beaten - and there is very little point in aiming to lose.

Comment author: JGWeissman 02 August 2010 09:55:45PM *  2 points [-]

software companies still manage to ship working products.

Software companies manage to ship products that do sort of what they want, that they can patch to more closely do what they want. This is generally after rounds of internal testing, in which they try to figure out if it does what they want by running it and observing the result.

But an AGI, whether FAI or uFAI, will be the last program that humans get to write and execute unsupervised. We will not get to issue patches.

Comment author: orthonormal 03 August 2010 06:03:53PM 11 points [-]

But an AGI, whether FAI or uFAI, will be the last program that humans get to write and execute unsupervised. We will not get to issue patches.

Or to put it another way, the revolution will not be beta tested.

Comment author: rwallace 02 August 2010 10:54:04PM 2 points [-]

But an AGI, whether FAI or uFAI, will be the last program that humans get to write and execute unsupervised. We will not get to issue patches.

In fiction, yes. Fictional technology appears overnight, works the first time without requiring continuing human effort for debugging and maintenance, and can do all sorts of wondrous things.

In real life, the picture is very different. Real life technology has a small fraction of the capabilities of its fictional counterpart, and is developed incrementally, decade by painfully slow decade. If intelligent machines ever actually come into existence, not only will there be plenty of time to issue patches, but patching will be precisely the process by which they are developed in the first place.

Comment author: JoshuaZ 03 August 2010 02:40:43AM 3 points [-]

I agree somewhat with this as a set of conclusions, but your argument deserves to get downvoted because you've made statements that are highly controversial. The primary issue is that, if one thinks that an AI can engage in recursive self-improvement and can do so quickly, then once there's an AI that's at all capable of such improvement, the AI will rapidly move outside our control. There are arguments against such a possibility being likely, but this is not a trivial matter. Moreover, comparing the situation to fiction is unhelpful- just because something is common in fiction that's not an argument that such a situation can't actually happen in practice. Reversed stupidity is not intelligence.

Comment author: NihilCredo 03 August 2010 03:09:04AM *  2 points [-]

your argument deserves to get downvoted because you've made statements that are highly controversial

Did you accidentally pick the wrong adjective, or did you seriously mean that controversy is unwelcome in LW comment threads?

Comment author: ata 03 August 2010 03:18:33AM *  4 points [-]

I read the subtext as "...you've made statements that are highly controversial without attempting to support them". Suggesting that there will be plenty of time to debug, maintain, and manually improve anything that actually fits the definition of "AGI" is a very significant disagreement with some fairly standard LW conclusions, and it may certainly be stated, but not as a casual assumption or a fact; it should be accompanied by an accordingly serious attempt to justify it.

Comment author: Morendil 02 August 2010 09:45:57PM 2 points [-]

It is a lot easier to make a random mess of ASCII that crashes or loops - and yet software companies still manage to ship working products.

Still, a lot of these "working products" are the output of a filtering process which starts from a random mess of ASCII that crashes or loops, and tweaks it until it's less obviously broken. (Most of the job of testing being, typically, left to the end user.)

Comment author: EStokes 02 August 2010 05:31:47PM 3 points [-]

Are there any posts people would like to see reposted? For example, Where Are We seems like it maybe should be redone, or at least put a link in About... Or so I thought, but I just checked About and the page for introductions wasn't linked, either. Huh.

Comment author: thomblake 02 August 2010 06:29:30PM 3 points [-]

It would be nice if we had profile pages with machine-readable information and an interface for simple queries so posts such as that one would be redundant.

Comment author: zaph 02 August 2010 01:03:09PM 3 points [-]

I came across a blurb on Ars Technica about "quantum memory" with the headline proclaiming that it may "topple Heisenberg's uncertainty principle". Here's the link: http://arstechnica.com/science/news/2010/08/quantum-memory-may-topple-heisenbergs-uncertainty-principle.ars?utm_source=rss&utm_medium=rss&utm_campaign=rss

They didn't source the specific article, but it seems to be this one, published in Nature Physics. Here's that link: http://www.nature.com/nphys/journal/vaop/ncurrent/full/nphys1734.html

This is all well above my pay grade. Is this all conceptual? Are the scientists involved anywhere near an experiment to verify any of this? In a word, huh?

Comment author: Mass_Driver 31 August 2010 12:39:59AM 2 points [-]

It might be useful to have a short list of English words that indicate logical relationships or concepts often used in debates and arguments, so as to enable people who are arguing about controversial topics to speak more precisely.

Has anyone encountered such a list? Does anyone know of previous attempts to create such lists?

Comment author: [deleted] 28 August 2010 03:35:04PM *  2 points [-]

Followup to: Making Beliefs Pay Rent in Anticipated Experiences

In the comments section of Making Beliefs Pay Rent, Eliezer wrote:

I follow a correspondence theory of truth. I am also a Bayesian and a believer in Occam's Razor. If a belief has no empirical consequences then it could receive no Bayesian confirmation and could not rise to my subjective attention. In principle there are many true beliefs for which I have no evidence, but in practice I can never know what these true beliefs are, or even focus on them enough to think them explicitly, because they are so vastly outnumbered by false beliefs for which I can find no evidence.

If I am interpreting this correctly, Eliezer is saying that there is a nearly infinite space of unfalsifiable hypotheses, and so our priors for each individual hypothesis should be very close to zero. I agree with this statement, but I think it raises a philosophical problem: doesn't this same reasoning apply to any factual question? Given a set of data D, there must be a nearly infinite space of hypotheses that (a) explain D and (b) make predictions (fulfilling the criteria discussed in Making Beliefs Pay Rent). Though Occam's Razor can help us to weed out a large number of these possible hypotheses, a mind-bogglingly large number would still remain, forcing us to have a low prior for each individual hypothesis. (In philosophy of science, this is known as "underdetermination.") Or is there a flaw in my reasoning somewhere?

Comment author: PaulAlmond 28 August 2010 05:37:53PM 1 point [-]

Surely, this is dealt with by considering the amount of information in the hypothesis? If we consider each hypothesis that can be represented with 1,000 bits of information, there will only be a maximum of 2^1,000 such hypotheses, and if we consider each hypothesis that can be represented with n bits of information, there will only be a maximum of 2^n - and that is before we even start eliminating hypotheses that are inconsistent with what we already know. If we favor hypotheses with less information content, then we end up with a small number of hypotheses that can be taken reasonably seriously, and the remainder being unlikely - and progressively more unlikely as n increases, so that when n is sufficiently large, we can, practically, dismiss any hypotheses.
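
One toy way to see why favoring low information content keeps things manageable (the particular weighting is just an illustrative choice):

    # A toy illustration of why penalizing information content keeps the prior
    # well behaved. There are at most 2**n hypotheses of length n bits; giving each
    # a weight of 4**-n (one simple choice; the standard construction uses
    # prefix-free codes and 2**-n) makes the total mass finite, while any
    # individual long hypothesis gets a vanishingly small share.
    from fractions import Fraction

    total = Fraction(0)
    for n in range(1, 41):
        count = 2**n                   # hypotheses of length n (at most)
        weight_each = Fraction(1, 4**n)
        total += count * weight_each   # contributes 2**-n per length class

    print(float(total))                # approaches 1 as the length cutoff grows
    print(float(Fraction(1, 4**40)))   # prior on any single 40-bit hypothesis: ~8e-25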

Comment author: [deleted] 28 August 2010 09:23:10PM 1 point [-]

I agree with most of that, but why favor less information content? Though I may not fully understand the math, this recent post by cousin it seems to be saying that priors should not always depend on Kolmogorov complexity.

And, even if we do decide to favor less information content, how much emphasis should we place on it?

Comment author: PaulAlmond 28 August 2010 10:06:29PM 1 point [-]

In general, I would think that the more information is in a theory, the more specific it is, and the more specific it is, the smaller is the proportion of possible worlds which happen to comply with it.

Regarding how much emphasis we should place on it: I would say "a lot", but there are complications. Theories aren't used in isolation, but tend to provide a kind of informally put-together world view, and then there is the issue of degree of matching.

Comment author: Perplexed 28 August 2010 10:16:48PM 4 points [-]

Which theory has more information?

  • All crows are black
  • All crows are black except <270 pages specifying the exceptions>
Comment author: Snowyowl 25 August 2010 06:46:44PM 2 points [-]

Here's a thought experiment that's been confusing me for a long time, and I have no idea whether it is even possible to resolve the issues it raises. It assumes that a reality which was entirely simulated on a computer is indistinguishable from the "real" one, at least until some external force alters it. So... the question is, assuming that such a program exists, what happens to the simulated universe when it is executed?

In accordance with the arguments that Pavirta gives below me, redundant computation is not the same as additional computation. Executing the same program twice (with the same inputs each time) is equivalent to executing it once, which is equivalent to executing it five times, ten times, or a million. You are just simulating the same universe over and over, not a different one each time.

But is running the simulation once equivalent to running it ZERO times?

The obvious answer seems to be "no", but bear with me here. There is nothing special about the quarks and leptons that make up a physical computer. If you could make a Turing machine out of light, or more exotic matter, you would still be able to execute the same program on it. And if you could make such a computer in any other universe (whatever that might mean), you would still be able to run the program on it. But in such considerations, the computer used is immaterial. A physical computer is not a perfect Turing machine - it has finite memory space and is vulnerable to physical defects which introduce errors into the program. What matters is the program itself, which exists regardless of the computer it is on.

A program is a Platonic ideal, a mathematical object which cannot exist in this universe. We can make a representation of that program on a computer, but the representation is not perfect, and it is not the program itself. In the same way, a perfect equilateral triangle cannot actually be constructed in this universe; even if you use materials whose length is measured down to the atom, its sides will not be perfectly straight and its angles will not be perfectly equal. More importantly, if you then alter the representation to make one of the angles bigger, it does not change the fact that equilateral triangles have 60° angles, it simply makes your representation less accurate.

In the same way, executing a program on a computer will not alter the program itself. If there are conscious beings simulated on your computer, they existed before you ran the program, and they will exist even if you unplug the computer and throw it into a hole - because what you have in your computer is not the conscious beings, but a representation of them. And they will still exist even if you never run the program, or even if it never occurs to anyone on Earth that such a program could be made.

The problem is, this same argument could be used to justify the existence of literally everything, everywhere. So we are left with several possible conclusions: (1) Everything is "real" in some universe, and we have no way of ever finding such universes. This cannot ever be proved or falsified, and also leads to problems with the definition of "everything" and "real". (2) The initial premise is false, and only physical objects are real: simulations, thoughts and constructs are not. I think there is a philosophical school of thought that believes this to be true, though I have no idea what its name is. Regardless, there are still a lot of holes in this answer. (3) I have made a logical mistake somewhere, or I am operating from an incorrect definition of "real". It happens.

It is also worth pointing out that both (1) and (2) invalidate every ethical truth in the book, since in (1) there is always a universe in which I just caused the death of a trillion people, and in (2) there is no such thing as "ethics" - ideas aren't real, and that includes philosophical ideas.

Anyway, just bear this in mind when you think about a universe being simulated on a computer.

Comment author: Emile 25 August 2010 07:17:53PM *  2 points [-]

(1) Everything is "real" in some universe, and we have no way of ever finding such universes. This cannot ever be proved or falsified, and also leads to problems with the definition of "everything" and "real".

That's pretty much Tegmark's Multiverse, which seems pretty popular around here (I think it makes a lot of sense).

Comment author: wedrifid 24 August 2010 03:15:38AM 2 points [-]

Eliezer has written a post (ages ago) which discussed a bias when it comes to contributions to charities. Fragments that I can recall include considering the motivation for participating in altruistic efforts in a tribal situation, where having your opinion taken seriously is half the point of participation. This is in contrast to donating 'just because you want thing X to happen'. There is a preference to 'start your own effort, do it yourself' even when that would be less efficient than donating to an existing charity.

I am unable to find the post in question - I think it is distinct from 'the unit of caring'. It would be much appreciated if someone who knows the right keywords could throw me a link!

Comment author: WrongBot 25 August 2010 12:33:34AM 4 points [-]
Comment author: NQbass7 20 August 2010 07:04:52PM 2 points [-]

Alright, I've lost track of the bookmark and my google-fu is not strong enough with the few bits and pieces I remember. I remember seeing a link to a story in a lesswrong article. The story was about a group of scientists who figured out how to scan a brain, so they did it to one of them, and then he wakes up in a strange place and then has a series of experiences/dreams which recount history leading up to where he currently is, including a civilization of uploads, and he's currently living with the last humans around... something like that. Can anybody help me out? Online story, 20 something chapters I think... this is driving me nuts.

Comment author: Risto_Saarelma 20 August 2010 07:08:53PM 2 points [-]
Comment author: ABranco 19 August 2010 03:08:48AM 2 points [-]

The visual guide to a PhD: http://matt.might.net/articles/phd-school-in-pictures/

Nice map–territory perspective.

Comment author: Craig_Heldreth 15 August 2010 01:09:11PM *  2 points [-]

John Baez's This Week's Finds in Mathematical Physics has its 300th and last entry. He is moving to WordPress and Azimuth. He states he wants to concentrate on the future, and has upcoming interviews with:

Tim Palmer on climate modeling and predictability, Thomas Fischbacher on sustainability and permaculture, and Eliezer Yudkowsky on artificial intelligence and the art of rationality.

A Google search for Fischbacher + site:lesswrong.com returns no matches, and likewise none for Palmer.

That link to Fischbacher that Baez gives has a presentation on cognitive distortions and public policy which I found quite good.

Comment author: NancyLebovitz 09 August 2010 04:50:23PM 2 points [-]

I've written a post consolidating book recommendations, and the links aren't displaying with their URLs hidden behind the link text. These are links which were cut and pasted from a comment -- the formatting worked there.

Posting (including to my drafts) mysteriously doubles the spaces between the words in one of my link texts, but not the others. I tried taking that link out in case it was making the whole thing weird, but it didn't help.

I've tried using the pop-up menu for links that's available for writing posts, but that didn't change the results.

What might be wrong with the formatting?

Comment author: gwern 09 August 2010 07:03:19AM 2 points [-]

With regard to the recent proof of P!=NP: http://predictionbook.com/predictions/1588

Comment author: PeerInfinity 08 August 2010 01:57:59AM *  2 points [-]

Scenario: A life insurance salesman, who happens to be a trusted friend of a relatively-new-but-so-far-trustworthy friend of yours, is trying to sell you a life insurance policy. He makes the surprising claim that after 20 years of selling life insurance, none of his clients have died. He seems to want you to think that buying a life insurance policy from him will somehow make you less likely to die.

How do you respond?

edit: to make this question more interesting: you also really don't want to offend any of the people involved.

Comment author: wedrifid 08 August 2010 07:40:05AM *  8 points [-]

He makes the surprising claim that after 20 years of selling life insurance, none of his clients have died.

Wow. He admitted that to you? That seems to be strong evidence that most people refuse to buy life insurance from him. In a whole 20 years he hasn't sold enough insurance that even one client has died from unavoidable misfortune!

Comment author: Eliezer_Yudkowsky 08 August 2010 06:32:51AM 8 points [-]

"No."

Life insurance salesmen are used to hearing that. If they act offended, it's a sales act. If you're reluctant to say it, you're easily pressured and they're taking advantage. You say "No". If they press you, you say, "Please don't press me further." That's all.

Comment author: SilasBarta 08 August 2010 02:18:12AM *  4 points [-]

Since his sales rate probably increased with time, the average time since selling a policy is ~8 years. So his typical client hasn't died within 8 years of buying. Making a rough estimate of the age of the clients he sells to, probably 30-40, that just means the typical client has survived into his forties or so, which is normal, not special.

Furthermore, people who buy life insurance self-select for being more prudent in general.

So, even ignoring the causal separations you could find, what he's told you is not very special. Though it separates him from other salesmen, the highest likelihood ratio you should put on this piece of evidence would be something like 1.05 (i.e. ~19 out of 20 salesmen could say the same thing), or not very informative, so you are only justified in making a very slight move toward his hypothesis, even under the most generous assumptions.

You could get a better estimate of his atypicality by asking more about his clients, at which point you would have identified factors that can screen off the factor of him selling a policy.

(Though in my experience, life insurance salesmen aren't very bright, and a few sentences into that explanation, you'll get the "Oh, it's one of these people" look ...)

How'd I do?

Edit: Okay, I think I have to turn in my Bayes card for this one: I just came up with a reason why the hypothesis puts a high probability on the evidence, when in reality the evidence should have a low probability of existing. So it's more likely he doesn't have his facts right.

Maybe this is a good case to check the "But but somebody would have noticed" heuristic. If one of his clients died, would he even find out? Would the insurance company tell him? Does he regularly check up on his clients?
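A quick numerical check of that last point (a minimal sketch; the 1% annual mortality rate and the steady sales rate are purely illustrative assumptions, not figures from the thread):

```python
# Back-of-the-envelope check: how likely is it that *none* of a salesman's
# clients have died in 20 years?  The 1% annual mortality rate and the
# sales rate below are purely illustrative assumptions, not thread data.

annual_mortality = 0.01   # roughly plausible for clients in their 30s-60s
clients_per_year = 10     # hypothetical steady sales rate
years_selling = 20

p_no_deaths = 1.0
for years_ago in range(1, years_selling + 1):
    # the cohort sold `years_ago` years ago has had that many years to die in
    cohort_survival = (1 - annual_mortality) ** years_ago
    p_no_deaths *= cohort_survival ** clients_per_year

print(f"P(no client death in {years_selling} years) = {p_no_deaths:.2e}")
```

Even under these mild assumptions, twenty death-free years comes out wildly improbable, which is why the "he doesn't have his facts right" reading looks much more likely than the evidence being genuine.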

Comment author: PeerInfinity 08 August 2010 02:41:48AM *  3 points [-]

I disagree with your analysis, but the details of why I disagree would be spoilers.

more details:

no, he's not deliberately selecting low-risk clients. He's trying to make as many sales as possible.

and he's had lots of clients. I don't know the actual numbers, but he has won awards for how many policies he has sold.

and he seems to honestly believe that there's something special about him that makes his clients not die. he's "one of those people".

and here's the first actuarial life table I found through a quick google search: http://www.ssa.gov/OACT/STATS/table4c6.html

Comment author: PeerInfinity 08 August 2010 03:10:49AM 2 points [-]

I'm going to go ahead and post the spoiler, rot13'd

Zl thrff: Ur'f ylvat. Naq ur'f cebonoyl ylvat gb uvzfrys nf jryy, va beqre sbe gur yvr gb or zber pbaivapvat. Gung vf, qryvorengryl sbetrggvat nobhg gur pyvragf jub unir qvrq.

Vs ur unf unq a pyvragf, naq vs gurve nirentr ntr vf 30... Rnpu lrne, gur cebonovyvgl bs rnpu bs gurz fheivivat gur arkg lrne vf, jryy, yrg'f ebhaq hc gb 99%. Gung zrnaf gung gur cebonovyvgl bs nyy bs gurz fheivivat vf 0.99^a. Rira vs ur unf bayl unq 100 pyvragf, gura gur cebonovyvgl bs gurz nyy fheivivat bar lrne vf 0.99^100=0.36 Vs ur unq 200 pyvragf, gura gur cebonovyvgl bs gurz nyy fheivivat bar lrne vf 0.99^200=0.13. Naq gung'f whfg sbe bar lrne. Gur sbezhyn tbrf rkcbaragvny ntnva vs lbh pbafvqre nyy 20 lrnef. Gur cebonovyvgl bs nyy 100 pyvragf fheivivat 20 lrnef vf 0.99^100^20=1.86R-9

Naq zl npghny erfcbafr vf... qba'g ohl gur yvsr vafhenapr. Ohg qba'g gryy nalbar gung lbh guvax ur'f ylvat. (hayrff lbh pbhag guvf cbfg.) Nyfb, gur sevraq ab ybatre pbhagf nf "gehfgrq", be ng yrnfg abg gehfgrq gb or engvbany. Bu, naq srry ernyyl thvygl sbe abg svaqvat n orggre fbyhgvba, naq cbfg gb YJ gb frr vs nalbar guvaxf bs n orggre vqrn. Ohg qba'g cbfg rabhtu vasbezngvba sbe nalbar gb npghnyyl guvax bs n orggre fbyhgvba. Naq vs fbzrbar qbrf guvax bs n orggre vqrn naljnl, vtaber vg vs vg'f gbb fpnel.

Comment author: NancyLebovitz 08 August 2010 09:11:04AM 2 points [-]

Furthermore, people who buy life insurance self-select for being more prudent in general.

On the other hand, there's also selection for people who aren't expecting to live as long as the average, and this pool includes prudent people.

Anyone have information on owning life insurance and longevity?

Comment author: Clippy 08 August 2010 04:00:25AM 3 points [-]

Buying life insurance can't extend a human's life.

Comment author: Larks 09 August 2010 10:27:50PM 2 points [-]

Tell him you found his pitch very interesting and persuasive, and that you'd like to buy life insurance for a 20-year period. Then, ponder for a little while: "Actually, it can't be having the contract that keeps them alive, can it? That's just a piece of paper. It must be that the sort of people who buy it are good at staying alive! And it looks like I'm one of them; this is excellent!"

Then, you point out that as you're not going to die, you don't need life insurance, and say goodbye.

If you wanted to try to enlighten him, you might start by explicitly asking if he believed there was a causal link. But as the situation isn't really set up for honest truth-hunting, I wouldn't bother.

Comment author: RobinZ 08 August 2010 02:33:03AM 2 points [-]

With a degree of discombobulation, I imagine. I can't see any causal mechanism by which buying insurance would cause you to live longer, so unless the salesman knows something I wouldn't expect him to, he would seem to have acquired an unreliable belief. Given this, I would postpone buying any insurance from him in case this unreliable belief could have unfortunate further consequences* and I would reduce my expectation that the salesman might prove to be an exceptional rationalist.

* For example: given his superstition, he may have allotted inadequate cash reserves to cover future life insurance payments.

Comment author: SilasBarta 06 August 2010 11:20:58PM 2 points [-]

Goodhart sighting? Misunderstanding of causality sighting? Check out this recent economic analysis on Slate.com (emphasis added):

For much of the modern American era, inflation has been viewed as an evil demon to be exorcised, ideally before it even rears its head. This makes sense: Inflation robs people of their savings, and the many Americans who have lived through periods of double-digit inflation know how miserable it is. But sometimes a little bit of inflation is valuable. During the Great Depression, government policies deliberately tried to create inflation. Rising prices are a sign of rising output, something that would be welcome in the current slow-motion recovery.

(He then quotes an economist that says inflation would also prop up home values and prevent foreclosures.)

Did I get that right? Because inflation has traditionally been a sign of (caused by) rising output, you should directly cause inflation, in order to cause higher output. (Note: in order to complete the case for inflation, you arguably have to do the same thing again, but replacing inflation with output, and output with reduced unemployment.)

As usual, I'm not trying to start a political debate about whether inflation is good or bad, or what should be done to increase/decrease inflation. I'm interested in this particular way of arguing for pro-inflation policies, which seems to recognize which way the causality flows, but still argues as if it ran in the opposite direction.

Am I misunderstanding it?

LW Goodhart article

Comment author: Spurlock 06 August 2010 03:14:53PM 2 points [-]

Last night I introduced a couple of friends to Newcomb's Problem/Counterfactual Mugging, and we discussed it at some length. At some point, we somehow stumbled across the question "how do you picture Omega?"

Friend A pictures Omega as a large (~8 feet) humanoid with a deep voice and a wide stone block for a head.

When Friend B hears Omega, he imagines Darmani from Majora's mask (http://www.kasuto.net/image/officialart/majora_darmani.jpg)

And for my part, I've always pictured him as a humanoid with paper-white skin in a red jumpsuit with a cape (the cape, I think, comes from hearing him described as "flying off" after he's confounded you).

So it seemed worth asking LW just for the amusement: how do you picture Omega?

Comment author: cousin_it 06 August 2010 03:30:47PM *  6 points [-]

I've always pictured Omega like this: suddenly I'm pulled from our world and appear in a sterile white room that contains two boxes. At the same moment I somehow know the problem formulation. I open one box, take the million, and return to the world.

Comment author: WrongBot 06 August 2010 03:18:13PM 2 points [-]

I've always thought of Omega as looking something like a hydralisk--biological and alien, almost a scaled-down Lovecraftian horror.

Comment author: sixes_and_sevens 05 November 2010 12:30:40AM 1 point [-]

(Necro-thread)

I can't explain why, but I've always imagined Omega to be a big hovering red sphere with a cartoonish face, and black beholder-like eyestalks coming off him from all sides.

He may have been influenced by the Flying Spaghetti Monster.

Comment author: NancyLebovitz 06 August 2010 03:06:07PM 2 points [-]

AI development in the real world?

As a result, a lot of programmers at HFT firms spend most of their time trying to keep the software from running away. They create elaborate safeguard systems to form a walled garden around the traders but, exactly like a human trader, the programs know that they make money by being novel, doing things that other traders haven't thought of. These gatekeeper programs are therefore under constant, hectic development as new algorithms are rolled out. The development pace necessitates that they implement only the most important safeguards, which means that certain types of algorithmic behavior can easily pass through. As has been pointed out by others, these were "quotes" not "trades", and they were far away from the inside price - therefore not something the risk management software would necessarily be looking for. -- comment from gameDevNYC

I can't evaluate whether what he's saying is plausible enough for science fiction-- it's certainly that-- or likely to be true.

Comment author: [deleted] 05 August 2010 01:09:26AM 2 points [-]

"An Alien God" was recently re-posted on the stardestroyer.net "Science Logic and Morality" forum. You may find the resulting discussion interesting.

http://bbs.stardestroyer.net/viewtopic.php?f=5&t=144148&start=0

Comment author: Torben 03 August 2010 06:51:08PM 2 points [-]

In an argument with a philosopher, I used Bayesian updating as an argument. Guy's used to debating theists and was worried it wasn't bulletproof - somewhat akin to how, say, the sum of the angles of a triangle only equals 180° in Euclidean geometry.

My question: what are the fundamental assumptions of Bayes theorem in particular and probability theory in general? Are any of these assumptions immediate candidates for worry?

Comment author: cousin_it 03 August 2010 07:48:27PM *  4 points [-]

If you're talking about math, Bayes' theorem is true and that's the end of that. If you're talking about degrees of belief that real people hold - especially if you want to convince your opponent to update in a specific direction because Bayes' theorem says so - I'd advise to use another strategy. Going meta like "you must be persuaded by these arguments because blah blah blah" gives you less bang per buck than upgrading the arguments.

Comment author: jimrandomh 03 August 2010 07:28:10PM 4 points [-]

Jaynes' book PT:LoS (Probability Theory: The Logic of Science) has a good chapter on this, where he derives Bayes' theorem from simple assumptions (use of numbers to represent plausibility, consistency between paths that compute the same value, continuity, and agreement with common-sense qualitative reasoning). The assumptions are sound.

Note that the validity of Bayes' theorem is a separate question from the validity of any particular set of prior probabilities, which is on much shakier ground.

Comment author: satt 03 August 2010 09:20:27PM 3 points [-]

Bayes's theorem follows almost immediately from the ordinary definition of conditional probability, which I think is itself so reassuringly intuitive that no one who accepts the use of probabilities would worry about it (except perhaps in the corner case where the denominator's zero).
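For reference, the one-line derivation being pointed at, in standard notation (assuming P(A) > 0 and P(B) > 0):

```latex
% Bayes' theorem from the definition of conditional probability
P(A \mid B) = \frac{P(A \cap B)}{P(B)}, \qquad
P(B \mid A) = \frac{P(A \cap B)}{P(A)}
\quad\Longrightarrow\quad
P(A \mid B) = \frac{P(B \mid A)\, P(A)}{P(B)}.
```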

Comment author: xamdam 02 August 2010 07:11:07PM 2 points [-]

Wei Dai has cast some doubts on the AI-based approach

Assuming that it is unlikely we will obtain fully satisfactory answers to all of the questions before the Singularity occurs, does it really make sense to pursue an AI-based approach?

I am curious if he has "another approach" he wrote about; I am not brushed up on sl4/ob/lw prehistory.

Personally I have some interest in increasing intelligence capability at the individual level via a "tools of thought" kind of approach, with BCI in the limit. There is not much discussion of it here.

Comment author: Wei_Dai 03 August 2010 01:53:16AM 4 points [-]

No, I haven't written in any detail about any other approach. I think when I wrote that post I was mainly worried that Eliezer/SIAI wasn't thinking enough about what other approaches might be more likely to succeed than FAI. After my visit to SIAI a few months ago, I became much less worried because I saw evidence that plenty of SIAI people were thinking seriously about this question.

Comment author: gwern 02 August 2010 11:11:39AM 2 points [-]

From the Long Now department: "He Took a Polaroid Every Day, Until the Day He Died"

My comment on the Hacker News page describes my little webcam script to use with cron and (again) links to my Prediction Book page.

Comment author: humpolec 01 August 2010 07:04:50PM 2 points [-]

If you have many different (and conflicting, in that they demand undivided attention) interests: if it were possible, would copying yourself in order to pursue them more efficiently satisfy you?

One copy gets to learn drawing, another one immerses itself in mathematics & physics, etc. In time, they can grow very different.

(Is this scenario much different to you than simply having children?)

Comment author: [deleted] 01 August 2010 09:10:49PM 6 points [-]

I wouldn't have problems copying myself as long as I could merge the copies afterwards. However, it might not be possible to have a merge operation for human level systems that both preserves information and preserves sanity. E.g. if one copy started studying philosophy and radically changed its world views from the original, how do you merge this copy back into the original without losing information?

Comment author: JenniferRM 02 August 2010 04:03:02AM 3 points [-]

David Brin's novel Kiln People has this "merging back" idea, with cheap copies using clay for a lot of the material and running on a hydrogen-based metabolism, so they are very short-lived (hours to weeks, depending on $$) and have to merge back relatively soon in order to keep continuity of consciousness through their long-lived original. Lots of fascinating practical economic, ethical, social, military, and political details are explored while a noir detective story happens in the foreground.

I recommend it :-)

Comment author: humpolec 01 August 2010 09:32:23PM 2 points [-]

I agree, I don't think merge is possible in this scenario. I still see some gains, though (especially when communication is possible):

  • I (the copy that does X) am happy because I do what I wanted.
  • I (the other copies) am happy because I partly identify with the other copy (as I would be proud of my child/student?)
  • I (all copies) get results I wanted (research, creative, or even personal insights if the first copy is able to communicate them)
Comment author: [deleted] 01 August 2010 10:45:17PM 2 points [-]

If you don't have the ability to merge, would the copies get the same rights as the original? Or would the original control all the resources and the copies be treated as second-class citizens? If the copies were second-class citizens, I would probably not fork, because this would result in slavery.

If the copies do get equal rights, how do you plan to allocate resources that you had before forking such as wealth and friends? If I split the wealth down the middle, I would probably be OK with the lack of merging. However, I'm not sure how I would divide up social relationships between the copy and the original. If both the original and the copy had to reduce their financial and social capital by half, this might have a net negative utility.

If the goal is just to learn a new skill such as drawing, a more efficient solution might involve uploading yourself without copying yourself and then running the upload faster than realtime, i.e. the upload thinks it has spent a year learning a new skill but only a day has gone by in the real world. However, this trick won't work if the goal involves interacting with others, unless they are also willing to run faster than realtime.

Comment author: Peter_de_Blanc 01 August 2010 07:22:51PM 4 points [-]

That sounds (to me) better than having children, but not as good as living longer.

Comment author: KrisC 01 August 2010 09:44:26PM 2 points [-]

Sounds wonderful. Divide and conquer.

As this sounds like a computer assisted scenario, I would like the ability to append memories while sleeping. Wake up and have access to the memories of the copy. This would not necessarily include full proficiency as I suspect that muscle memory may not get copied.

Comment author: kmeme 01 August 2010 06:28:10PM *  2 points [-]

I would like feedback on my recent blog post:

http://www.kmeme.com/2010/07/singularity-is-always-steep.html

It's simplistic for this crowd, but something that bothered me for a while. When I first saw Kurzweil speak in person (GDC 2008) he of course showed both linear and log scale plots. But I always thought the log scale plots were just a convenient way to fit more on the screen, that the "real" behavior was more like the linear scale plot, building to a dramatic steep slope in the coming years.

Instead I now believe that in many cases the log plot is closer to "the real thing", or at least to how we perceive that thing. For example, in the post I talk about computational capacity. I believe the exponential increase in capacity translates into a perceived linear increase in utility. A computer twice as fast is only incrementally more useful, in terms of what applications can be run. This holds true today and will hold true in 2040 or any other year.

Therefore computational utility is incrementally increasing today and will be incrementally increasing in 2040 or any future date. It's not building to some dramatic peak.

None of this says anything against the possibility of a Singularity. If you pass the threshold where machine intelligence is possible, you pass it, whatever the perceived rate of progress at the time.
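A toy numerical illustration of the claim (a minimal sketch; the two-year doubling period and the log-of-capacity utility model are both assumptions made for illustration, not figures from the post):

```python
# If capacity grows exponentially (Moore's-law-style doubling) and perceived
# utility scales roughly with log2(capacity), then utility rises by the same
# increment every period -- there is no year in which the curve "takes off".

import math

for year in range(0, 41, 5):
    capacity = 2 ** (year / 2)        # assumed doubling every 2 years
    utility = math.log2(capacity)     # assumed perceived utility
    print(f"year {year:2d}: capacity {capacity:12,.0f}x, utility {utility:5.1f}")
```

On this model the only thing that changes over time is the absolute capacity; the per-period increment in perceived utility stays constant, which is the "no dramatic peak" point above.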

Comment author: timtyler 03 August 2010 06:13:16PM *  2 points [-]

My essay on the topic:

http://alife.co.uk/essays/the_singularity_is_nonsense/

See also:

"The Singularity" by Lyle Burkhead - see the section "Exponential functions don't have singularities!"

It's not exponential, it's sigmoidal

The Singularity Myth

Singularity Skepticism: Exposing Exponential Errors

IMO, those interested in computational limits should discuss per-kg figures.

The metric Moore's law uses is not much use really - since it would be relatively easy to make large asynchronous ICs with lots of faults - which would make a complete mess of the "law".

Comment author: ABranco 05 August 2010 04:26:00AM 3 points [-]

I would love to see an ongoing big wiki-style FAQ addressing all the received criticisms of the singularity - refuting, of course, the refutable ones and accepting the sensible ones.

A version on steroids of what this one did with Atheism.

The team would be:

  • one guy inviting and sorting out criticism and updating the website
  • an ad hoc team of responders

It seems criticism and answers have been scattered all over. There seems to be no one-stop source for that.

Comment author: steven0461 05 August 2010 04:51:36AM 2 points [-]

Here's a pretty extensive FAQ, though I have reservations about a lot of the answers.

Comment author: timtyler 05 August 2010 06:23:08AM *  3 points [-]

The authors are - or were - SI fellows, though - and the SI is a major Singularity promoter. Is that really a sensible place to go for Singularity criticism?

http://en.wikipedia.org/wiki/Technological_singularity#Criticism lists some of the objections.