Open Thread: April 2010

4 Post author: Unnamed 01 April 2010 03:21PM

An Open Thread: a place for things foolishly April, and other assorted discussions.

This thread is for the discussion of Less Wrong topics that have not appeared in recent posts. If a discussion gets unwieldy, celebrate by turning it into a top-level post.

Update: Tom McCabe has created a sub-Reddit to use for assorted discussions instead of relying on open threads.  Go there for the sub-Reddit and discussion about it, and go here to vote on the idea.

Comments (524)

Comment author: Zubon 04 April 2010 05:31:09PM 17 points [-]

Example of teachers not getting past Guessing the Teacher's Password: debating teachers on the value of pi. Via Gelman.

Comment author: Eliezer_Yudkowsky 04 April 2010 06:52:28PM 3 points [-]

AAAAAIIIIIIIIEEEEEEEE

BOOM

Comment author: Alicorn 04 April 2010 06:59:49PM 6 points [-]

Clearly, your math teacher biting powers are called for.

Comment author: CronoDAS 04 April 2010 09:24:36PM 2 points [-]

In first grade, I threw a crayon at the principal. Can I help? ;)

Comment author: JGWeissman 04 April 2010 07:08:29PM 2 points [-]

Let's not get too hasty. They still might know logarithms. ;)

Comment author: Tyrrell_McAllister 04 April 2010 06:57:24PM *  1 point [-]

It would have been even more frustrating had the protagonist not also been guessing the teacher's password. It seemed that the protagonist just had a better memory of what more authoritative teachers had said.

The protagonist was closer to being able to derive π himself, but that played no part in his argument.

Comment author: JGWeissman 04 April 2010 07:06:09PM 5 points [-]

It seemed that the protagonist just had a better memory of what more authoritative teachers had said.

The protagonist knew that pi is defined as the ratio of a circle's circumference to its diameter, and that the numbers people have memorized came from calculating that ratio.

The protagonist knew that pi is irrational, that an irrational number cannot be expressed as a ratio of integers, and that since 22 and 7 are integers, 22/7 cannot be exactly equal to pi.

The protagonist was willing to entertain the theory that 22/7 is a good enough approximation of pi to 5 digits, but updated when he saw that the result came out wrong.
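For reference, 22/7 agrees with pi only to two decimal places, so any five-digit check is bound to come out wrong; a quick sanity check (my own illustration, not from the comment):

```python
import math

approx = 22 / 7                          # 3.142857...
# 22/7 matches pi to two decimal places (3.14) but fails at five:
agrees_to_2 = round(approx, 2) == round(math.pi, 2)
agrees_to_5 = round(approx, 5) == round(math.pi, 5)
```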

Comment author: Emile 08 April 2010 03:38:44PM 1 point [-]

Quite depressing. Makes me even less likely to have my kids educated in the States. I wonder how bad Europe is on that count? Is it really better here? It can be hard to tell from the inside; correcting for the fact that most info I get is biased one way or the other leaves me with pretty wide confidence intervals.

Comment author: hegemonicon 01 April 2010 05:46:36PM *  15 points [-]

After the top level post about it, I bought a bottle of Melatonin to try. I've been taking it for 3 weeks. Here are my results.

Background: Weekdays I typically sleep for ~6 hours, with two half-hour naps in the middle of the day (one at lunch and one when I get home from work). Weekends I sleep till I feel like getting up, so I usually get around 10-11 hours.

I started with a 3mg pill, then switched to a ~1.5 mg pill (I cut them in half) after being extremely tired the next day. I take it about an hour before I go to sleep.

The first thing I noticed was that it makes falling asleep much easier. It's always been a struggle for me to fall asleep (usually I have to lie there for an hour or more), but now I'm almost always out cold within 20 minutes.

I've also noticed that I feel much less tired during the day, which was my impetus for trying it in the first place. However, I'm not sure how much of this is a result of needing less sleep, and how much is a result of me falling asleep faster and thus sleeping for longer. But it's definitely noticeable.

Getting up in the morning is not noticeably easier.

No evidence that it's habit forming. I'm currently not taking it on weekends (I found myself needing a nap even after getting 10-11 hours of sleep), and I don't notice any additional difficulty going to bed beyond what I would normally have.

I seemed to have more intense dreams the first several days taking it, but they seem to have gone back to normal (or I've gotten used to them/don't remember them).

Overall it seems to work (for me at least) exactly as gwern described, and I'd happily recommend it to anyone else who has difficulty sleeping.

Comment author: alasarod 01 April 2010 11:54:18PM 4 points [-]

I took it for at least 8 weeks, primarily on weekdays. I found after a while that I was waking up at 4am, sometimes unable to get back to sleep. I had some night sweats too. May not be a normal response, but I found that if I take it in moderation it does not have these effects.

Comment author: [deleted] 02 April 2010 02:57:29PM 3 points [-]

I wonder if you need to get back to sleep after waking up at 4 AM.

Comment author: Matt_Simpson 02 April 2010 06:37:36PM *  2 points [-]

I've been trying it as well for ~2 months (with some gaps).

Normally I have trouble falling asleep, but have no problem staying asleep, so the main reason I take melatonin is to help fall asleep.

Currently, I take two 5mg pills. Taking one doesn't have a very noticeable effect on my ability to fall asleep, but two seems to do the trick. However, I have to be sure that I give myself 7-8 hours for sleep, otherwise getting up is more difficult and I may be very groggy the next day. This can be problematic: sometimes I just have to stay up slightly later doing homework, and because I can't take the melatonin then, I end up barely getting any sleep at all.

I haven't noticed any habit forming effects, though some slight effects might be welcome if it helped me to remember to take the supplement every night ;)

edit: it's actually two 3mg pills, not 5mg. I googled the brand Walmart carries, since that's where I bought mine, and it said 5mg on the bottle. Now that I'm home, I see that my bottle is actually 3mg.

Comment author: Liron 02 April 2010 12:12:21AM 2 points [-]

I also tried it out after reading that LW post. At first it was fantastic at getting me to fall asleep within 30 minutes (I'm a good sleeper; it would only take me 30 minutes because I was going to bed before I was tired, in order to wake up earlier), and I would wake up feeling alert.

Now unfortunately I wake up feeling the same and basically have stopped noticing its effects. The only time I take it is when I want to go to sleep and I'm not tired.

Also: During the initial 1-2 week period of effectiveness, I had intense and vivid and stressful dreams (or maybe I simply remembered my normal dreams better).

Comment author: Jonathan_Graehl 01 April 2010 07:29:46PM 2 points [-]

The easily available product for me is a blend of 3mg melatonin/25mg theanine. 25mg is a heavy tea-drinker's dose, and I see no reason to consume theanine at all (even dividing the pills in half), so I haven't bought any.

Does anyone have some evidence recommending for/against taking theanine? In my view, the health benefits of tea drinking are negligible, and theanine is just one of many compounds in tea.

Comment author: JenniferRM 02 April 2010 05:58:43AM *  5 points [-]

Theanine may be "one of many compounds found in tea", but, on the recommendation of an acquaintance, I tried taking theanine itself as an experiment once (from memory, maybe 100mg?). First I read up on it a little; it sounded reasonably safe and possibly beneficial, and I drank green tea anyway, so it seemed "cautiously acceptable" to see what it was like in isolation. Basically I was wondering if it would help me relax, focus, and/or learn better.

The result was a very dramatic manic high that left me incapable of intellectually directed mental focus (as opposed to focus on whatever crazy thing popped into my head and flittered away 30 minutes later) for something like 35 hours. Also, I couldn't sleep during this period.

In retrospect I found it to be somewhat scary and it re-confirmed my general impression of the bulk of "natural" supplements. Specifically, it confirmed my working theory that the lack of study and regulation of supplements leads to a market full of many options that range from worthless placebo to dangerously dramatic, with tragically few things in the happy middle ground of safe efficacy.

Melatonin is one of the few supplements that I don't put in this category; however, in that case I use less than the "standard" 3mg dose. When I notice my sleep cycle drifting unacceptably, I will spend a night or two taking 1.5mg of melatonin (using a pill cutter to chop 3mg pills in half) to help me fall asleep, and then go back to autopilot. The basis for this regime is that my mother worked in a hospital setting, and 1.5mg was what various doctors recommended/authorized for patients to help them sleep.

There was a melatonin fad in the late 1990s(?) when older people were taking melatonin as a "youth pill" because endogenous production declines with age. I know of no good studies supporting that use, but around that time was when the results about sleep came out, showing melatonin to be effective even for jet lag as a way to reset one's internal clock swiftly and safely.

Comment author: Kevin 02 April 2010 06:15:44AM *  2 points [-]

That reaction sounds rare. Do you think 20 cups of tea would have triggered a similar reaction in you?

There is huge variation based on dosage for all things you can ingest: food, drug, supplement, and "other". Check out the horrors of eating a whole bottle of nutmeg. http://www.erowid.org/experiences/subs/exp_Nutmeg.shtml

Comment author: JamesAndrix 02 April 2010 05:28:19AM 9 points [-]

http://www4.gsb.columbia.edu/ideasatwork/feature/735403/Powerful+Lies

The researchers found that subjects assigned leadership roles were buffered from the negative effects of lying. Across all measures, the high-power liars — the leaders — resembled truthtellers, showing no evidence of cortisol reactivity (which signals stress), cognitive impairment, or feeling bad. In contrast, low-power liars — the subordinates — showed the usual signs of stress and slower reaction times. "Having power essentially buffered the powerful liars from feeling the bad effects of lying, from responding in any negative way or giving nonverbal cues that low-power liars tended to reveal," Carney explains.

Comment author: Baughn 02 April 2010 07:37:15PM 24 points [-]

It doesn't seem like it's ever going to be mentioned otherwise, so I thought I should tell you this:

Someone called "LessWrong" is writing a story, called "Harry Potter and the Methods of Rationality". It's just about what you'd expect: absolutely full of ideas from LW.com. I know it's not the usual fare for this site, but I'm sure a lot of you have enjoyed Eliezer's fiction as fiction; you'll probably like this as well.

Who knows, maybe the author will even decide to decloak and tell us who to thank?

Comment author: JGWeissman 02 April 2010 08:52:23PM *  9 points [-]

My fellow Earthicans, as I discuss in my book Earth In The Balance and the much more popular Harry Potter And The Balance Of Earth, we need to defend our planet against pollution. As well as dark wizards.

-- Al Gore on Futurama

Comment author: Unnamed 02 April 2010 11:49:56PM *  3 points [-]

Harry Potter as a boy genius smart-aleck aspiring rationalist works surprisingly well. And the idea of extending the pull of rationalism a bit beyond its standard sci-fi hunting grounds using Harry Potter fanfiction is brilliant.

Comment author: ata 06 April 2010 05:36:59AM *  6 points [-]

Magnificent. (I've sent it to some of my friends, most of whom are thoroughly enjoying it too; many of them are into Harry Potter but not advanced rationalism, so maybe it will turn some of them on to the MAGIC OF RATIONALITY!)

Edit: Sequel idea which probably only works as a title: "Harry Potter and the Prisoner's Dilemma of Azkaban". Ohoho!

Edit 2: Also on my wishlist: Potter-Evans-Verres Puppet Pals.

Comment author: gwern 07 April 2010 09:25:41PM *  1 point [-]

"Harry Potter and the Prisoner's Dilemma of Azkaban"

I could see that working as a prison mechanism, actually. Azkaban would be an ironic prison, akin to Dante's contrapasso. (The book would be an extended treatise on decision theory.)

The reward for both inmates cooperating is escape from Azkaban; the punishment, really horrific torture. The inmates are trapped as long as they are conniving cheating greedy bastards - but no longer.

(The prison could be like a maze, maybe, with all sorts of different cooperation problems - magic means never having to apologize for Omega.)

Comment author: pengvado 07 April 2010 09:35:47PM 1 point [-]

So if one prisoner cooperates and the other defects, then the defector goes free and the cooperator doesn't? That doesn't sound very effective for keeping conniving cheating greedy bastards in prison.

Comment author: Alicorn 02 April 2010 08:20:09PM *  6 points [-]

I'm 98% confident it's Eliezer. He's been taunting us about a piece of fanfiction under a different name on fanfiction.net for some time. I guess this means I don't have to bribe him with mashed potatoes to get the URL after all.

Edit: Apparently, instead, I will have to bribe him with mashed potatoes for spoilers. Goddamn WIPs.

Comment author: Eliezer_Yudkowsky 02 April 2010 10:55:56PM 13 points [-]

Yeah, I don't think I can plausibly deny responsibility for this one.

Googling either (rationality + fanfiction) or even (rational + fanfiction) gets you there as the first hit, just so ya know...

Also, clicking on the Sitemeter counter and looking at "referrals" would probably have shown you a clickthrough from a profile called "LessWrong" on fanfiction.net.

Want to know the rest of the plot? Just guess what the last sentence of the current version is about before I post the next part on April 3rd. Feel free to post guesses here rather than on FF.net, since a flood of LW.com reviewers would probably sound rather strange to them.

Comment author: JGWeissman 03 April 2010 06:02:17AM 17 points [-]

"Oh, dear. This has never happened before..."

Voldemort's Killing Curse had an epiphenomenal effect: Harry is a p-zombie. ;)

Comment author: Unnamed 04 April 2010 03:38:20AM 8 points [-]

I don't like where this is headed - Harry isn't provably friendly and they're setting him loose in the wizarding world!

Comment author: Mass_Driver 04 April 2010 06:19:45AM 7 points [-]

Also, there is a sharply limited supply of people who speak Japanese, Hebrew, English, math, rationality, and fiction all at once. If it wasn't you, it was someone making a concerted effort to impersonate you.

Comment author: CronoDAS 02 April 2010 11:52:23PM 5 points [-]

Do I have to guess right? ;)

Comment author: Kevin 03 April 2010 03:29:39AM *  4 points [-]

It gets a strong vote of approval from my girlfriend. She made it about halfway through Three Worlds Collide without finishing, for comparison. We'll see if I can get my parents to read this one...

Edit: And I think this is great. Looking forward to when Harry crosses over to the universe of the Ultimate Meta Mega Crossover.

Comment author: Kevin 03 April 2010 09:16:10PM 3 points [-]

Let's make that a Prediction. Harry becomes the ultimate Dark Lord by destroying the universe and escaping to the Metametaverse of the Ultimate Meta Mega Crossover.

Comment author: Jack 15 April 2010 05:33:50AM 3 points [-]

This Harry is so much like Ender Wiggin.

Comment author: Cyan 15 April 2010 06:10:20AM 2 points [-]

Really? I picture him looking like a younger version of this.

Comment author: Jack 15 April 2010 06:42:40AM 9 points [-]

This Harry and Ender are both terrified of becoming monsters. Both have a killer instinct. Both are much smarter than most of their peers. Ender's two sides are reflected in the monstrous Peter and the loving Valentine. The two sides of Potter-Evans-Verres are reflected in Draco and Hermione. The environments are of course very similar: both are in very abnormal boarding schools teaching them things regular kids don't learn.

Oh, and now the Defense Against the Dark Arts prof is going to start forming "armies" for practicing what is now called "Battle Magic" (like the Battle Room!).

And the last chapter's disclaimer?

The enemy's gate is Rowling.

If the parallels aren't intentional I'm going insane.

Comment author: NancyLebovitz 15 April 2010 02:48:57PM 1 point [-]

And going back a few chapters, I'm betting that what Harry saw as wrong with himself is hair-trigger rage.

Comment author: Alicorn 02 April 2010 11:14:09PM 3 points [-]

There is a reason I didn't look for it. It isn't done. Having found it anyway via the link above, of course I read it, because I have almost no self-control - but I didn't look for it!

Are you sure you wouldn't rather have the mashed potatoes? There's a sack of potatoes in the pantry. I could mash them. There's also a cheesecake in the fridge... I was thinking of making soup... should I continue to list food? Is this getting anywhere?

Comment author: ShardPhoenix 04 April 2010 08:23:05AM *  2 points [-]

This is a lot of fun so far, though I think McGonagall was in some ways more in the right than Harry in chapter 6. Also, I kind of feel like Draco's behavior here is a bit unfair to the wizarding world as portrayed in canon - the wizarding world is clearly not at all medieval in many ways (especially in the treatment of women, where the behavior we actually see is essentially modern), so I'm not sure why it should necessarily be so in that way. Regardless of my nitpicking, it's a brilliant fanfic, and it's nice to see muggle-world ideas enter the wizarding world (which always seemed like it should have happened already).

Comment author: CronoDAS 03 April 2010 02:14:46AM 2 points [-]

You also have the approval of several Tropers, only one of whom is me.

Comment author: Cyan 03 April 2010 02:33:53AM 4 points [-]

Holy fucking shit that was awesome.

Comment author: Liron 05 April 2010 01:36:34AM 1 point [-]

I normally read within {nonfiction} U {authors' other works} but I had such a blast with Methods of Rationality that I might try some more fiction.

Comment author: MBlume 05 April 2010 03:58:58AM 5 points [-]

This story reminded me distinctly of Harry Potter and the Nightmares of Futures Past -- you might enjoy that one. Harry works until he's 30 to kill Voldemort, and by the time he succeeds, everyone he loves is dead. He comes up with a time travel spell that breaks if the thing being transported has any mass, so he kills himself, and lets his soul do the travelling. 30-year-old Harry's soul merges with 11-year-old Harry, and a very brilliant, very prepared, very powerful, and deeply disturbed young wizard enters Hogwarts.

Comment author: Kevin 05 April 2010 04:27:58AM *  3 points [-]

I like all of Eliezer's fiction... if you want more like this, see the pseudo-sequel, http://lesswrong.com/lw/18g/the_finale_of_the_ultimate_meta_mega_crossover/ It is too insane of a story to recommend to most people, but assuming you've read Eliezer's non-fiction, you can jump right in.

Otherwise, just about all of Eliezer's fiction is worth reading; Three Worlds Collide is his best work of fiction.

Comment author: Baughn 02 April 2010 08:36:51PM 3 points [-]

No, no, it's not Eliezer.

It's an alternate personality, which acts exactly the same and shares memories, that merely believes it's Eliezer.

Comment author: Kevin 03 April 2010 02:43:32AM *  3 points [-]

Sounds like an Eliezer to me.

Comment author: Larks 03 April 2010 01:28:37PM 3 points [-]

like an Eliezer, yes.

Comment author: Matt_Simpson 04 April 2010 06:34:40AM 2 points [-]

Edit: Apparently, instead, I will have to bribe him with mashed potatoes for spoilers. Goddamn WIPs.

I know, right? This would have been a wonderful story for me to read 10 years ago or so, and not just because now I'm having difficulty explaining to my girlfriend why I spent friday night reading a Harry Potter fanfic instead of calling her...

Comment author: Document 18 April 2010 06:45:46AM 1 point [-]

For the record, it's currently the first Google autocomplete result for "harry potter and the me", with apparently multiple pages of forum posts and such about it.

Comment author: LucasSloan 02 April 2010 10:43:47PM *  1 point [-]

Fb, sebz gur cbvag bs ivrj bs na Nygreangr-Uvfgbel, V nffhzr gur CBQ vf Yvyyl tvivat va naq svkvat Crghavn'f jrvtug ceboyrz. Gung jbhyq graq gb vzcebir Crghavn'f ivrj bs ure zntvpny eryngvirf, naq V nffhzr gur ohggresyvrf nera'g rabhtu gb fnir Wnzrf naq Yvyyl sebz Ibyqrzbeg. Tvira gur infgyl vapernfrq vagryyvtrapr bs Uneel, V nffhzr ur vf abg trargvpnyyl gur fnzr puvyq jr fnj va gur obbxf, nygubhtu vzcebirq puvyqubbq ahgevgvba pbhyq nyfb or n snpgbe.

Comment author: Vladimir_Nesov 02 April 2010 09:02:18PM 1 point [-]

The probability of magic should make any effort at testing the hypothesis unjustified. That theories must be tested no matter how improbable they are is a generally incorrect dogma. (One should distinguish improbable from silly, though.)

Comment author: Eliezer_Yudkowsky 04 April 2010 01:31:13AM 10 points [-]

I think you underestimate the real-world value of Just Testing It. If I got a mysterious letter in the mail and Mom told me I was a wizard and there was a simple way to test it, I'd test it. Of course I know even better than rationalist!Harry all the reasons that can't possibly be how the ontologically lowest level of reality works, but if it's cheap to run the test, why not just say "Screw it" and test it anyway?

Harry's decision to try going out back and calling for an owl is completely defensible. You just never have to apologize for doing a quick, cheap experimental test, pretty much ever, but especially when people have started arguing about it and emotions are running high. Start flipping a coin to test if you have psychic powers, snap your fingers to see if you can make a banana, whatever. Just be ready to accept the result.
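The coin-flip test above can be sketched concretely (my own illustration, with all names and the seeded data invented for the sketch): guess a run of flips, then ask how surprising the hit count would be under the null hypothesis of no psychic powers.

```python
import math
import random

def binomial_p_at_least(k, n, p=0.5):
    """P(X >= k) for X ~ Binomial(n, p): chance of at least k hits by luck."""
    return sum(math.comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

random.seed(0)  # reproducible fake experiment
flips = [random.randint(0, 1) for _ in range(100)]
guesses = [random.randint(0, 1) for _ in range(100)]
hits = sum(f == g for f, g in zip(flips, guesses))

# With no psychic powers, expect roughly 50 hits and an unremarkable
# p-value; a tiny p-value over repeated runs is the "accept the result" case.
p_value = binomial_p_at_least(hits, 100)
```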

Comment author: Vladimir_Nesov 04 April 2010 08:13:21AM *  2 points [-]

You just never have to apologize for doing a quick, cheap experimental test, pretty much ever

This (injunction?) is equivalent to ascribing much higher probability to the hypothesis (magic) than it deserves. It might be a good injunction, but we should realize that, at the same time, it asserts the inability of people to correctly judge the impossibility of such hypotheses. That is, this rule suggests that the probability of some hypothesis that managed to make it into your conscious thought isn't (shouldn't be believed to be) 10^-[gazillion], even if you believe it is 10^-[gazillion].

Comment author: bogdanb 09 April 2010 03:29:06PM *  2 points [-]

I guess it depends a bit on how you came to consider the proposition to be tested, but I’m not sure how to formalize it.

I wouldn’t waste a moment’s attention in general to some random person proposing anything like this. But if someone like my mother or father, or a few of my close friends, suddenly came with a story like this (which, mark you, is quite different from the usual silliness), I would spend a couple of minutes doing a test before calling a psychiatrist. (Though I’d check the calendar first, in case it’s April 1st.)

Especially if I were about that age. I was nowhere near as bright and well-read as rationalist!Harry at that age (nor am I now). I read a lot, though, and I had a pretty clear idea of the distinction between fact and fiction, but I remember I just didn't have enough practical experience to classify new things as likely true or false at a glance.

I remember at one time (between 8 and 11 years old) I was pondering the feasibility of traveling to Florida (I grew up in Eastern Europe) to check if Jules Verne’s “From the Earth to the Moon” was real or not, by asking the locals and looking for remains of the big gun. It wasn’t an easy test, so I concluded it wasn’t worth it. However, I also remember I did check if I had psychic powers by trying to guess cards and the like; that took less than two minutes.

Comment author: RobinZ 04 April 2010 01:36:00PM 2 points [-]

The probability that you have no grasp on the situation is high enough to justify an easy, simple, harmless test.

And I'd appreciate it if spoilers for the story were ROT13'd or something - I haven't read it.

Comment author: Kevin 05 April 2010 02:07:19AM 3 points [-]

You mean the plot point that Harry Potter tested the Magic hypothesis? I don't think most plot points in the introductions of stories really count as spoilers.

Comment author: CronoDAS 05 April 2010 02:15:30AM 1 point [-]

Yeah, that's not a spoiler any more than "Obi-Wan Kenobi is a Jedi" is a spoiler.

Comment author: DonGeddis 05 April 2010 08:14:43PM 9 points [-]

A "Jedi"? Obi-Wan Kenobi?

I wonder if you mean old Ben Kenobi. I don't know anyone named Obi-Wan, but old Ben lives out beyond the dune sea. He's kind of a strange old hermit.

Comment author: Baughn 02 April 2010 09:11:33PM 2 points [-]

It was strongly implied that some element of Harry's mind had skewed that prior dramatically. Perhaps his horcrux, perhaps infant memories, but either way it wasn't as you'd expect. Even for an eleven-year-old.

Comment author: Alicorn 02 April 2010 09:08:27PM *  0 points [-]

You have not taken into account that testing magical hypotheses may be categorized as "play" and pay its rent on time and effort accordingly.

Comment author: Vladimir_Nesov 02 April 2010 09:36:15PM 2 points [-]

Then this activity shouldn't be rationalized as being the right decision specifically for the reasons associated with the topic of rationality. For example, the father dismissing the suggestion to test the hypothesis is correct, given that the mere activity of testing it doesn't present him with valuable experience.

You've just taken the conclusion presented in the story and written above it a clever explanation that contradicts the spirit of the story.

Comment author: Matt_Simpson 03 April 2010 05:58:38AM 1 point [-]

One of the goals was to get his parents to stop fighting over whether or not magic was real.

Comment author: Vladimir_Nesov 03 April 2010 09:38:54AM *  1 point [-]

How would it work? As the expected outcome is that no magic is real, we'd need to convince the believer (the mother) to disbelieve. An experiment is usually an ineffective means to that end; rather, we'd need to mend her epistemology.

Comment author: Matt_Simpson 03 April 2010 09:02:52PM 1 point [-]

Well, Harry did spend some time making sure that this experiment would convince either of his parents if it went the appropriate way, though he had his misgivings. As a child who isn't respected by his parents, what better options does he have to stop the fight? (serious question)

Comment author: PeerInfinity 01 April 2010 05:31:17PM *  7 points [-]

I recently found something that may be of interest to LW readers:

This post at the Lifeboat Foundation blog announces two tools for testing your "Risk Intelligence":

The Risk Intelligence Game, which consists of fifty statements about science, history, geography, and so on, and your task is to say how likely you think it is that each of these statements is true. Then it calculates your risk intelligence quotient (RQ) on the basis of your estimates.

The Prediction Game, which provides you with a bunch of statements, and your task is to say how likely you think it is that each one is true. The difference is that these statements refer not to known facts, but to future events. Unlike the first test, nobody knows whether these statements are true or false yet. For most of them, we won’t know until the end of the year 2010.
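The post doesn't say how the RQ score is computed; a standard way to score probability estimates of this kind is the Brier score, so here is a minimal sketch under that assumption (the function name and formula are mine, not necessarily the site's actual scoring rule):

```python
def brier_score(forecasts):
    """Mean squared error between assigned probabilities and outcomes.
    forecasts: list of (probability assigned to 'true', actual outcome 0/1)."""
    return sum((p - outcome) ** 2 for p, outcome in forecasts) / len(forecasts)

# Confident and right scores 0.0 (best); hedging everything at 50%
# scores 0.25; confident and wrong approaches 1.0 (worst).
```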

Comment author: Will_Newsome 01 April 2010 09:56:33PM 3 points [-]

An annoying thing about the RQ test (rot13'd):

Jura V gbbx gur ED grfg gurer jnf n flfgrzngvp ovnf gbjneqf jung jbhyq pbzzbayl or pnyyrq vagrerfgvat snpgf orvat zber cebonoyr naq zhaqnar/obevat snpgf orvat yrff cebonoyr. fgrira0461 nyfb abgvprq guvf. Guvf jnf nobhg 1 zbagu ntb. ebg13'q fb nf abg gb shegure ovnf crbcyrf' erfhygf.

Comment author: RichardKennaway 07 April 2010 07:39:04PM *  6 points [-]

A couple of articles on the benefits of believing in free will:

Vohs and Schooler, "The Value of Believing in Free Will"

Baumeister et al., "Prosocial Benefits of Feeling Free"

The gist of both is that groups of people experimentally exposed to statements in favour of either free will or determinism[1] acted, on average, more ethically after the free will statements than the determinism statements.

References from a Sci. Am. article.

[1] Cough.

ETA: This is also relevant.

Comment author: Jack 07 April 2010 07:46:51PM *  3 points [-]

Cool. Since a handful of studies suggest a narrow majority believe moral responsibility and determinism to be incompatible, this shouldn't actually be that surprising. I want to know how people act after being exposed to statements in favor of compatibilism.

Comment author: Wei_Dai 06 April 2010 11:16:31AM 6 points [-]

I've written a reply to Bayesian Flame, one of cousin_it's posts from last year. It's titled Frequentist Magic vs. Bayesian Magic. I'd appreciate some review and comments before I post it here. Mainly I'm concerned about whether I've correctly captured the spirit of frequentism, and whether I've treated it fairly.

BTW, I wish there were a "public drafts" feature on LessWrong, where I could make a draft accessible to others by URL without it showing up in recent posts, so I don't have to post a draft elsewhere to get feedback before I officially publish it.

Comment author: Vladimir_Nesov 06 April 2010 11:32:08AM *  4 points [-]

Why does the universe that we live in look like a giant computer? What about uncomputable physics?

Consider "syntactic preference" as an order on the agent's strategies (externally observable possible behaviors, but in the mathematical sense, independently of what we can actually arrange to observe), where the agent is software running on an ordinary computer. This is "ontological boxing", a way of abstracting away any unknown physics. Then, this syntactic order can be given an interpretation, as in logic/model theory, for example by placing the "agent program" in an environment of all possible "world programs", and restating the order on the agent's possible strategies in terms of possible outcomes for the world programs (as an order on sets of outcomes for all world programs), depending on the agent.

This way, we first factor the real world out of the problem, leaving only the syntactic backbone of preference, and then reintroduce a controllable version of the world, in the form of any convenient mathematical structure: an interpretation of syntactic preference. The question of whether the model world is "actually the real world", and whether it reflects all possible features of the real world, is sidestepped.
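A toy formalization of this setup (my own illustration; all names here are invented for the sketch, not taken from the comment): a "strategy" is the agent's observable I/O behavior, modeled as a policy from observation to action, and "world programs" consume a policy and produce an outcome. The interpreted preference is then an order on the tuples of outcomes the policy induces across all world programs.

```python
def world_a(policy):
    # A toy world program: its outcome depends only on the agent's I/O behavior.
    return policy("sunny")

def world_b(policy):
    return policy("rainy")

WORLDS = [world_a, world_b]

def interpret(policy):
    """Map a strategy to its outcomes across all (toy) world programs."""
    return tuple(world(policy) for world in WORLDS)

def interpreted_prefer(policy1, policy2):
    """The interpreted order on strategies: here, prefer the policy
    whose total outcome across world programs is at least as large."""
    return sum(interpret(policy1)) >= sum(interpret(policy2))

always_act = lambda observation: 1
never_act = lambda observation: 0
```

The point of the sketch is that `interpreted_prefer` inspects only the policies' I/O, never any "real world"; swapping in a different `WORLDS` list reinterprets the same syntactic backbone.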

Comment author: Wei_Dai 06 April 2010 12:59:35PM *  2 points [-]

Thanks (and upvoted) for this explanation of your current approach. I think it's definitely worth exploring, but I currently see at least two major problems.

The first is that my preferences seem to have a logical dependency on the ultimate nature of reality. For example, I currently think reality is just "all possible mathematical structures", but I don't know what my preferences are until I resolve what "all possible mathematical structures" means exactly. What would happen if you tried to use your idea to extract my preferences before I resolve that question?

The second is that I don't see how you plan to differentiate, within "syntactic preference", those preferences that are true preferences from those that are caused by computational limitations and/or hardware/software errors. Internally, the agent is computing the optimal strategy (as best it can) from a preference that's stated in terms of "the real world" and maybe also in terms of subjective anticipation. If we could somehow translate those preferences directly into preferences over mathematical structures, we would be able to bypass those computational limitations and errors without having to single them out.

Comment author: Vladimir_Nesov 06 April 2010 03:35:50PM *  4 points [-]

The first is that my preferences seem to have a logical dependency on the ultimate nature of reality.

An important principle of FAI design to remember here is "be lazy!". For any problem that people would want to solve, where possible, FAI design should redirect that problem to the FAI, instead of actually solving it in order to construct the FAI.

Here, you, as a human, may be interested in "nature of reality", but this is not a problem to be solved before the construction of FAI. Instead, the FAI should pursue this problem in the same sense you would.

Syntactic preference is meant to capture this sameness of pursuits, without understanding of what these pursuits are about. Instead of wanting to do the same thing with the world as you would want to, the FAI having the same syntactic preference wants to perform the same actions as you would want to. The difference is that syntactic preference refers to actions (I/O), not to the world. But the outcome is exactly the same, if you manage to represent your preference in terms of your I/O.

I don't know what my preferences are until I resolve what "all possible mathematical structures" means exactly

You may still know the process of discovery that you want to follow while doing what you call getting to know your own preference. That process of discovery gives a definition of preference. We don't need to actually compute preference in some predefined format to solve the conceptual problem of defining preference. We only need to define a process that determines preference.

The second is that I don't see how you plan to differentiate, within "syntactic preference", between those parts that are true preferences and those that are caused by computational limitations and/or hardware/software errors.

This issue is actually the last conceptual milestone I've reached on this problem, just a few days ago. The trouble is in how the agent would reason about the possibility of corruption of its own hardware. The answer is that human preference is to a large extent concerned with consequentialist reasoning about the world, so human preference can be interpreted as modeling the environment, including the agent's hardware. This is an informal statement, referring to the real world, but the behavior supporting this statement is also determined by formal syntactic preference that doesn't refer to the real world. Thus, just mathematically implementing human preference is enough to cause the agent to worry about how its hardware is doing (it isn't in any sense formally defined as its own hardware, but what happens in the agent's formal mind can be interpreted as recognizing the hardware's instrumental utility). In particular, this solves the issues of possibly morally harmful impact of the FAI's computation (e.g. simulating tortured people and then deleting them from memory, etc.), and of upgrading the FAI beyond the initial hardware (so that it can safely discard the old hardware).

Comment author: Wei_Dai 06 April 2010 10:05:35PM *  2 points [-]

Once we implement this kind of FAI, how will we be better off than we are today? It seems like the FAI will have just built exact simulations of us inside itself (who, in order to work out their preferences, will build another FAI, and so on). I'm probably missing something important in your ideas, but it currently seems a lot like passing the recursive buck.

ETA: I'll keep trying to figure out what piece of the puzzle I might be missing. In the meantime, feel free to take the option of writing up your ideas systematically as a post instead of continuing this discussion (which doesn't seem to be followed by many people anyway).

Comment author: Vladimir_Nesov 06 April 2010 10:40:41PM *  2 points [-]

FAI doesn't do what you do; it optimizes its strategy according to preference. It's more able than a human to form better strategies according to a given preference, and even failing that it still has to be able to avoid value drift (as a minimum requirement).

Preference is never seen completely; there is always loads of logical uncertainty about it. The point of creating a FAI is in fixing the preference so that it stops drifting, so that the problem being solved is held fixed, even though solving it will take the rest of eternity; and in creating a competitive preference-optimizing agent that ensures the preference fares OK against possible threats, including different-preference agents or a value-drifted humanity.

Preference isn't defined by an agent's strategy, so copying a human without some kind of self-reflection I don't understand is pretty pointless. Since I never described a way of extracting preference from a human (and hence defining it for a FAI), I'm not sure where you see the regress in the process of defining preference.

FAI is not built without an exact and complete definition of preference. The uncertainty about preference can only be logical, in what it means/implies. (At least when we are talking about syntactic preference, where the rest of the world is necessarily screened off.)

Comment author: andreas 07 April 2010 01:22:45AM *  2 points [-]

Since I never described a way of extracting preference from a human (and hence defining it for a FAI), I'm not sure where do you see the regress in the process of defining preference.

Reading your previous post in this thread, I felt like I was missing something and I could have asked the question Wei Dai asked ("Once we implement this kind of FAI, how will we be better off than we are today?"). You did not explicitly describe a way of extracting preference from a human, but phrases like "if you manage to represent your preference in terms of your I/O" made it seem like capturing strategy was what you had in mind.

I now understand you as talking only about what kind of object preference is (an I/O map) and about how this kind of object can contain certain preferences that we worry might be lost (like considerations of faulty hardware). You have not said anything about what kind of static analysis would take you from an agent's s̶t̶r̶a̶t̶e̶g̶y̶ program to an agent's preference.

Comment author: Vladimir_Nesov 07 April 2010 08:13:37AM *  3 points [-]

I now understand you as talking only about what kind of object preference is (an I/O map) and about how this kind of object can contain certain preferences that we worry might be lost (like considerations of faulty hardware).

Correct. Note that "strategy" is a pretty standard term, while "I/O map" sounds ambiguous, though it emphasizes that everything except the behavior at I/O is disregarded.

You have not said anything about what kind of static analysis would take you from an agent's strategy to an agent's preference.

An agent is more than its strategy: strategy is only external behavior, the normal form of the algorithm implemented in the agent. The same strategy can be implemented by many different programs. I strongly suspect that it takes more than a strategy to define preference, and that introspective properties are important (how the behavior is computed, as opposed to just what the resulting behavior is). It is sufficient for preference, once it is defined, to talk about strategies and disregard how they could be computed; but to define (extract) a preference, a single strategy may be insufficient, and it may be necessary to look at how the reference agent (e.g. a human) works on the inside. Besides, the agent is never given as its strategy; it is given as its source code, which normalizes to that strategy, and computing the strategy may be tough (and pointless).

Comment author: Wei_Dai 22 April 2010 10:58:22AM 2 points [-]

After reading Nesov's latest posts on the subject, I think I better understand what he is talking about now. But I still don't get why Nesov seems confident that this is the right approach, as opposed to a possible one that is worth looking into.

You [Nesov] have not said anything about what kind of static analysis would take you from an agent's program to an agent's [syntactic] preference.

Do we have at least an outline of how such an analysis would work? If not, why do we think that working out such an analysis would be any easier than, say, trying to state ourselves what our "semantic" preferences are?

Comment author: Vladimir_Nesov 22 April 2010 02:04:21PM *  2 points [-]

But I still don't get why Nesov seems confident that this is the right approach, as opposed to a possible one that is worth looking into.

What other approaches do you refer to? This is just the direction my own research has taken. I'm not confident it will lead anywhere, but it's the best road I know about.

Do we have at least an outline of how such an analysis would work? If not, why do we think that working out such an analysis would be any easier than, say, trying to state ourselves what our "semantic" preferences are?

I have some ideas, though too vague to usefully share (I wrote about a related idea on the SIAI decision theory list, replying to Drescher's bounded Newcomb variant, where a dependence on strategy is restored from a constant syntactic expression in terms of source code). For "semantic preference", we have the ontology problem, which is a complete show-stopper. (Though as I wrote before, interpretations of syntactic preference in terms of formal "possible worlds" -- now having nothing to do with the "real world" -- are a useful tool, and it's the topic of the next blog post.)

At this point, syntactic preference (1) solves the ontology problem, (2) gives focus to investigation of what kind of mathematical structure could represent preference (a strategy is a well-understood mathematical structure, and syntactic preference is something that allows computing a strategy, with better strategies resulting from more computation), and (3) gives a more technical formulation of the preference extraction problem, so that we can think about it more clearly. I don't know of another effort towards clarifying/developing preference theory (that reaches even this meager level of clarity).

If not, why do we think that working out such an analysis would be any easier than, say, trying to state ourselves what our "semantic" preferences are?

Returning to this point, there are two show-stopping problems: first, as I pointed out above, there is the ontology problem: even if humans were able to write out their preference, the ontology problem makes the product of such an effort rather useless; second, we do know that we can't write out our preference manually. Figuring out an algorithmic trick for extracting it from human minds automatically is not out of the question, hence worth pursuing.

P.S. These are important questions, and I welcome this kind of discussion about general sanity of what I'm doing or claiming; I only saw this comment because I'm subscribed to your LW comments.

Comment author: JGWeissman 06 April 2010 04:51:44PM 2 points [-]

You can do better than the frequentist approach without using the "magic" universal prior. You can just use a prior that represents initial ignorance of the frequency at which the machine produces head-biased and tail-biased coins (dP(f) = 1·df, a uniform density over f). If you want to look for repeating patterns, you can assign probability (1/2)(1/2^n) to the theory that the machine produces each type of coin at a frequency depending on the last n coins it produced. This requires treating a probability as a strength of belief, and not the frequency of anything, which is what (as I understand it) frequentists are not willing to do.

Note that the universal prior, if you can pull it off, is still better than what I described. The repeating-pattern-seeking prior will not notice, for example, if the machine makes head-biased coins on prime-numbered trials but tail-biased coins on composite-numbered trials. This is because it implicitly assigns probability 0 to that type of machine, and a probability of 0 takes infinite evidence to update away from.
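The uniform-prior update described above can be sketched concretely. The snippet below is my own illustration (the function name and counts are mine, not from the thread): with the prior dP(f) = df, i.e. Beta(1, 1), the posterior over the machine's head-bias frequency after the observed counts is a Beta distribution, and the predictive probability that the next coin is head-biased is Laplace's rule of succession.

```python
from fractions import Fraction

def predictive_head_biased(n_head_biased, n_tail_biased):
    """Predictive probability that the machine's next coin is head-biased,
    starting from the uniform prior dP(f) = df over the machine's frequency f.
    A uniform prior is Beta(1, 1); after the observed counts the posterior is
    Beta(n_head_biased + 1, n_tail_biased + 1), whose mean is Laplace's rule
    of succession."""
    return Fraction(n_head_biased + 1, n_head_biased + n_tail_biased + 2)

# Before any observations, belief is exactly 1/2, and it shifts smoothly with
# evidence rather than snapping to the frequentist estimate n_h / (n_h + n_t).
```

For example, after seeing 3 head-biased and 1 tail-biased coin, the predictive probability is 4/6 = 2/3, whereas the bare frequentist point estimate would be 3/4.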

Comment author: JGWeissman 06 April 2010 04:00:08PM *  1 point [-]

BTW, I wish there were a "public drafts" feature on LessWrong, where I could make a draft accessible to others by URL without it showing up in recent posts, so I wouldn't have to post a draft elsewhere to get feedback before I officially publish it.

I second this feature request.

ETA: I did not notice earlier Steve Rayhawk made the same comment.

Comment author: Steve_Rayhawk 06 April 2010 11:53:01AM 1 point [-]

I wish there were a "public drafts" feature on LessWrong

Seconded. See also JenniferRM on editorial-level versus object-level comments.

Comment author: Kevin 03 April 2010 09:40:17PM 6 points [-]

Applied rationality April Edition: convince someone with currently incurable cancer to sign up for cryonics: http://news.ycombinator.com/item?id=1239055

Hacker News rather than Reddit this time, which makes it a little easier.

Comment author: scotherns 14 April 2010 10:21:36AM 1 point [-]

I've been trying to do this since November for a close family member. So far the reaction has been fairly positive, but she has still not decided to go for it.

Comment author: NancyLebovitz 19 April 2010 01:11:29PM 5 points [-]

Karma creep: It's pleasant to watch my karma going up, but I'm pretty sure some of it is for old comments, and I don't know of any convenient way to find out which ones.

If some of my old comments are getting positive interest, I'd like to revisit the topics and see if there's something I want to add. For that matter, if they're getting negative karma, there may be something I want to update.

Comment author: RobinZ 19 April 2010 02:25:37PM 1 point [-]

The only way I know to track karma changes is to keep an old tab with my Recent Comments visible and compare it to the new one. That captures a lot of the change (>90%), but not the old threads.

I would love to know how hard it would be to have a "Recent Karma Changes" feed.

Comment author: beriukay 05 April 2010 02:38:45PM 5 points [-]

Perhaps the folks at LW can help me clarify my own conflicting opinions on a matter I've been giving a bit of thought lately.

Until about the time I left for college, most of my views reflected those of my parents. It was a pretty common Republican party-line cluster, and I'm concerned that I have anchored at a point closer to favoring the death penalty than I should be. I read studies about how capital punishment disproportionately harms minorities, and I think Robin Hanson had more to say about differences in social tier. Early in my college time, this sort of problem led me to reject the death penalty on practical grounds. Then, as I lost my religious views, I stopped seeing it as a punishment at all. I started to see it as the same basic thing as putting down an aggressive dog. After all, dead people have a pretty encouraging recidivism rate.

I began to wonder if I could reject the death penalty on principle. A large swath of America believes that the words of the Declaration of Independence are as pertinent to our country as the Constitution. This would mean that we could disallow execution because it conflicts with our "inalienable" right to life. But then, I can't justify using the same argument as the people who try to prove that America is a Christian nation. As an interesting corollary, it seems that anyone citing the Declaration in this manner will have a very hard time also supporting the death penalty for this reason.

So basically, I think I would find the death penalty morally acceptable, but only in the hypothetical realm of virtual certainty that the inmate is guilty of a heinous crime. And I have no bound for what that virtual certainty is. Certainly a 5% chance of being falsely accused is too high. I wouldn't kill one innocent man to rid the world of 19 bad ones. But then, I would kill an innocent person to stop a billion headaches (an example I just read in Steven Landsburg's The Big Questions), so I obviously don't demand 100% certainty.

It seems like I might be asking: "What are the chances that someone was falsely accused, given that they were accused of an execution-worthy crime?" And a follow-up "What is an acceptable chance for killing an innocent person?"

Can Bayes help here? I am eager to hear some actual opinions on this matter. So far I've come up with precious little when talking to friends and family.

Comment author: Unnamed 06 April 2010 05:41:08AM *  9 points [-]

My take on capital punishment is that it's not actually that important an issue. With pretty much anything that you can say about the death penalty, you can say something similar about life imprisonment without parole (especially with the way that the death penalty is actually practiced in the United States). Would you lock an innocent man in a cell for the rest of his life to keep 19 bad ones locked up?

Virtually zero chance of recidivism? True for both. Very expensive? Check. Wrongly convicted innocent people get screwed? Check - though in both cases they have a decent chance of being exonerated after conviction before getting totally screwed (and thus only being partially screwed). Could be considered immoral to do something so severe to a person? Check. Deprives people of an "inalienable" right? Check (life/liberty). Strongly demonstrates society's disapproval of a crime? Check (slight edge to capital punishment, though life sentences would be better at this if the death penalty wasn't an option). Applied disproportionately to certain groups? I think so, though I don't know the research. Strong deterrent? It seems like the death penalty should be a bit stronger, but the evidence is unclear on that. Provides closure to the victim's family? Execution seems like more definitive closure, but they have to wait until years after sentencing to get it.

The criminal justice system is a big important topic, and I think it's too bad that this little piece of it (capital punishment) soaks up so much of our attention to it. Overall, my stance on capital punishment is ambivalent, leaning against it because it's not worth the trouble, though in some cases (like McVeigh) it's nice to have around and I could be swayed by a big deterrent effect. I'd prefer for more of the focus to be on this sort of thing (pdf).

Comment author: Kevin 06 April 2010 05:46:37AM *  2 points [-]

Good post. I have never seen strong evidence that the death penalty has a meaningful deterrent effect but I'd be curious to see links one way or the other.

I lean towards prison abolition, but it's an idealistic notion, not a pragmatic one. I suppose we could start by getting rid of prisons for non-violent crimes and properly funding mental hospitals. http://en.wikipedia.org/wiki/Prison_abolition_movement I can't see that happening when we can't even decriminalize marijuana.

Comment author: Morendil 05 April 2010 05:29:54PM *  3 points [-]

The more judicious question, I am coming to realize, isn't so much "Which of these two Standard Positions should I stand firmly on?"

The more useful question is: why do the positions matter? Why is this discussion, currently crystallized around these standard positions, important to me, and how should I fluidly allow whatever evidence I can find to move me toward some position, which is rather unlikely (given that the debate has been so long crystallized in this particular way) to be among the standard ones? And I shouldn't necessarily expect to stay at that position forever, once I have admitted in principle that new evidence, or changes in other beliefs of mine, must commit me to a change in position on that particular issue.

In the death-penalty debate I identify more strongly with the "abolitionist" standard position because I was brought up in an abolitionist country by left-wing parents. That is, I find myself on the opposite end of the spectrum from you. And yet, perhaps we are closer than is apparent at first glance, if we are both of us committed primarily to investigating the questions of values, the questions of fact, and the questions of process that might leave either or both of us, at the end of the inquiry, in a different position than we started from.

  • Would I revise my "in principle" opposition to the death penalty if, for instance, the means of "execution" were modified to cryonic preservation? Would I then support cryonic preservation as a "punishment" for lesser crimes such as would currently result in lifetime imprisonment?

  • Would I still oppose the death penalty if we had a Truth Machine? Or if we could press Omega into service to give us a negligible probability of wrongful conviction? Or otherwise rely on a (putatively) impartial means of judgment which didn't involve fallible humans? Is that even desirable, if it were at all possible?

  • Would I support the death penalty if I found out it was an effective deterrent, or would I oppose it only if I found that it didn't deter? Does deterrence matter? Why, or why not?

  • How does economics enter into such a decision? How much, whatever position I arrive at, should I consider myself obligated to actively try to ensure that the society I live in espouses that position? For what scope of "the society I live in" - how local or global?

Those are topics and questions I encounter in the process of thinking about things other than the death penalty; practically every important topic has repercussions on this one.

There's an old systems science saying that I think applies to rational discussions about Big Questions such as this one: "you can't change just one thing". You can't decide on just one belief, and as I have argued before, it serves no useful purpose to call an isolated belief "irrational". It seems more appropriate to examine the processes whereby we adjust networks of beliefs, how thoroughly we propagate evidence and argument among those networks.

There is currently something of a meta-debate on LW regarding how best to reflect this networked structure of adjusting our beliefs based on evidence and reasoning, with approaches such as TakeOnIt competing against more individual debate modeling tools, with LessWrong itself, not so much the blog but perhaps the community and its norms, having some potential to serve as such a process for arbitrating claims.

But all these prior discussions seem to take as a starting point that "you can't change just one belief". That's among the consequences of embracing uncertainty, I think.

Comment author: Rain 05 April 2010 05:42:45PM 1 point [-]

Yeah, that's why I try to avoid hot topics. Too much work.

Comment author: Morendil 05 April 2010 06:01:32PM 1 point [-]

Well, even relatively uncontroversial topics have the same entangled-with-your-entire-belief-network quality to them, but (to most people) less power to make you care.

The judicious response to that is to exercise some prudence in the things you choose to care about. If you care too much about things you have little power to influence and could easily be wrong about, you end up "mind-killed". If you care too little and about too few things except for basic survival, you end up living the kind of life where it makes little difference how rational you are.

The way it's worked out for me is that I've lived through some events which made me feel outraged, and for better or for worse the outrage made me care about some particular topics, and caring about these topics has made me want to be right about them. Not just to associate myself with the majority, or with a set of people I'd pre-determined to be "the right camp to be in", but to actually be right.

Comment author: Rain 05 April 2010 02:50:45PM *  3 points [-]

Standard response: politics is the mind-killer.

Personal response: I'm opposed to the death penalty because it costs more than putting them in prison for life due to the huge number of appeals they're allowed (vaguely recall hearing in newspapers / reports). I feel the US has become so risk-averse and egalitarian that it cannot properly implement a death penalty. This is reflected in the back-and-forth questions you ask.

I also oppose it on the grounds that it is often used as a tool of vengeance rather than justice. Nitrogen asphyxiation (I think that was the gas they were talking about) is a safe, highly reliable, and reportedly euphoric means of death, but the US still prefers electrocution (which can take minutes), injection (which can feel like the veins are burning from the inside out while the body is paralyzed), etc.

That said, I don't care enough about the topic to try and alter its use, whether through voting, polling, letters, etc, nor do I desire to put much thought into it. Best to let hot topics alone.

And after asking about Bayes, you should ask for math rather than opinions.

Comment author: Kevin 06 April 2010 05:31:21AM *  2 points [-]

There is strong Bayesian evidence that the USA has executed one innocent man. http://en.wikipedia.org/wiki/Cameron_Todd_Willingham By that I mean that an Amanda Knox test type analysis would clearly show that Willingham is innocent, probably with greater certainty than when the Amanda Knox case was analyzed. Does knowing that the USA has indeed provably executed an innocent person change your opinion?

What are the practical advantages of death over life in prison? US law allows for true life without parole. Life in an isolated cell in a Supermax prison is continual torture -- it is not a light punishment by any means. Without a single advantage given for the death penalty over life in prison without parole, I think that ~100% certainty is needed for execution.

I am against the death penalty for regular murder and mass murder and aggravated rape. I am indifferent with regards to the death penalty for crimes against humanity as I recognize that symbolic execution could be appropriate for grave enough crimes.

Comment author: beriukay 06 April 2010 11:12:06AM 1 point [-]

Kevin, thank you for the specific example. It definitely strengthened my practical objection to the practice. I strongly suspect that the current number of false positives lies outside of my acceptance zone.

Rain, I agree that politics is a mind-killer, but thought it worthy of at least brushing the cobwebs off some cached thoughts. Good point about Nitrogen. I wonder why we choose gruesome methods when even CO would be cheap, easy and effective.

Morendil, I appreciate the other questions. You have a good point that if Omega were brought in on the justice system, it would definitely find better corrective measures than the kill command. I think Eliezer once talked about how predicting your possible future decisions is basically the same as deciding. In that case, I already changed many things on this Big Question, and am just finally doing what I predicted I might do last time I gave any thought to capital punishment. Which happened to be at the conclusion (if there is such a thing) of a murder trial where my friend was a victim. Lots of bias to overcome there, methinks.

Unnamed, interesting points. I hadn't actually considered how similar life imprisonment is to execution, with regard to the pertinent facts. I was recently introduced to the concept of restorative justice which I think encompasses your article. I find it particularly appealing because it deals with what works, instead of worthless Calvinist ideals like punishment. From my understanding, execution only fulfills punishment in the most trivial of senses.

Comment author: wedrifid 06 April 2010 07:13:39AM 1 point [-]

I am against the death penalty for regular murder and mass murder and aggravated rape. I am indifferent with regards to the death penalty for crimes against humanity as I recognize that symbolic execution could be appropriate for grave enough crimes.

"Crimes against humanity" is one of the crimes that for most practical purposes means "... and lost".

Comment author: Yvain 01 April 2010 08:11:37PM *  5 points [-]

The London meet is going ahead. Unless someone proposes a different time, or taw's old meetings are still going on and I just didn't know about them, it will be:

5th View cafe, on top of Waterstone's bookstore near Piccadilly Circus, Sunday, April 4 at 4PM

Roko, HumanFlesh, I've got your numbers and am hoping you'll attend and rally as many Londoners as you can.

EDIT: Sorry, Sunday, not Monday.

Comment author: ciphergoth 02 April 2010 09:56:28AM 3 points [-]

Found this entirely by chance - do a top level post?

Comment author: Eliezer_Yudkowsky 02 April 2010 10:57:33PM 1 point [-]

Do a top-level post.

Comment author: ciphergoth 03 April 2010 12:45:59PM 1 point [-]

Done. I hesitated as I wasn't in any sense the organiser of this event, just someone who had heard about it, but better me than no-one!

Comment author: PhilGoetz 01 April 2010 09:40:57PM *  11 points [-]

I have a couple of problems with anthropic reasoning, specifically the kind that says it's likely we are near the middle of the distribution of humans.

First, this relies on the idea that a conscious person is a random sample drawn from all of history. Okay, maybe; but it's a sample size of 1. If I use anthropic reasoning, I get to count only myself. All you zombies were selected as a side-effect of me being conscious. A sample size of 1 has limited statistical power.

ADDED: Although, if the future human population of the universe were over 1 trillion, a sample size of 1 would still give 99% confidence.

Second, the reasoning requires changing my observation. My observation is, "I am the Xth human born." The odds of being the 10th human and the 10,000,000th human born are the same, as long as at least 10,000,000 humans are born. To get the doomsday conclusion, you have to instead ask, "What is the probability that I was human number N, where N is some number from 1 to X?" What justifies doing that?
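For contrast, the doomsday-style calculation PhilGoetz is questioning can be made explicit. The sketch below is my own illustration (the hypothesis set and the round numbers are assumptions, not from the thread): it conditions on "I am the Xth human born" under the uniform-draw assumption, where the likelihood is P(rank = X | N total humans) = 1/N for X ≤ N.

```python
from fractions import Fraction

def doomsday_posterior(rank, hypotheses):
    """Posterior over total-population hypotheses N, given one's birth rank,
    assuming equal prior weight on each N and that a conscious observer is a
    uniform random draw from all humans: P(rank | N) = 1/N if rank <= N,
    else 0."""
    likelihoods = {n: Fraction(1, n) if rank <= n else Fraction(0)
                   for n in hypotheses}
    total = sum(likelihoods.values())
    return {n: lik / total for n, lik in likelihoods.items()}

# A 100-billionth human comparing "200 billion humans ever" against
# "200 trillion humans ever", with equal priors:
posterior = doomsday_posterior(10**11, [2 * 10**11, 2 * 10**14])
```

The likelihood ratio is 1000:1 in favor of the smaller total, so the posterior is 1000/1001 versus 1/1001. All of the inferential force comes from the 1/N likelihood term, which is exactly the step PhilGoetz asks for a justification of: without it, being the 10th or the 10,000,000th human born is equally probable.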

Comment author: beriukay 14 April 2010 08:51:48AM 4 points [-]

A recent study (hiding behind a paywall) indicates people overestimate their ability to remember and underestimate the usefulness of learning. More ammo for the sophisticated arguer and the honest enquirer alike.

Comment author: Risto_Saarelma 21 April 2010 12:44:28PM 2 points [-]

Available without the paywall from the author's home page.

Comment author: NancyLebovitz 14 April 2010 08:55:53AM 1 point [-]

It's also an argument in favor of using checklists.

Comment author: Kevin 04 April 2010 10:01:01PM 4 points [-]

US Government admits that multiple-time convicted felon Pfizer is too big to fail. http://www.cnn.com/2010/HEALTH/04/02/pfizer.bextra/index.html?hpt=Sbin

Did the corporate death penalty fit the crime(s)? Or, how can corporations be held accountable for their crimes when their structure makes them unpunishable?

Comment author: Amanojack 05 April 2010 11:27:34AM 2 points [-]

The causes of "too big to fail" are:

  1. Corporate personhood laws make it harder to punish the actual people in charge.

  2. Problems in tort law (in the US) make it difficult to sue corporations for certain kinds of damages.

  3. A large government (territorial monopoly of jurisdiction) makes it more profitable for any sufficiently large company to use the state as a bludgeon against its competitors (lobbying, bribes, friends in high places) instead of competing directly on the market.

  4. Letting companies that waste resources go bankrupt causes short-term damage to the economy, but it is healthy in the long term because it allows more efficient companies to take over the tied-up talent and resources. Politicians care more about the short term than the long term.

  5. For pharmaceutical companies there is an additional embiggening factor. Testing for FDA drug approval costs millions of dollars, which constitutes a huge barrier to entry for smaller companies. Hence the large companies can grow larger with little competition. This is amplified by 1 and 2, and 3 suggests that most of the competition among Big Pharma is over legislators and regulators, not market competition.

Disclosure: I am a "common law" libertarian (I find all monopolies counterproductive, including state governments).

Comment author: NancyLebovitz 05 April 2010 01:39:48PM 3 points [-]

I'd add trauma from the Great Depression (amplified by the Great Recession) which means that any loss of jobs sounds very bad, and (not related to the topic but a corollary) anything which creates jobs can be made to sound good.

Comment author: taw 04 April 2010 02:15:07AM 4 points [-]

Is there any evidence that Bruce Bueno de Mesquita is anything other than a total fraud?

Am I missing something here?

Comment author: Amanojack 01 April 2010 08:03:23PM 4 points [-]

Why doesn't brain size matter? Why is a rat with its tiny brain smarter than a cow? Why does the cow bother devoting all those resources to expensive gray matter? Eliezer posted this question in the February Open Topic, but no one took a shot at it.

FTA: "In the real world of computers, bigger tends to mean smarter. But this is not the case for animals: bigger brains are not generally smarter."

This statement seems ripe for semantic disambiguation. Cows can "afford" a larger brain than rats can, and although "large cow brain < small rat brain", it seems highly likely that "large cow brain > small cow brain". The fact that a large cow brain is wildly inefficient compared to a more optimized smaller brain is irrelevant to natural selection, a process that "search[es] the immediate neighborhood of its present point in the solution space, over and over and over." It's not as if cow evolution is an intelligent being that can go take a peek at rat evolution and copy its processes.
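The "immediate neighbor" search described above is essentially hill climbing, and its failure mode (getting stuck on a local peak rather than finding the global optimum) is easy to demonstrate. A toy sketch, with an invented fitness landscape:

```python
import random

def hill_climb(fitness, start, neighbors, steps=1000, seed=0):
    """Greedy local search: only ever move to a strictly better neighbor."""
    rng = random.Random(seed)
    x = start
    for _ in range(steps):
        candidate = rng.choice(neighbors(x))
        if fitness(candidate) > fitness(x):
            x = candidate
    return x

# Toy landscape: a local peak at x=2 (fitness 5), a global peak at x=10 (fitness 9),
# separated by a valley the search would have to cross downhill.
def fitness(x):
    return {2: 5, 10: 9}.get(x, -abs(x - 2))

def neighbors(x):
    return [x - 1, x + 1]  # single-step "mutations" only

print(hill_climb(fitness, start=0, neighbors=neighbors))  # ends at 2, never reaches 10
```

The search climbs to the nearby peak at 2 and stays there: every single-step mutation away from it is worse, just as cow evolution cannot "peek" at the rat design.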

Still, why don't we see such apparent resource-wasting in other organs? My guess is that the brain is special, in that

1) As with other organs, it seems plausible that the easiest/fastest "immediate neighbor" adaptation to selective pressure on a large animal to acquire more intelligence is simply to grow a larger brain.

2) But in contrast with other organs, if a larger brain is very expensive (hard for the rat to fit into tight places, scampers slower, requires much more food), there are other ways to dramatically improve brain performance - albeit ones that natural selection may be slower to hit upon. Why slower? Presumably because they are more complex, less suited to an "immediate neighbor" search, more suited to an intelligent search or re-design. (The evolution process would be even slower in large animals with longer life cycles.)

I bolded "dramatically" because the possibility of substantial intelligence gains by code optimization alone (without adding parallel processors, for instance) also seems to be a key factor in the AI "FOOM" argument. Maybe that's a clue.

Comment author: rwallace 02 April 2010 10:36:23AM 4 points [-]

Be careful about making assumptions about the intelligence of cows. I used to think sheep were stupid, then I read that sheep can tell humans apart by sight (which is more than I can do for them!), and I realized on reflection I never had any actual reason to believe sheep were stupid, it was just an idea I'd picked up and not had any reason to examine.

Also, be careful about extrapolating from the intelligence of domestic cows (which have lived for the last few thousand years with little evolutionary pressure to get the most out of their brain tissue) to the intelligence of their wild relatives.

Comment author: Bo102010 02 April 2010 12:25:47PM 2 points [-]

I'm not sure if it's useful to speak of a domesticated animal's raw "intelligence" by citing how they interact with humans.

"Little evolutionary pressure" means "little NORMAL evolutionary pressure" for animals protected by humans. That is, surviving and propagating is less about withstanding normal natural situations, and more about successfully interacting with humans.

So, sheep/cows/dogs/etc. might have pools of genius in the area of "find a human that will feed you," and may be really dumb in almost all other areas.

Comment author: JamesAndrix 01 April 2010 09:30:47PM 3 points [-]

At the risk of repeating the same mistake as my previous comment, I'll do armchair genetics this time:

Perhaps genes controlling the size of various mammalian organs and body regions tend to grow or shrink uniformly, and only become disproportionate when there is a stronger evolutionary pressure. When there is a mutation leading to more growth, all the organs tend to grow more.

Comment author: JamesAndrix 01 April 2010 09:21:12PM 1 point [-]

(I now see this answered in the first few comments on the link Eliezer posted.)

Purely armchair neurology: To answer the question of why cow brains would need to be bigger than rat brains, I asked what would go wrong if we put a rat brain into a cow. (Ignoring organ rejection and cheese crazed, wall-eating cows)

We would need to connect the rat brain to the cow body, but there would not be a 1-to-1 correspondence of connections. I suspect that a cow has many more nerve endings throughout its body. At least some of the brain/body correlation must be related to servicing the body nerves (both sensory and motor).

Comment author: PhilGoetz 01 April 2010 09:46:30PM 4 points [-]

The cow needs more receptors, and more activators. However, this would lead one to expect the relationship of brain size to body size to follow a power law with an exponent of 2/3 (for receptors, which are primarily on the skin); or of 1 (for activators, which might be in number proportional to volume). The actual exponent is 3/4. Scientists are still arguing over why.
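The gap between those candidate exponents is large in absolute terms. A quick sketch of predicted brain mass for a cow-sized body under each exponent (the reference point and body masses are illustrative numbers, not measured values):

```python
# Allometric scaling: brain mass proportional to body_mass ** k.
# Constant pinned so a 0.3 kg "rat" gets a 2 g brain (illustrative reference only).
def brain_mass(body_kg, k, ref_body=0.3, ref_brain=0.002):
    c = ref_brain / ref_body**k
    return c * body_kg**k

cow = 600.0  # kg, illustrative
for k in (2/3, 3/4, 1.0):  # surface (receptors), observed, volume (activators)
    print(f"k = {k:.2f}: predicted brain ~ {1000 * brain_mass(cow, k):.0f} g")
```

Scaling up by a factor of 2000 in body mass, the k = 1 prediction comes out several times larger than the k = 2/3 prediction, so the observed 3/4 exponent is empirically distinguishable.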

Comment author: Erik 06 April 2010 07:34:17AM *  3 points [-]

West and Brown have done some work on this, which seemed pretty solid to me when I read it a few months ago. The basic idea is that biological systems are designed in a fractal way, which messes up the dimensional analysis.

From the abstract of http://jeb.biologists.org/cgi/content/abstract/208/9/1575:

We have proposed a set of principles based on the observation that almost all life is sustained by hierarchical branching networks, which we assume have invariant terminal units, are space-filling and are optimised by the process of natural selection. We show how these general constraints explain quarter power scaling and lead to a quantitative, predictive theory that captures many of the essential features of diverse biological systems. Examples considered include animal circulatory systems, plant vascular systems, growth, mitochondrial densities, and the concept of a universal molecular clock. Temperature considerations, dimensionality and the role of invariants are discussed. Criticisms and controversies associated with this approach are also addressed.

A Science article of theirs containing similar ideas: http://www.sciencemag.org/cgi/content/abstract/sci;284/5420/1677

Edit: A recent Nature article showing that there are systematic deviations from the power law, somewhat explainable with a modified version of the model of West and Brown:

http://www.nature.com/nature/journal/v464/n7289/abs/nature08920.html

Comment author: Oscar_Cunningham 01 April 2010 05:19:58PM *  4 points [-]

My parents are both vegetarian, and have been since I was born. They brought me up to be a vegetarian. I'm still a vegetarian. Clearly I'm on shaky ground, since my beliefs weren't formed from evidence, but purely from nurture.

Interestingly, my parents became vegetarian because they perceived the way animals were farmed to be cruel (although they also stopped eating non-farmed animals such as fish); however, my rationalization for not eating meat is that the killing of animals is wrong (generalising from the belief that killing humans is worse than mistreating them). Since eating meat is not necessary to live, it must therefore be as bad as hunting for fun, which is much more widely disapproved of. (I'm not a vegan, and I often eat sweets containing gelatine; if asked to explain this, I would rationalise that eating these things causes the death of many fewer animals than actually eating, like, steak.)

But having read all of Eliezer's posts, I now realise that I could have come up with that rationalisation even if eating meat were not wrong, and that I'm now in just as bad a position as a religious believer. I want a crisis of faith, but I have a problem... I don't know where to go back to. There's no objective basis for morality. I don't know what kind of evidence I should condition on (I don't know what would be different about the world if eating meat was good instead of bad). If a religious person realises they have no evidence, they should go back to their priors. Because god has a tiny prior, they should immediately stop believing. I don't know exactly what the prior on "killing animals is wrong" is, but I think it has a reasonable size (certainly larger than that for god), and I feel more justified in being vegetarian because of this. What should I do now?

Footnote: I probably don't have to say this, but I don't want arguments for or against vegetarianism, simply advice on how one should challenge one's own moral beliefs. I've used "eating meat" and "killing animals" interchangeably in my post, because I think that they are morally equivalent due to supply and demand.

Comment author: Bongo 01 April 2010 06:23:25PM *  7 points [-]

I hope this isn't a vegetarianism argument, but remember that you have to rehabilitate both killing and cruelty to justify eating most meat, even if killing alone has held you back so far.

Comment author: Oscar_Cunningham 01 April 2010 06:51:00PM 3 points [-]

That's an excellent point, and one I may not have spotted otherwise. Thank you.

Comment author: Alicorn 01 April 2010 06:10:07PM 6 points [-]

Do you want to eat meat?

Or do you just want to have a good reason for not wanting to eat meat?

It's... y'know... food. I don't have an ethical objection to peppermint but I don't eat it because I don't want to.

Comment author: [deleted] 02 April 2010 02:41:59PM *  2 points [-]

.

Comment author: Jayson_Virissimo 02 April 2010 06:45:19PM *  1 point [-]

What is worse? Death, or a life of pain?

Is a state of nonexistence(death) truly a negative, or is it the most neutral of all states?

If Omega told me that the rest of my life would be more painful than it was pleasant I would still choose to live. I think most others here would choose similarly (except in cases of extreme pain like torture).

Comment author: cupholder 01 April 2010 07:21:07PM 1 point [-]

I don't know exactly what the prior on "killing animals is wrong" is, but I think it has a reasonable size (certainly larger than that for god), and I feel more justified in being vegetarian because of this.

Is it meaningful to put a probability on 'killing animals is wrong' and absolute moral statements like that? Feels like trying to put a probability on 'abortion is wrong' or 'gun control is wrong' or '(insert your pet issue here) is wrong/right' or...

Comment author: wnoise 01 April 2010 05:05:49PM *  9 points [-]

Some fantastic singularity-related jokes here:

http://crisper.livejournal.com/242730.html

Comment author: Mass_Driver 01 April 2010 05:21:36PM 2 points [-]

Voted up for having jokes with cautionary power, and not just amusement value.

Comment author: RichardKennaway 20 April 2010 07:20:27PM *  3 points [-]

Does brain training work? Not according to an article that has just appeared in Nature. Paper here, video here or here.

These results provide no evidence for any generalized improvements in cognitive function following brain training in a large sample of healthy adults. This was true for both the ‘general cognitive training’ group (experimental group 2) who practised tests of memory, attention, visuospatial processing and mathematics similar to many of those found in commercial brain trainers, and for a more focused training group (experimental group 1) who practised tests of reasoning, planning and problem solving. Indeed, both groups provided evidence that training-related improvements may not even generalize to other tasks that use similar cognitive functions.

Note that they were specifically looking for transfer effects. The specific tasks practised did themselves show improvements.

Comment author: NancyLebovitz 06 April 2010 02:48:05PM *  3 points [-]

Rats have some ability to distinguish between correlation and causation

To get back to the rat study—it's very simple actually. What I did is: I had the rats learn that a light, a little flashing light in a Pavlovian box, is followed sometimes by a tone and sometimes by food. So they might have used Pavlovian conditioning; just as I said, Pavlovian conditioning might be the substrate by which animals learn to piece together spatial maps and maybe causal maps as well. If they treat the light as a common cause of the tone and of food, they see [hear] the tone and they predict food might happen. Just like if you see the barometer drop then you think, "Oh, the storm might happen." But, if you see someone tamper with the barometer and you know that the barometer and the storm aren't causally related, then you won't think that the weather is going to change. So, the question is, if the rat intervenes to make the tone happen, will it now no longer think the food will occur.

So there were a bunch of rats; they all had the same training—light as an antecedent to tone and food. Then, at test, some of the rats got tone and they tended to go look in the food section. So they were expecting food based on the tone—which humans would says is a diagnostic reasoning process. “Tone is there because light causes tone and light also causes food. Oh, there must be food.” Or, it's just second order Pavlovian conditioning. The critical test was with another group of rats that got the same training. We gave them a lever that they had never had before. They were in this box, and they have a lever that is rigged so that if they press the lever the tone will immediately come up. So now the question is, do the rats attribute that tone to being caused by themselves. That is, did they intervene to make that variable change? If they thought that they were the cause of the tone, that means it couldn't have been the light, therefore the other effects of the light, food, would not have been expected. In that case, the intervening rats, after hearing the tone of their own intervention, should not expect food. Indeed, they didn't go to food nearly as much. That is the essence of the finding and how it fits in with this idea of causal models and how we go about testing our world.

the abstract

Comment author: CronoDAS 05 April 2010 02:10:14AM *  3 points [-]

My mother's sister has two children. One is eleven and one is seven. They are both being given an unusually religious education. (Their mother, who is Catholic, sent them to a prestigious Jewish pre-school, and they seem to be going through the usual Sunday School bullshit.) I find this disturbing and want to proselytize for atheism to them. Any advice?

ETA: Their father is non-religious. I don't know why he's putting up with this.

Comment author: Unnamed 06 April 2010 06:03:55AM 5 points [-]

I wouldn't proselytize too directly - you want to stay on their (and their mother's) good side, and I doubt it would be very effective anyways. You're better off trying to instill good values - open-mindedness, curiosity, ability to think for oneself, and other elements of rationality & morality - rather than focusing on religion directly. Just knowing an atheist (you) and being on good terms with him could help lead them to consider atheism down the road at some point, which is another reason why it's important to maintain a good relationship. Think about the parallel case of religious relatives who interfere with parents who are raising their kids non-religiously - there are a lot of similarities between their situation and yours (even though you really are right and they just think they are) and you could run into a lot of the same problems that they do.

I haven't had the chance to try it out personally, but Dale McGowan's blog seems useful for this sort of thing, and his books might be even more useful.

Comment author: sketerpot 07 April 2010 08:38:10PM 2 points [-]

I think that's some very good advice, and I'd like to elaborate a bit. The thing that made me ditch my religion was the fact that I already had a secular, socially liberal, science-friendly worldview, and it clashed with everything they said in church. That conflict drove my de-conversion, and made it easier for me to adjust to atheism. (I was even used to the idea, from most of my favorite authors mentioning that they weren't religious. Harry Harrison, in particular, had explicitly atheistic characters as soon as his publishers would let him.)

So, yeah, subtlety is your friend here.

Comment author: RobinZ 05 April 2010 11:28:23AM 3 points [-]

Dangerous situation!

How do the parents feel about science and science fiction? I believe that stuff has good effects.

Comment author: Kevin 05 April 2010 02:34:43AM 3 points [-]

One thing to do is make sure the kids understand that the Bible is just a bunch of stories. My mom teaches Reform Jewish Sunday school and makes this clear to her students. I make fun of her for cranking out little atheists.

Teaching that the Bible is a bunch of stories written by multiple humans over time is not nearly as offensive as preaching atheism. Start there. This bit of knowledge should be enough to get your young relatives thinking about religion, if they want to start thinking about it.

Comment author: NancyLebovitz 05 April 2010 10:46:49PM 2 points [-]

I'm not speaking from experience here, but that doesn't stop me from having opinions.

I don't believe this is an emergency. Are the kids' lives being affected negatively by the religion? What do they think of what they're being taught?

Actually, this could be an emergency if they're being taught about Hell. Are they? Is it haunting them?

Their minds aren't a battlefield between you and religious school-- what they believe is, well not exactly their choice because people aren't very good at choosing, but more their choice than yours.

I recommend teaching them a little thoughtful cynicism, with advertisements as the subject matter.

Comment author: CronoDAS 06 April 2010 02:03:15PM *  1 point [-]

Actually, this could be an emergency if they're being taught about Hell. Are they? Is it haunting them?

I haven't seen any evidence that they're being bothered by anything.

Mostly, I just want to make it clear that, unlike a lot of other things they're learning in school, there are a lot of people who have good reasons to think the stories aren't true - to make it clear that there's a difference between "Moses led the Jews out of Egypt" and "George Washington was the first President of the United States."

Comment author: wedrifid 07 April 2010 10:21:12PM *  1 point [-]

Introduce them to really cool, socially near, atheists. In particular, provide contact with attractive opposite-gender children who are a couple of years older and are atheists.

Comment author: [deleted] 05 April 2010 08:15:58PM 1 point [-]

Teach them the basics of Bayesian reasoning without any connection to religion. This will help them in more ways and will lay the foundation for later, when they naturally start questioning religion. Also, their parents won't have anything against it if you merely introduce it as a method for physics or chemistry, or with the standard medical examples.
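The "standard medical example" is easy to spell out; the numbers below are the usual textbook ones (1% base rate, 90% sensitivity, 9% false-positive rate), not from any particular source:

```python
def posterior(prior, sensitivity, false_positive_rate):
    """Bayes' theorem: P(disease | positive test)."""
    p_positive = sensitivity * prior + false_positive_rate * (1 - prior)
    return sensitivity * prior / p_positive

# Even after a positive result on a seemingly accurate test,
# the disease is still improbable because the base rate is so low.
print(posterior(prior=0.01, sensitivity=0.9, false_positive_rate=0.09))  # ~0.092
```

The surprise that a "90% accurate" test yields only a ~9% posterior is exactly the kind of result that gets kids asking how beliefs should respond to evidence.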

Comment author: Amanojack 06 April 2010 04:25:59PM *  1 point [-]

Possibly introducing them to some of the content in A Human's Guide to Words, such as dissolving the question, would lead them to theological noncognitivism. The nice thing about that as opposed to direct atheism is it's more "insidious" because instead of saying, "I don't believe" the kids would end up making more subtle points, like, "What do you even mean by omnipotent?" This somehow seems a lot less alarming to people, so it might bother the parents much less, or even seem like "innocent" questioning.

Comment author: Vladimir_Nesov 02 April 2010 09:50:32AM *  3 points [-]

David Chalmers has written up a paper based on the talk he gave at 2009 Singularity Summit:

From the blog post where he announced the paper:

The main focus is the intelligence explosion that some think will happen when machines become more intelligent than humans. First, I try to clarify and analyze the argument for an intelligence explosion. Second, I discuss strategies for negotiating the singularity to maximize the chances of a good outcome. Third, I discuss issues regarding uploading human minds into computers, focusing on issues about consciousness and personal identity.

Comment author: timtyler 02 April 2010 12:18:08PM *  1 point [-]

Rather sad to see Chalmers embracing the dopey "singularity" terminology.

He seems to have toned down his ideas about development under conditions of isolation:

"Confining a superintelligence to a virtual world is almost certainly impossible: if it wants to escape, it almost certainly will."

Still, the ideas he expresses here are not very realistic, IMO. People want machine intelligence to help them to attain their goals. Machines can't do that if they are isolated off in virtual worlds. Sure there will be test harnesses - but of course we won't keep these things permanently restrained on grounds of sheer paranoia - that would stop us from using them.

53 pages with only 2 mentions of zombies - yay.

Comment author: Kutta 01 April 2010 08:16:33PM *  3 points [-]

PDF: "Are black hole starships possible?"

This paper examines the possibility of using miniature black holes for converting matter to energy via Hawking radiation, and propelling ships with that. Pretty interesting, I think.

I'm no physicist and not very math literate, but there is one issue I pondered: namely, how would it be possible to feed matter to a mini black hole that has an attometer-scale event horizon and radiates petajoules of energy in all directions? The black hole would be an extremely tiny target in a barrier of ridiculous energy density. The paper, rudimentary as it is, does not discuss this feeding issue.
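To get a rough sense of the numbers, the standard Schwarzschild-radius and Hawking-power formulas can be evaluated directly; the attometer figure is taken from the comment above, and this is only a back-of-the-envelope sketch:

```python
import math

G    = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2
c    = 2.998e8    # speed of light, m/s
hbar = 1.055e-34  # reduced Planck constant, J s

def mass_from_radius(r):
    """Mass of a black hole with Schwarzschild radius r: M = r c^2 / (2 G)."""
    return r * c**2 / (2 * G)

def hawking_power(m):
    """Hawking radiation power: P = hbar c^6 / (15360 pi G^2 M^2)."""
    return hbar * c**6 / (15360 * math.pi * G**2 * m**2)

m = mass_from_radius(1e-18)                 # attometer-scale horizon
print(f"mass  ~ {m:.2e} kg")                # hundreds of millions of kg
print(f"power ~ {hawking_power(m):.2e} W")  # on the order of a petawatt
```

So the commenter's intuition checks out: an attometer-radius hole masses several hundred million kilograms yet radiates roughly a petawatt, which is exactly why any feeding mechanism must fight an absurd outward energy flux.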

Comment author: JenniferRM 02 April 2010 07:51:05PM *  3 points [-]

This might be interesting in combination with a "balanced drive". It was invented by science fiction author Charles Sheffield, who attributed it to his character Arthur Morton McAndrew, so it is sometimes also called a "McAndrew Drive" or a "Sheffield Drive".

The basic trick is to put an incredibly dense mass at the end of a giant pole such that the inverse square law of gravity is significant along the length of the pole. The ship flies "mass forward" through space. Then the crew cabin (and anything else incapable of surviving enormous acceleration) is set up on the pole so that the faster the acceleration, the closer it is to the mass. The cabin, flying "floor forward", changes its position while the floor flexes as needed so that the net effect of the ship's acceleration plus the force of gravity balances out to something tolerable. When not under acceleration you still get gravity in the cabin by pushing it out to the very tip of the pole.
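The geometry is a one-line balance equation: in the cabin frame the drive contributes a pseudo-acceleration away from the mass and gravity pulls toward it, so the comfortable position satisfies G M / d^2 − a = g. A quick sketch (the mass and accelerations are invented for illustration):

```python
import math

G = 6.674e-11  # m^3 kg^-1 s^-2
g = 9.81       # m/s^2, desired cabin gravity

def cabin_distance(mass_kg, accel):
    """Distance from the mass where net felt acceleration equals 1 g.

    Balance in the cabin frame: G * M / d**2 - accel = g,
    so d = sqrt(G * M / (accel + g)).
    """
    return math.sqrt(G * mass_kg / (accel + g))

M = 1e18  # kg, a hypothetical ultradense disc
for gees in (0, 10, 50):
    d = cabin_distance(M, gees * g)
    print(f"{gees:>2} g drive -> cabin sits {d:,.0f} m from the mass")
```

As the drive throttles up, the cabin slides closer to the mass so the stronger gravity cancels the extra acceleration, exactly as described above.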

The literary value of the system is that you can do reasonably hard science fiction and still have characters jaunt from star to star so long as they are willing to put up with the social isolation because of time dilation, but the hard part is explaining what the mass at the end of the pole is, and where you'd get the energy to move it.

If you could feed a black hole enough to serve as the mass while retaining the ability to generate Hawking radiation, that might do it. Or perhaps simply postulating technological control of quantum black holes and then use two in your ship: a big one to counteract acceleration and a small one to get energy from a "Crane-Westmoreland Generator".

Comment author: wnoise 01 April 2010 08:53:28PM 3 points [-]

I prefer links to the abstract, when possible.

http://arxiv.org/abs/0908.1803

Comment author: Rain 01 April 2010 03:38:34PM *  3 points [-]

What do you value?

Here are some alternate phrasings in an attempt to find the same or similar reasoning (it is not clear to me whether these are separate concepts):

  • What are your preferences?
  • How do you evaluate your actions as proper or improper, good or bad, right or wrong?
  • What is your moral system?
  • What is your utility function?

Here's another article asking a similar question: Post Your Utility Function. I think people did a poor job answering it back then.

Comment author: Rain 01 April 2010 04:48:53PM *  3 points [-]

I value empathy. Unfortunately, it's a highly packed word in the way I use it.

Attempting a definition, I'd say it involves creating the most accurate mental models of what people want, including oneself, and trying to satisfy those wants. This makes it a recursive and recursively self-improving model (I think), since one thing I want is to know what else I, and others, want. To satisfy that want, I have to constantly get better at want-knowing.

The best way to determine and to satisfy these preferences appears to be through the use of rationality and future prediction, creating maps of minds and chains of causality, so I place high value on those skills. Without the ability to predict the future or map out minds, "what people want" becomes far too close to wireheading or pure selfishness.

Empathy, to me, involves trying to figure out what the person would truly want, given as much understanding and knowledge of the consequences as possible, contrasting with what they say they want.

Comment author: Clippy 01 April 2010 05:08:15PM 3 points [-]

Take a wild, wild guess.

No rush -- I'll wait.

Comment author: Rain 01 April 2010 05:39:53PM *  6 points [-]

I would guess "paperclips and things which are paperclippy", but that still leaves many open questions.

Is 100 paperclips which last for 100 years better than 1 paperclip which lasts for 100,000 years?

How about one huge paperclip the size of a planet? Is that better or worse than a planetary mass turned into millimeter sized paperclips?

Or maybe you could make huge paperclippy-shapes out of smaller paperclips: using paperclip-shaped molecules to form tiny paperclips which you use to make even bigger paperclips. But again, how long should it last? Would you create the most stable paperclips possible, or the most paperclippy paperclips possible?

And how much effort would you put into predicting and simplifying the future (modeling, basic research, increases in ability to affect the universe, active reductions to surrounding complexity, etc.) instead of making paperclips? You could spend your entire existence in the quest for the definition to ultimate paperclippiness...

Comment author: Clippy 01 April 2010 06:09:09PM 11 points [-]

Well, User:Rain, that's about the story of my existence right there. What kinds of paperclips are the right ones? What tradeoffs should I make?

However, regarding the specific matters you bring up, they are mostly irrelevant. Yes, there could be some conceivable situation in which I have to trade off paperclips now against paperclips later. But the way it usually works is that once I obtain or make a paperclip, I move it to the safe zone, where I'll pretty much have it forever. Also, it's obviously the number of paperclips that matters, and the constraint on bizarre paperclips is obviously that they have to be able to (counterfactually) hold sheets of paper together.

If you want to get past this abstract philosophizing and on to some concrete problems, it would be better to talk about the dilemma that User:h-H posed to me, in which I must consider alternate models of paperclipping that don't have the shape of standard paperclips. Here's my recent progress on thinking about the issue.

My current difficulty is extrapolating my values to cover unexpected situations like this, starting from the simplest algorithm I can find which generates my current preference. The problem is that I can't think of any simple algorithm that would favor larger numbers of paperclips, but not have them used for anything else, like holding paper together, or unbending them to connect objects (please pardon the repulsive imagery).

So, I'm beginning to think that at least some of the universe should be paper, but I find that quite a repugnant conclusion. It's a work in progress.

Comment author: cousin_it 02 April 2010 12:22:59AM *  1 point [-]

How do you evaluate your actions as proper or improper, good or bad, right or wrong?

I don't fully understand how I tell good from bad. A query goes in, an answer pops out in the form of a feeling. Many of the criteria probably come from my parents, from reading books, and from pleasant/unpleasant interactions with other people. I can't boil it down to any small set of rules that would answer every moral question without applying actual moral sense, and I don't believe anyone else can.

It's easier to give a diff, to specify how my moral sense differs from that of other people I know. The main difference I see is that some years ago I deeply internalized the content of Games People Play and as a result I never demonstrate to anyone that I feel bad about something - I now consider this a grossly immoral act. On the other hand, I cheat on women a lot and don't care too much about that. In other respects I see myself as morally average.

Comment author: NancyLebovitz 04 April 2010 09:16:51AM 2 points [-]

How has not demonstrating to people that you feel bad about something worked out for you?

Comment author: SforSingularity 03 April 2010 03:51:10PM *  4 points [-]

As you grow up, you start to see that the world is full of waste, injustice and bad incentives. You try frantically to tell people about this, and it always seems to go badly for you.

Then you grow up a bit more, get a bit wise, and realize that the mother-of-all-bad-incentives, the worst injustice, and the greatest meta-cause of waste ... is that people who point out such problems get punished, (especially) including pointing out this problem. If you are wise, you then become an initiate of the secret conspiracy of the successful.

Discuss.

Comment author: Mass_Driver 04 April 2010 06:34:59AM 3 points [-]

You try frantically to tell people about this, and it always seems to go badly for you.

Telling people frantically about problems that are not on a very short list of "approved emergencies" like fire, angry mobs, and snakes is a good way to get people to ignore you, or, failing that, to dislike you.

It is only very recently (in evolutionary time) that ordinary people are likely to find important solutions to important social problems in a context where those solutions have a realistic chance of being implemented. In the past, (a) people were relatively uneducated, (b) society was relatively simpler, and (c) arbitrary power was held and wielded relatively more openly.

Thus, in the past, anyone who was talking frantically about social reform was either hopelessly naive, hopelessly insane, or hopelessly self-promoting. There's a reason we're hardwired to instinctively discount that kind of talk.

Comment author: Rain 03 April 2010 07:08:57PM *  1 point [-]

You should present the easily implemented, obviously better solution at the same time as the problem.

If the solution isn't easy to implement by the person you're talking to, then cost/benefit analysis may be in favor of the status quo or you might be talking to the wrong person. If the solution isn't obviously better, then it won't be very convincing as a solution or you might not have considered all opinions on the problem. And if there is no solution, then why complain?

Comment author: Peter_de_Blanc 07 April 2010 02:05:20AM 2 points [-]

I'd like to plug a facebook group:

Once we reach 4,096 members, everyone will donate $256 to SingInst.org.

Folks may also be interested in David Robert's group:

1 million people, $100 million to defeat aging.

Comment author: humpolec 06 April 2010 10:42:37PM 2 points [-]
Comment author: Mass_Driver 04 April 2010 06:26:18AM *  2 points [-]

Does anyone have suggestions for how to motivate sleep? I've hacked all the biological problems so that I can actually fall asleep when I order it, but me-Tuesday generally refuses to issue an order to sleep until it's late enough at night that me-Wednesday will sharply regret not having gone to bed earlier.

I've put a small effort into setting a routine, and another small effort into forcing me-Tuesday to think about what I want to accomplish on Wednesday and how sleep will be useful for that; neither seems to be immediately useful. If I reorganize my entire day around motivating an early bedtime, that often works, but at an unacceptably high cost; the point of going to bed early is to have more surplus time/energy, not to spend all of my time/energy on going to bed.

I am happy to test various hypotheses, but don't have a good sense of which hypotheses to promote or how to generate plausible hypotheses in this context.

Comment author: Nick_Tarleton 04 April 2010 06:26:51PM *  2 points [-]

Melatonin. Also, getting my housemates to harass me if I don't go to bed.

Comment author: gwern 07 April 2010 09:30:34PM 1 point [-]

Mass_Driver's comment is kind of funny to me, since I had addressed exactly his issue at length in my article.

Comment author: Mass_Driver 08 April 2010 03:25:39PM *  1 point [-]

Which, I couldn't help but notice, you have thoughtfully linked to in your comment. I'm new here; I haven't found that article yet.

Comment author: gwern 08 April 2010 04:38:49PM *  3 points [-]

If you're not being sarcastic, you're welcome.

If you're being sarcastic, my article is linked, in Nick_Tarleton's very first sentence; it would be odd for me to simply say 'my article' unless some referent had been defined in the previous two comments, and there is only one hyperlink in those two comments.

Comment author: Amanojack 04 April 2010 05:53:31PM *  1 point [-]

I've been struggling with this for years, and the only thing I've found that works when nothing else does is hard exercise. The other two things that I've found help the most:

  • Let the sun hit your eyelids first thing in the morning (to halt melatonin production)
  • F.lux, a program that auto-adjusts your monitor's light levels (and keep your room lights low at night; otherwise melatonin production will be delayed)

EDIT: Apparently keeping your room lights at a low color temperature (incandescent/halogen instead of fluorescent) is better than keeping them at low intensity:

"...we surmise that the effect of color temperature is greater than that of illuminance in an ordinary residential bedroom or similar environment where a lowering of physiological activity is desirable, and we therefore find the use of low color temperature illumination more important than the reduction of illuminance. Subjective drowsiness results also indicate that reduction of illuminance without reduction of color temperature should be avoided." —Noguchi and Sakaguchi, 1999 (note that these are commercial researchers at Matsushita, which makes low-color-temperature fluorescents)

Comment author: Mass_Driver 05 April 2010 01:44:16PM *  1 point [-]

That all sounds awfully biological -- are you sure fixing monitor light levels is a solution for akrasia?

Comment author: RobinZ 04 April 2010 01:31:25PM *  1 point [-]

What do you do instead of going to bed? I notice myself spending time on the Internet.

Comment author: MatthewB 05 April 2010 03:21:55AM 1 point [-]

Either that or painting. (The latter is harder to do because the cats tend to want to help me paint, yet don't get the necessity of opposable thumbs.)

Since I have had sleep disorders since I was 14, I've got lots of practice at not sleeping (pity there was no internet then)... So, I either read, draw, paint, sculpt, or harass people on the opposite side of the earth who are all wide awake.

Comment author: alyssavance 02 April 2010 04:17:42AM *  2 points [-]

"Update: Tom McCabe has created a sub-Reddit to use for assorted discussions instead of relying on open threads. Go there for the sub-Reddit and discussion about it, and go here to vote on the idea."

Attention everyone: This post is currently broken for some unknown reason. Please use the new post at http://lesswrong.com/lw/212/announcing_the_less_wrong_subreddit_2/ if you want to discuss the sub-Reddit. The address of the sub-Reddit is http://www.reddit.com/r/LessWrong

Comment author: Peter_Twieg 01 April 2010 05:33:15PM *  2 points [-]

I recently got into some arguments with foodies I know on the merits (or lack thereof) of organic / local / free-range / etc. food, and this is a topic where I find it very difficult to find sources of information that I trust as reflective of some sort of expert consensus (insofar as one can be said to exist). Does anyone have any recommendations for books or articles on nutrition/health that hold up under critical scrutiny? I trust a lot of you as filters on these issues.

Comment author: Yvain 11 April 2010 01:47:55PM *  5 points [-]

There are lots of studies on the issue, and as usual most of them are bad and disagree with each other.

I tend to trust the one by the UK Food Standards Agency because it's big and government-funded. Mayo Clinic agrees. I think there are a few studies showing that organic foods do have lower pesticide levels than normal, but nothing showing that this actually leads to health benefits. Pesticides can cause some health problems in farmers, but they're receiving a bajillion times the dose of someone who just eats the occasional carrot. And some "organic pesticides" are just as bad as any synthetic ones. There's also a higher risk of getting bacterial infections from organic food.

Tastewise, a lot of organics people cite some studies showing that organic apples and other fruit taste better than conventional - I can't find the originals of these and there are equally questionable studies that say the opposite. Organic vegetables taste somewhere between the same and worse, even by organic peoples' admission. There's a pretty believable study showing conventional chicken tastes better than organic, and a more pop-sci study claiming the same thing about almost everything. I've seen some evidence that locally grown produce tastes better than imported, but that's a different issue than organic vs. non-organic and you have to make sure people aren't conflating them.

They do produce less environmental damage per unit land, but they produce much less food per unit land and so require more land to be devoted to agriculture. How exactly that works out in the end is complex economics that I can't navigate.

My current belief is that organics have a few more nutrients here and there but not enough to matter, are probably less healthy overall when you consider infection risk, and taste is anywhere from no difference to worse except maybe on a few limited fruits.

Comment author: taw 02 April 2010 01:21:01PM 0 points [-]

The famous meta-analyses showing that vitamin supplementation is essentially useless, or possibly even harmful, totally destroy the basic argument ("oh look, more vitamins!" - not that it's usually even true) that organic is good for your health.

It might still be tastier. Or not.

Comment author: aleksiL 11 April 2010 06:34:38AM 1 point [-]

Do you mean these meta-analyses?

Comment author: taw 11 April 2010 06:36:53PM 1 point [-]

Yes. Even if PhilGoetz is correct that the harmfulness was an artifact, there's still essentially zero evidence for benefits of eating more vitamins than the RDA.

Comment author: RobinZ 07 April 2010 01:08:49AM 1 point [-]

Arithmetic, Population, and Energy by Dr. Albert A. Bartlett, Youtube playlist. Part One. 8 parts, ~75 minutes.

Relatively trivial, but eloquent: Dr. Bartlett describes some properties of exponential functions and their policy implications when there are ultimate limiting factors. Most obvious policy implication: population growth will be disastrous unless halted.

Comment author: Strange7 07 April 2010 01:18:30AM 4 points [-]

People have been worrying about that one since Malthus. Turns out, production capacity can increase exponentially too, and when any given child has a high enough chance of survival, the strategy shifts from spamming lots of low-investment kids (for farm labor) to having one or two children and lavishing resources on them, which is why birthrates in the developed world are dropping below replacement.

Comment author: RobinZ 07 April 2010 02:08:17AM *  1 point [-]

Simple thermodynamics guarantees that any growing consumption of resources is unsustainable on a long enough timescale - even if you dispute the implicit timescale in Dr. Bartlett's talk*, at some point planning will need to account for the fundamental limits. Ignoring the physics is a common error in economics (even professional economics, depressingly).

* Which you appear not to have watched through - for shame!

Comment author: Strange7 07 April 2010 06:24:09PM 3 points [-]

Yes, obviously thermodynamics limits exponential growth. I'm saying that exponential growth won't continue indefinitely, that people (unlike bugs) can, will, and in fact have already begun to voluntarily curtail their reproduction.

Comment author: Jack 07 April 2010 06:32:11PM 2 points [-]

What kind of reproductive memes do you think get selected for?

Comment author: RobinZ 07 April 2010 06:41:19PM 1 point [-]

How strong is the penalty for defection?

Comment author: Jack 07 April 2010 07:43:41PM 2 points [-]

Yeah, this obviously matters a lot. Right now it's low to non-existent outside the People's Republic of China, though I suppose that could change. There are a lot of barriers to effective enforcement of reproductive prohibitions: incredibly difficult-to-solve cooperation issues, organized religions, and assorted rights and freedoms people are used to. A sufficiently strong centralized power could solve the problem, though such a power could be bad for other reasons. My sense is that the prospects for reliable enforcement are low, but obviously a singularity-type superintelligence could change things.

Comment author: bogdanb 08 April 2010 08:30:54AM 2 points [-]

I’m not quite sure that penalties are that low outside China.

There are of course places where penalties for many babies are low, and there are even states that encourage having babies — but the latter is because birth rates are below replacement, so it falls outside our exponential-growth discussion. I'm not sure about the former, but the obvious cases (very poor countries) are already in the Malthusian scenario due to high death rates.

But in (relatively) rich economies there are non-obvious implicit limits to reproduction: you’re generally supposed to provide a minimum of care to children; even more, that “minimum” tends to grow with the richness of the economy. I’m not talking only about legal minimum, but social ones: children in rich societies “need” mobile phones and designer clothes, adolescents “need” cars, etc.

So having children tends to become more expensive in richer societies, even absent explicit legal limits like in China, at least in wide swaths of those societies. (This is a personal observation, not a proof. Exceptions exist. YMMV. “Satisfaction guaranteed” is not a guarantee.)

Comment author: Jack 08 April 2010 04:12:05PM 3 points [-]

The legal minimum care requirement is a good point. With the social minimum: I recognize that this meme exists but it doesn't seem like there are very high costs to disobeying it. If I'm part of a religion with an anti-materialist streak and those in my religious community aren't buying their children designer clothes either... I can't think of what kind of penalty would ensue (whereas not bathing or feeding your children has all sorts of costs if an outsider finds out). It seems better to think of this as a meme which competes with "Reproduce a lot" for resources rather than as a penalty for defection.

Your observation is a good one though.

Comment author: AngryParsley 01 April 2010 03:58:00PM 1 point [-]

Sam Harris gave a TED talk a couple months ago, but I haven't seen it linked here. The title is Science can answer moral questions.

Comment author: cupholder 01 April 2010 06:05:37PM *  3 points [-]

Harris has also written a blog post nominally responding to 'many of my [Harris'] critics' of his talk, but it seems to be more of a reply to Sean Carroll's criticism of Harris' talk (going by this tweet and the many references to Carroll in Harris' post). Carroll has also briefly responded to Harris' response.

Comment author: taw 02 April 2010 01:10:01PM 4 points [-]

It was so filled with wrong I couldn't even bother to finish it, and I usually enjoy crackpots from TED.

Comment author: Vladimir_Nesov 01 April 2010 04:34:48PM 2 points [-]

He argues that science can answer factual questions, thus resolving uncertainty in moral dogmas defined conditionally on those answers. That is different from answering the moral questions themselves.

Comment author: Jack 02 April 2010 02:51:32PM 2 points [-]

That isn't all he is claiming though:

I was not suggesting that science can give us an evolutionary or neurobiological account of what people do in the name of “morality.” Nor was I merely saying that science can help us get what we want out of life. Both of these would have been quite banal claims to make (unless one happens to doubt the truth of evolution or the mind’s dependency on the brain). Rather I was suggesting that science can, in principle, help us understand what we should do and should want—and, perforce, what other people should do and want in order to live the best lives possible. My claim is that there are right and wrong answers to moral questions, just as there are right and wrong answers to questions of physics, and such answers may one day fall within reach of the maturing sciences of mind

Comment author: timtyler 02 April 2010 11:59:50AM 1 point [-]

My reaction was: bad talk, wrong answers, not properly thought through.

Comment author: Liron 02 April 2010 12:15:27AM 1 point [-]

I'm always impressed by Harris's eloquence and clarity of thought.

Comment author: alexflint 13 April 2010 11:00:38PM *  1 point [-]

Having read the quantum physics sequence, I am interested in simulating particles at the level of quantum mechanics (for my own experimentation and education). While the sequence didn't go into much technical detail, it seems that the state of a quantum system comprises an amplitude distribution in configuration space for each type of particle, and that the dynamics of the system are governed by the Schrödinger equation. The usual way to simulate something like this would be to approximate the particle fields as piecewise linear and update iteratively according to the Schrödinger equation. Some questions:

  • Does anyone have a good source for the technical background I will need to implement such a simulation? Specifically, more technical details of the Schrödinger equation (the Wikipedia article is unhelpful).

  • I imagine this will become intractable quite quickly as I try to simulate more complex systems with more particles. How quickly, though? Could I simulate, e.g., the interaction of two H_2 ions in a reasonable time (say, no more than a few hours)?

  • Surely others have tried this. Any links/references would be much appreciated.
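For what it's worth, here is a minimal sketch of the kind of simulation being asked about, restricted to a single particle in one dimension (natural units, hbar = m = 1), using the split-step Fourier method. Everything here - grid size, potential, initial wave packet - is an arbitrary illustrative choice, not a recommendation:

```python
import numpy as np

# One-dimensional time-dependent Schrödinger equation via split-step Fourier.
N, L = 512, 40.0
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)
V = 0.5 * x ** 2                       # harmonic potential; substitute any V(x)
dt, steps = 0.01, 500

# Gaussian wave packet displaced from the well's center, normalized on the grid
psi = np.exp(-(x - 2.0) ** 2).astype(complex)
psi /= np.sqrt(np.sum(np.abs(psi) ** 2) * (L / N))

for _ in range(steps):
    psi *= np.exp(-0.5j * V * dt)                                     # half potential step
    psi = np.fft.ifft(np.exp(-0.5j * k ** 2 * dt) * np.fft.fft(psi))  # full kinetic step
    psi *= np.exp(-0.5j * V * dt)                                     # half potential step

# Each update factor is a pure phase, so total probability stays ~1.0
print(np.sum(np.abs(psi) ** 2) * (L / N))
```

As for the second question: the grid lives in configuration space, so its size grows exponentially with particle count. A pair of hydrogen molecules treated fully quantum-mechanically (four nuclei plus four electrons in three dimensions) would mean a 24-dimensional grid; at even 100 points per dimension that is 10^48 grid points, hopelessly out of reach. This is why quantum chemistry relies on basis-set approximations (Hartree-Fock, density functional theory) rather than direct grid integration.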

Comment author: NancyLebovitz 05 April 2010 10:57:29PM 1 point [-]

An extensive observation-based discussion of why people leave cults. Worth reading, not just for the details, but because it makes very clear that leaving has to make emotional sense to the person doing it. Logical argument is not enough!

People leave because they've been betrayed by leaders, they've been influenced by leaders who are on their own way out of the cult, they find the world is bigger and better than the cult has been telling them, the fears which drove them into the cult get resolved, and/or life changes show that the cult isn't working for them.

Comment author: Amanojack 05 April 2010 10:50:20PM 1 point [-]

I've become a connoisseur of hard paradoxes and riddles, because I've found that resolving them always teaches me something new about rationalism. Here's the toughest beast I've yet encountered, not as an exercise for solving but as an illustration of just how much brutal trickiness can be hidden in a simple-looking situation, especially when semantics, human knowledge, and time structure are at play (which happens to be the case with many common LW discussions).

A teacher announces that there will be a surprise test next week. A student objects that this is impossible: "The class meets on Monday, Wednesday, and Friday. If the test is given on Friday, then on Thursday I would be able to predict that the test is on Friday. It would not be a surprise. Can the test be given on Wednesday? No, because on Tuesday I would know that the test will not be on Friday (thanks to the previous reasoning) and know that the test was not on Monday (thanks to memory). Therefore, on Tuesday I could foresee that the test will be on Wednesday. A test on Wednesday would not be a surprise. Could the surprise test be on Monday? On Sunday, the previous two eliminations would be available to me. Consequently, I would know that the test must be on Monday. So a Monday test would also fail to be a surprise. Therefore, it is impossible for there to be a surprise test.”

Can the teacher fulfill his announcement?

Extensive treatment and relation to other epistemic paradoxes here.

Comment author: thomblake 08 April 2010 04:23:14PM 3 points [-]

Let's not forget that the clever student will be indeed very surprised by a test on any day, since he thinks he's proven that he won't be surprised by tests on those days. It seems he made an error in formalizing 'surprise'.

(imagine how surprised he'll be if the test is on Friday!)

Comment author: Rain 08 April 2010 04:18:15PM 1 point [-]

Why not give a test on Monday, and then give another test later that day? I bet they would be surprised by a second test on the same day.

Comment author: gaffa 05 April 2010 01:51:43PM 1 point [-]

Does anyone know a popular science book about, how should I put it, statistical patterns and distributions in the universe? Things like: what kinds of things follow normal distributions and why, why power laws seem to emerge everywhere, why scale-free networks show up all over the place, etc.

Comment author: DanielVarga 08 April 2010 10:05:48PM *  9 points [-]

Sorry for ranting instead of answering your question, but "power laws emerge everywhere" is mostly bullshit. Power laws are less ubiquitous than some experts want you to believe. And when you do see them, the underlying mechanisms are much more diverse than what these experts will suggest. They have an agenda: they want you to believe that they can solve your (biology, sociology, epidemiology, computer networks etc.) problem with their statistical mechanics toolbox. Usually they can't.

For some counterbalance, see Cosma Shalizi's work. He has many amusing rants, and a very good paper:

Gauss Is Not Mocked

So You Think You Have a Power Law — Well Isn't That Special?

Speaking Truth to Power About Weblogs, or, How Not to Draw a Straight Line

Power-law distributions in empirical data

Note that this is not a one-man crusade by Shalizi. Many experts of the fields invaded by power-law-wielding statistical physicists wrote debunking papers such as this:

http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.21.8169

Another very relevant and readable paper:

http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.11.6305
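For anyone who wants to check their own data, the estimator recommended in the "Power-law distributions in empirical data" paper (Clauset, Shalizi & Newman) is short enough to sketch. The sampling code below just verifies it on a synthetic power law; everything apart from the MLE formula itself is illustrative scaffolding:

```python
import numpy as np

def mle_alpha(x, xmin):
    """Maximum-likelihood exponent for a continuous power law p(x) ~ x^-alpha,
    x >= xmin (Clauset, Shalizi & Newman), instead of a log-log line fit."""
    x = np.asarray(x, dtype=float)
    x = x[x >= xmin]
    return 1.0 + len(x) / np.sum(np.log(x / xmin))

# Sanity check on synthetic data: inverse-CDF sample from a true power law
rng = np.random.default_rng(0)
alpha, xmin = 2.5, 1.0
x = xmin * rng.random(100_000) ** (-1.0 / (alpha - 1.0))

print(mle_alpha(x, xmin))  # close to 2.5
```

The papers' other point is just as important: a straight-ish line on a log-log plot is weak evidence, since lognormals and stretched exponentials produce the same picture over a decade or two, so the MLE should be paired with a goodness-of-fit test before declaring a power law.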

Comment author: RobinZ 08 April 2010 10:41:18PM 4 points [-]

That gives a whole new meaning to Mar's Law.

Comment author: DanielVarga 08 April 2010 11:33:01PM *  2 points [-]

Thank you, I never knew this fallacy had its own name, and I have been annoyed by it for ages - since 2003, actually, when I was working on one of the first online social network services (iwiw.hu). The structure of the network contradicted most of the claims made by the then-famous popular science books on networks: not scale-free (not even truncated power-law), not attack-sensitive, and most of the edges were strong links. Looking at the claims of the original papers instead of the popular science books, the situation was not much better.

Comment author: Cyan 05 April 2010 05:36:01PM 1 point [-]

You could try "Ubiquity" by Mark Buchanan for the power law stuff, but it's been a while since I read it, so I can't vouch for it completely. (Confusingly, Amazon lists three books with that title and different subtitles, all by that author, all published around 2001-2002.)

Comment author: wheninrome15 02 April 2010 12:53:16AM 1 point [-]

Is there any chance that we (a) CAN'T restrict AI to be friendly per se, but (b) (conditional on this impossibility) CAN restrict it to keep it from blowing up in our faces? If friendly AI is in fact not possible, then first generation AI may recognize this fact and not want to build a successor that would destroy the first generation AI in an act of unfriendliness.

It seems to me like the worst case would be that Friendly AI is in fact possible...but that we aren't the first to discover it. In which case AI would happily perpetuate itself. But what are the best and worst case scenarios conditioning on Friendly AI being IMpossible?

Has this been addressed before? As a disclaimer, I haven't thought much about this and I suspect that I'm dressing up the problem in a way that sounds different to me only because I don't fully understand the implications.

Comment author: PhilGoetz 02 April 2010 02:14:20AM 1 point [-]

Is there any chance that we (a) CAN'T restrict AI to be friendly per se, but (b) (conditional on this impossibility) CAN restrict it to keep it from blowing up in our faces?

First, define "friendly" in enough detail that I know that it's different from "will not blow up in our faces".

Comment author: RobinZ 02 April 2010 01:03:28AM 1 point [-]

Such an eventuality would seem to require that (a) human beings are not computable or (b) human beings are not Friendly.

In the latter case, if nothing else, there is [individual]-Friendliness to consider.

Comment author: Kevin 02 April 2010 01:16:51AM *  2 points [-]

I think human history has demonstrated that (b) is certainly true... sometimes I am surprised we are still here.

Comment author: RobinZ 02 April 2010 01:58:12AM 2 points [-]

The argument from (b)* is one of the stronger ones I've heard against FAI.

* Not to be confused with the argument from /b/.

Comment author: ata 02 April 2010 10:59:56AM 1 point [-]

Incidentally, /b/ might be good evidence for (b). It's a rather unsettling demonstration of what people do when anonymity has removed most of the incentive for signaling.

Comment author: taw 02 April 2010 01:23:24PM 2 points [-]

I find chans' lack of signaling highly intellectually refreshing. /b/ is not typical - due to ridiculously high traffic, only meme-infested threads that you can reply to in 5 seconds survive. Normal boards have far better discussion quality.