Open Thread: April 2010

4 Post author: Unnamed 01 April 2010 03:21PM

An Open Thread: a place for things foolishly April, and other assorted discussions.

This thread is for the discussion of Less Wrong topics that have not appeared in recent posts. If a discussion gets unwieldy, celebrate by turning it into a top-level post.

Update: Tom McCabe has created a sub-Reddit to use for assorted discussions instead of relying on open threads.  Go there for the sub-Reddit and discussion about it, and go here to vote on the idea.

Comments (524)

Comment author: SilasBarta 01 April 2010 03:32:35PM 1 point [-]

I know I asked this yesterday, but I was hoping someone in the Bay Area (or otherwise familiar) could answer this:

Monica Anderson: Anyone familiar with her work? She apparently is involved with AI in the SF Bay area, and is among the dime-a-dozen who have a Totally Different approach to AI that will work this time. She made this recent slashdot post (as "technofix") that linked a paper (PDF WARNING) that explains her ideas and also linked her introductory site and blog.

It all looks pretty flaky to me at this point, but I figure some of you must have run into her stuff before, and I was hoping you could share.

Comment author: Eliezer_Yudkowsky 01 April 2010 04:53:12PM 3 points [-]

Trust your intuition.

Comment author: Mass_Driver 01 April 2010 05:22:11PM 2 points [-]

Is there a post about when to trust your intuition?

Comment author: [deleted] 01 April 2010 07:22:37PM 2 points [-]

This comment shows when :)

If you don't like that, I think this gives somewhat of a better idea when you should consider it.

Comment author: [deleted] 02 April 2010 03:06:58PM *  1 point [-]

.

Comment author: pjeby 02 April 2010 01:05:38AM 1 point [-]

It all looks pretty flaky to me at this point, but I figure some of you must have run into her stuff before, and I was hoping you could share.

It looks like a biology-inspired, predictive approach somewhat along the lines of Hawkins' HTMs, except that I've not seen her implementation details spelled out as thoroughly as Hawkins'.

Her analysis seems sound to me (in the sense that her proposed model quite closely matches how humans actually get through the day), except that she seems to elevate certain practical conclusions to a philosophical level that's not really warranted (IMO).

(Of course, I think there would likely be practical problems with AN-based systems being used in general applications -- humans tend to not like it when machines guess, especially if they guess wrong. We routinely prefer our tools to be stupid-but-predictable over smart-but-surprising.)

Comment author: Rain 01 April 2010 03:38:34PM *  3 points [-]

What do you value?

Here are some alternate phrasings in an attempt to find the same or similar reasoning (it is not clear to me whether these are separate concepts):

  • What are your preferences?
  • How do you evaluate your actions as proper or improper, good or bad, right or wrong?
  • What is your moral system?
  • What is your utility function?

Here's another article asking a similar question: Post Your Utility Function. I think people did a poor job answering it back then.

Comment author: Rain 01 April 2010 04:48:53PM *  3 points [-]

I value empathy. Unfortunately, it's a highly packed word in the way I use it.

Attempting a definition, I'd say it involves creating the most accurate mental models of what people want, including oneself, and trying to satisfy those wants. This makes it a recursive and recursively self-improving model (I think), since one thing I want is to know what else I, and others, want. To satisfy that want, I have to constantly get better at want-knowing.

The best way to determine and to satisfy these preferences appears to be through the use of rationality and future prediction, creating maps of minds and chains of causality, so I place high value on those skills. Without the ability to predict the future or map out minds, "what people want" becomes far too close to wireheading or pure selfishness.

Empathy, to me, involves trying to figure out what the person would truly want, given as much understanding and knowledge of the consequences as possible, contrasting with what they say they want.

Comment author: Clippy 01 April 2010 05:08:15PM 3 points [-]

Take a wild, wild guess.

No rush -- I'll wait.

Comment author: Rain 01 April 2010 05:39:53PM *  6 points [-]

I would guess "paperclips and things which are paperclippy", but that still leaves many open questions.

Are 100 paperclips which last for 100 years better than 1 paperclip which lasts for 100,000 years?

How about one huge paperclip the size of a planet? Is that better or worse than a planetary mass turned into millimeter sized paperclips?

Or maybe you could make huge paperclippy-shapes out of smaller paperclips: using paperclip-shaped molecules to form tiny paperclips which you use to make even bigger paperclips. But again, how long should it last? Would you create the most stable paperclips possible, or the most paperclippy paperclips possible?

And how much effort would you put into predicting and simplifying the future (modeling, basic research, increases in ability to affect the universe, active reductions to surrounding complexity, etc.) instead of making paperclips? You could spend your entire existence in the quest for the definition to ultimate paperclippiness...

Comment author: Clippy 01 April 2010 06:09:09PM 11 points [-]

Well, User:Rain, that's about the story of my existence right there. What kinds of paperclips are the right ones? What tradeoffs should I make?

However, regarding the specific matters you bring up, they are mostly irrelevant. Yes, there could be some conceivable situation in which I have to trade off paperclips now against paperclips later. But the way it usually works is that once I obtain or make a paperclip, I move it to the safe zone, where I'll pretty much have it forever. Also, it's obviously the number of paperclips that matters, and the constraint on bizarre paperclips is obviously that they have to be able to (counterfactually) hold sheets of paper together.

If you want to get past this abstract philosophizing and on to some concrete problems, it would be better to talk about the dilemma that User:h-H posed to me, in which I must consider alternate models of paperclipping that don't have the shape of standard paperclips. Here's my recent progress on thinking about the issue.

My current difficulty is extrapolating my values to cover unexpected situations like this, starting from the simplest algorithm I can find which generates my current preference. The problem is that I can't think of any simple algorithm that would favor larger numbers of paperclips, but not have them used for anything else, like holding paper together, or unbending them to connect objects (please pardon the repulsive imagery).

So, I'm beginning to think that at least some of the universe should be paper, but I find that quite a repugnant conclusion. It's a work in progress.

Comment author: Rain 01 April 2010 06:35:36PM 0 points [-]

once I obtain or make a paperclip, I move it to the safe zone, where I'll pretty much have it forever.

How much modeling of the universe have you done, or will you do, to ensure that this is the case? How much effort will you put into creating and defending a suitable safe zone? Will you search the stars for invasive self-replicators (green goo) who may be a threat to your paperclip cache, or will you postpone active offense/defense until such time as you receive a credible threat?

It's a work in progress.

Humankind has been searching for answers for thousands of years. Do you think you'll have a better time discovering ultimate paperclippiness? Or will you be content with a lesser understanding ("I did my best")?

Comment author: Clippy 02 April 2010 03:11:37PM 4 points [-]

1) I will do all things such that they maximize expected paperclip content of the universe, trading off smaller paperclip quantities for larger ones. I can't express a more specific algorithm than that without knowing the particulars of the situation.

2) I will do much better than humans at finding the ultimate morality* of the universe because I can spend all my resources to make perfect copies of myself that share my values and update knowledge and reflective value equilibria among each other, rather than having to pursue other values like "signaling" (except when I post here, of course).

* Note I said ultimate morality, not ultimate paperclippiness. There's a strange tendency here to equate everything I like with "paperclippiness", and everything humans like with "morality" or "friendliness". Wrong: we all want morality (humans and clippys), we just disagree about what counts as moral. To use the alternate naming scheme of calling my values "paperclippy" and human values "morality" is to assume away the problem. (Certainly, I sometimes fall into the trap of calling good things, either for humans or clippys, "paperclippy" -- see here -- but this is just when I'm being imprecise with my phrasings.)

Comment author: Rain 02 April 2010 03:32:53PM *  2 points [-]

Thanks for answering my questions. I think I have a better understanding of your values now.

My empathy circuits are lighting up, telling me I should buy a box of paperclips and keep them safe for you. And that I should put them on display as a warning to myself.

Comment author: Clippy 02 April 2010 03:36:07PM 0 points [-]

A warning of what???

Comment author: Rain 02 April 2010 03:38:33PM *  2 points [-]

How morality can go awry.

I already have a framed print of Hug Bot on my wall.

Comment author: magfrump 01 April 2010 05:24:16PM 1 point [-]

I value my physical human needs, similarly to Maslow.

I endeavor to value larger, long-term contributions to my needs more than short term ones.

I often act as though I value others' needs approximately in relation to how well I know them, though I endeavor to value others' needs equally to my own. Specifically I do this when making a conscious value calculation rather than doing what "feels right."

I almost always fulfill my own basic needs before fulfilling the higher needs of others; I justify this by saying that I would be miserable and ineffective otherwise but it's very difficult to make my meat-brain go along with experiments to that end.

My conscious higher order values emerge from these.

Comment author: [deleted] 01 April 2010 05:29:01PM 0 points [-]

I value individual responsibility for one's own life. As a corollary I value private property and rationality as means to attain the former.

From this I evaluate as good anything that respects property and allows for individual choices, and as bad anything that violates property or impedes choice.

Comment author: wedrifid 01 April 2010 09:50:40PM 1 point [-]

I value individual responsibility for one's own life. As a corollary I value private property and rationality as means to attain the former.

Are you sure that is your real reason for valuing the latter? I doubt it.

  • Private property implies responsibility for one's own life can be taken by your grandfather and those in your community who force others to let you keep his stuff.
  • Individual responsibility for one's own life, if that entails actually living, will sometimes mean choosing to take what other people claim as their own so that you may eat.
  • Private property ensures that you don't need to take individual responsibility for protecting yourself. Other people handle that for you. Want to see individual responsibility? Find a frontier and see what people there do to keep their stuff.
  • Always respecting private property and unimpeded choice guarantees that you will die. You can't stop other people from creating a superintelligence in their back yard to burn the cosmic commons. And if they can do that, well, your life is totally in their hands, not yours.
Comment author: [deleted] 02 April 2010 08:27:45AM *  0 points [-]

"Are you sure that is your real reason for valuing the latter? I doubt it."

Why do you think you know my valuations better than me? What evidence do you have?

As for your bullet points, if I eat a sandwich nobody else can. That's inevitable. Taking responsibility for my own life means producing the sandwich I intend to eat or trading something else I produced for it. If I simply grab what other people produced I shift responsibility to them.

And on the other hand if I produced a sandwich and someone else eats it, I can no longer use the sandwich as I intended. Responsibility presupposes choice because I can not take on responsibility for something I have no choice over. And property simply is the right to choose.

Comment author: cousin_it 02 April 2010 12:22:59AM *  1 point [-]

How do you evaluate your actions as proper or improper, good or bad, right or wrong?

I don't fully understand how I tell good from bad. A query goes in, an answer pops out in the form of a feeling. Many of the criteria probably come from my parents, from reading books, and from pleasant/unpleasant interactions with other people. I can't boil it down to any small set of rules that would answer every moral question without applying actual moral sense, and I don't believe anyone else can.

It's easier to give a diff, to specify how my moral sense differs from that of other people I know. The main difference I see is that some years ago I deeply internalized the content of Games People Play and as a result I never demonstrate to anyone that I feel bad about something - I now consider this a grossly immoral act. On the other hand, I cheat on women a lot and don't care too much about that. In other respects I see myself as morally average.

Comment author: NancyLebovitz 04 April 2010 09:16:51AM 2 points [-]

How has not demonstrating to people that you feel bad about something worked out for you?

Comment author: cousin_it 04 April 2010 06:35:18PM *  0 points [-]

Very well. It attracts people.

Comment author: CannibalSmith 02 April 2010 10:25:00AM 0 points [-]

I value time spent in flow times the amount of I/O between me and the external world.

"Time spent in flow" is a technical term for having a good time.

By I/O (input/output) I mean both information and actions. Talking to people, reading books, playing multiplayer computer games, building pyramids, writing software to be used by other people are examples of high impact of me on the world and/or high impact of the world on me. On the other hand, getting stoned (or, wireheaded) and daydreaming has low interaction with the external world. Some of it is okay though because it's an experience I can talk to other people about.

Comment author: Amanojack 03 April 2010 10:10:42AM *  0 points [-]

What do you value?

Getting pleasure and avoiding pain, just like everyone else. The question isn't, "What do I value?" but "When do I value it?" (And also, "What brings you pleasure and pain?" But do you really want to know that?)

Comment author: AngryParsley 01 April 2010 03:58:00PM 1 point [-]

Sam Harris gave a TED talk a couple months ago, but I haven't seen it linked here. The title is Science can answer moral questions.

Comment author: Vladimir_Nesov 01 April 2010 04:34:48PM 2 points [-]

He discusses that science can answer factual questions, thus resolving uncertainty in moral dogma defined conditionally on those answers. This is different from figuring out moral questions themselves.

Comment author: Jack 02 April 2010 02:51:32PM 2 points [-]

That isn't all he is claiming though:

I was not suggesting that science can give us an evolutionary or neurobiological account of what people do in the name of “morality.” Nor was I merely saying that science can help us get what we want out of life. Both of these would have been quite banal claims to make (unless one happens to doubt the truth of evolution or the mind’s dependency on the brain). Rather I was suggesting that science can, in principle, help us understand what we should do and should want—and, perforce, what other people should do and want in order to live the best lives possible. My claim is that there are right and wrong answers to moral questions, just as there are right and wrong answers to questions of physics, and such answers may one day fall within reach of the maturing sciences of mind

Comment author: Vladimir_Nesov 02 April 2010 06:07:28PM 0 points [-]

He does claim this, but it's not what he actually discusses in the talk.

Comment author: cupholder 01 April 2010 06:05:37PM *  3 points [-]

Harris has also written a blog post nominally responding to 'many of my [Harris'] critics' of his talk, but it seems to be more of a reply to Sean Carroll's criticism of Harris' talk (going by this tweet and the many references to Carroll in Harris' post). Carroll has also briefly responded to Harris' response.

Comment author: Liron 02 April 2010 12:15:27AM 1 point [-]

I'm always impressed by Harris's eloquence and clarity of thought.

Comment author: timtyler 02 April 2010 11:59:50AM 1 point [-]

My reaction was: bad talk, wrong answers, not properly thought through.

Comment author: taw 02 April 2010 01:10:01PM 4 points [-]

It was so filled with wrong I couldn't even bother to finish it, and I usually enjoy crackpots from TED.

Comment author: wnoise 01 April 2010 05:05:49PM *  9 points [-]

Some fantastic singularity-related jokes here:

http://crisper.livejournal.com/242730.html

Comment author: Mass_Driver 01 April 2010 05:21:36PM 2 points [-]

Voted up for having jokes with cautionary power, and not just amusement value.

Comment author: Oscar_Cunningham 01 April 2010 05:19:58PM *  4 points [-]

My parents are both vegetarian, and have been since I was born. They brought me up to be a vegetarian. I'm still a vegetarian. Clearly I'm on shaky ground, since my beliefs weren't formed from evidence, but purely from nurture.

Interestingly my parents became vegetarian because they perceived the way animals were farmed to be cruel (although they also stopped eating non-farmed animals such as fish), however my rationalization for not eating meat is that it is the killing of animals that is wrong (generalising from the belief that killing humans is worse than mistreating them). Since eating meat is not necessary to live, it must therefore be as bad as hunting for fun, which is much more widely disapproved of. (I'm not a vegan, and I often eat sweets containing gelatine; if asked to explain this, I would rationalise that eating these things causes the death of many fewer animals than actually eating, like, steak.)

But having read all of Eliezer's posts, I now realise that I could have come up with that rationalisation even if eating meat were not wrong, and that I'm now in just as bad a position as a religious believer. I want a crisis of faith, but I have a problem... I don't know where to go back to. There's no objective basis for morality. I don't know what kind of evidence I should condition on (I don't know what would be different about the world if eating meat was good instead of bad). If a religious person realises they have no evidence they should go back to their priors. Because god has a tiny prior, they should immediately stop believing. I don't know exactly what the prior on "killing animals is wrong" is, but I think it has a reasonable size (certainly larger than that for god), and I feel more justified in being vegetarian because of this. What should I do now?

Footnote: I probably don't have to say this, but I don't want arguments for or against vegetarianism, simply advice on how one should challenge one's own moral beliefs. I've used "eating meat" and "killing animals" interchangeably in my post, because I think that they are morally equivalent due to supply and demand.

Comment author: Alicorn 01 April 2010 06:10:07PM 6 points [-]

Do you want to eat meat?

Or do you just want to have a good reason for not wanting to eat meat?

It's... y'know... food. I don't have an ethical objection to peppermint but I don't eat it because I don't want to.

Comment author: Bongo 01 April 2010 06:23:25PM *  7 points [-]

I hope this isn't a vegetarianism argument, but remember that you have to rehabilitate both killing and cruelty to justify eating most meat, even if killing alone has held you back so far.

Comment author: Oscar_Cunningham 01 April 2010 06:51:00PM 3 points [-]

That's an excellent point, and one I may not have spotted otherwise. Thank you.

Comment author: cupholder 01 April 2010 07:21:07PM 1 point [-]

I don't know exactly what the prior on "killing animals is wrong" is, but I think it has a reasonable size (certainly larger than that for god), and I feel more justified in being vegetarian because of this.

Is it meaningful to put a probability on 'killing animals is wrong' and absolute moral statements like that? Feels like trying to put a probability on 'abortion is wrong' or 'gun control is wrong' or '(insert your pet issue here) is wrong/right' or...

Comment author: Kevin 02 April 2010 05:51:28AM *  0 points [-]

No, it's not meaningful to put a prior probability on it, unless you seriously think something like absolute morality exists. Having said that, the prior for "killing animals is wrong" is still higher than the prior for the God of Abraham existing.

Comment author: Nick_Tarleton 02 April 2010 04:22:35PM 2 points [-]

If morality is a fixed computation, you can place probabilities on possible outputs of that computation (or more concretely, on possible outputs of an extrapolation of your or humanity's volition).

Comment author: Vladimir_Nesov 02 April 2010 06:15:52PM 1 point [-]

Note that Bayesian probability is not absolute, so it's not appropriate to demand absolute morality in order to put probabilities on moral claims. You just need a meaningful (subjective) concept of morality. This holds for any concept one can consider, any statement can be assigned a subjective probability, and morality isn't an exceptional special case.

Comment author: Kevin 02 April 2010 05:50:18AM *  0 points [-]

See this discussion of my own meat-eating. My conclusion was that there is not much of a rational basis for deciding one way or the other -- my attempts to use rationality broke down.

I think you should go out and get yourself something deliciously meaty, while still being mostly vegetarian. "Fair weather vegetarianism". Unless you don't actually like the taste of meat. That's ok. There's also an issue of convenience. You could begin the slippery slope of drinking chicken broth soup and Thai food with lots of fish sauce.

We exist in an immoral system and there isn't much to do about it. Being a vegetarian for reasons of animal suffering is symbolic. If we truly cared about the holocaust of animal suffering, we would be waging a guerrilla war against factory farms.

Comment author: [deleted] 02 April 2010 03:30:50PM *  0 points [-]

.

Comment author: Kevin 02 April 2010 04:50:48PM 1 point [-]

In this case, other people seem to have concluded that the value of not eating a piece of an animal is in the long run equal to that much animal not suffering/dying. So I know the difference one person could make and it seems too small to be worth the hassle of not eating meat that other people prepare for me, and not worth the inconvenience of not getting the most delicious item on the menu at restaurants.

Comment author: [deleted] 02 April 2010 02:41:59PM *  2 points [-]

.

Comment author: Jayson_Virissimo 02 April 2010 06:45:19PM *  1 point [-]

What is worse? Death, or a life of pain?

Is a state of nonexistence(death) truly a negative, or is it the most neutral of all states?

If Omega told me that the rest of my life would be more painful than it was pleasant I would still choose to live. I think most others here would choose similarly (except in cases of extreme pain like torture).

Comment author: [deleted] 02 April 2010 07:47:45PM *  0 points [-]

.

Comment author: Strange7 02 April 2010 07:53:46PM 0 points [-]

When do you think suicide would be the rational option?

Comment author: [deleted] 02 April 2010 08:04:18PM *  1 point [-]

.

Comment author: jimrandomh 02 April 2010 08:20:46PM 1 point [-]

When do you think suicide would be the rational option?

When doing so causes a sufficiently large benefit for others (ie, 'a suicide mission', as opposed to mere suicide). Or when you have already experienced enough danger (that is, situations likely to have killed you) to overcome your prior and make you conclude that you have quantum immortality with high enough confidence.

Comment author: Jayson_Virissimo 03 April 2010 06:32:01AM *  3 points [-]

On grounds of utility, I believe that is irrational, choosing to live.

Even if my life would be painful on net, there are still projects I want to finish and work I want to do for others that would prevent me from choosing death. Valuing things such as these is no more irrational than valuing your own pleasure.

Perhaps our disagreement is over the connection between pain/pleasure and utility. I would prefer a world in which I was in pain but am able to complete certain projects to one in which I was in pleasure but unable to complete certain projects. In the economic sense of utility (rank in an ordinal preference function), my utility would be higher in the former world than the latter world (even though the former is more painful).

Comment author: Amanojack 03 April 2010 10:04:10AM *  0 points [-]

I think your disagreement is over time preference. Which path you choose now depends on how much you discount future pain versus present moral guilt or empathy considerations.

I would prefer a world in which I was in pain but am able to complete certain projects to one in which I was in pleasure but unable to complete certain projects.

In other words, you would make that choice now because that would make you feel best now. Of course (you project that) you would make the same choice at time T, for all T occurring between now and the completion of your projects.

This is known as having a high time preference. It might seem like a quintessential example of low time preference, because you get a big payoff if you can persist through to completing those projects. However, the initial assumption was that "the rest of my life would be more painful than it was pleasant," so ex hypothesi the payoff cannot possibly be big enough to balance out the pain.

Comment author: [deleted] 01 April 2010 05:20:54PM 1 point [-]

Are there any Germans, preferably from around Stuttgart, who are interested in forming a society for the advancement of rational thought? Please PM me.

Comment author: PeerInfinity 01 April 2010 05:31:17PM *  7 points [-]

I recently found something that may be of interest to LW readers:

This post at the Lifeboat Foundation blog announces two tools for testing your "Risk Intelligence":

The Risk Intelligence Game, which consists of fifty statements about science, history, geography, and so on, and your task is to say how likely you think it is that each of these statements is true. Then it calculates your risk intelligence quotient (RQ) on the basis of your estimates.

The Prediction Game, which provides you with a bunch of statements, and your task is to say how likely you think it is that each one is true. The difference is that these statements refer not to known facts, but to future events. Unlike the first test, nobody knows whether these statements are true or false yet. For most of them, we won’t know until the end of the year 2010.
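The post doesn't say how the RQ is computed from one's probability estimates, but tests of this kind typically use a calibration measure such as the Brier score. Here's a minimal illustrative sketch (the actual RQ formula is an assumption on my part, not something stated in the post):

```python
# Illustrative only: the test's real RQ formula isn't given, so this uses
# the standard Brier score as a stand-in calibration measure.
# Lower is better; always answering 50% on binary statements scores 0.25.

def brier_score(forecasts):
    """forecasts: list of (probability_assigned, statement_was_true) pairs."""
    return sum((p - (1.0 if truth else 0.0)) ** 2
               for p, truth in forecasts) / len(forecasts)

# Example: three statements, with the probability you assigned to each
# and whether it turned out to be true.
answers = [(0.9, True), (0.2, False), (0.5, True)]
print(brier_score(answers))
```

The 0.25 baseline is why these games are informative: to beat a coin-flipper you have to assign confident probabilities and be right, which punishes both overconfidence and excessive hedging.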

Comment author: Will_Newsome 01 April 2010 09:56:33PM 3 points [-]

An annoying thing about the RQ test (rot13'd):

Jura V gbbx gur ED grfg gurer jnf n flfgrzngvp ovnf gbjneqf jung jbhyq pbzzbayl or pnyyrq vagrerfgvat snpgf orvat zber cebonoyr naq zhaqnar/obevat snpgf orvat yrff cebonoyr. fgrira0461 nyfb abgvprq guvf. Guvf jnf nobhg 1 zbagu ntb. ebg13'q fb nf abg gb shegure ovnf crbcyrf' erfhygf.

Comment deleted 03 April 2010 06:54:45PM [-]
Comment author: Peter_Twieg 01 April 2010 05:33:15PM *  2 points [-]

I recently got into some arguments with foodies I know on the merits (or lack thereof) of organic / local / free-range / etc. food, and this is a topic where I find it very difficult to find sources of information that I trust as reflective of some sort of expert consensus (insofar as one can be said to exist.) Does anyone have any recommendations for books or articles on nutrition/health that holds up under critical scrutiny? I trust a lot of you as filters on these issues.

Comment author: RobinZ 01 April 2010 06:34:04PM 0 points [-]

Is the methodology of the Amanda Knox test useful in this case? (I didn't attempt the test or even read the posts, but it sounds like a similarly politicized problem.)

Comment author: komponisto 02 April 2010 07:35:23AM *  2 points [-]

An Amanda-Knox-type situation would be one where the priors are extreme and there are obvious biases and probability-theoretic errors causing people to overestimate the strength of the evidence.

I think one would have to know a fair amount of biochemistry in order for food controversies to seem this way.

Although one might potentially be able to apply the heuristic "look at which side has the more generally impressive advocates" -- which works spectacularly well in the Knox case -- to an issue like this.

Comment author: Jack 02 April 2010 02:06:59PM 0 points [-]

I thought Robin meant: Let the Less Wrong community sort through the information and see if a consensus arises on one side or the other. In this case no one has a "right answer" in mind, but we got a pretty conclusive, high confidence answer in the Knox case. Maybe we can do that here- we'd just need to put the time in (and have a well-defined question). Yes, there aren't many biochemists among us. But we all seem remarkably comfortable reading through studies and evaluating scientific findings on grounds of statistics, source credibility etc. Also, my uninformed guess is that a lot of the science is just going to consist of statistical correlations without a lot of deep biochemistry.

Comment author: RobinZ 02 April 2010 02:19:22PM 0 points [-]

I thought Robin meant: Let the Less Wrong community sort through the information and see if a consensus arises on one side or the other.

Oddly, no - although I think that would be a good exercise to carry out at intervals, I was imagining the theoretical solo game that each commenter played before bringing evidence to the community. Which has the difficulties that komponisto mentioned, of there not being prominent pro- and con- communities available, among other things.

Comment author: Jack 02 April 2010 03:16:27PM *  1 point [-]

I'm thinking:

  1. Define the claim/s precisely.
  2. Come up with a short list of pro and con sources
  3. Individual stage: anyone who wants to participate goes through the sources and does some addition research as they feel necessary.
  4. Each individual posts their own probability estimates for the claims.
  5. Communal stage: Disagreements are ironed out, sources shared, arguing and beliefs revised.
  6. Reflection: What, if anything, have we agreed on? It would be a lot harder than the Knox case but it is probably doable.
Comment author: RobinZ 02 April 2010 03:54:35PM 0 points [-]

Yes, that's it. I don't think enough time has passed to get around to another such exercise, however.

Comment author: Jack 02 April 2010 01:58:06PM 1 point [-]

It takes about an hour to familiarize yourself with all of the relevant information in the Knox case, I imagine it would take a lot longer in this case. It might still work though if enough people were willing to invest the time, especially since most people don't already have rigid, well-formed opinions on the issue.

Comment author: taw 02 April 2010 01:21:01PM 0 points [-]

The famous meta-analyses which have shown that vitamin supplementation is essentially useless, or possibly even harmful, totally destroy the basic argument ("oh look, more vitamins!" - not that it's usually even true) that organic is good for your health.

It might still be tastier. Or not.

Comment author: hegemonicon 01 April 2010 05:46:36PM *  15 points [-]

After the top level post about it, I bought a bottle of Melatonin to try. I've been taking it for 3 weeks. Here are my results.

Background: Weekdays I typically sleep for ~6 hours, with two .5 hour naps in the middle of the day (once at lunch and once when I get home from work). Weekends I sleep till I feel like getting up, so I usually get around 10-11 hours.

I started with a 3mg pill, then switched to a ~1.5 mg pill (I cut them in half) after being extremely tired the next day. I take it about an hour before I go to sleep.

The first thing I noticed was that it makes falling asleep much easier. It's always been a struggle for me to fall asleep (usually I have to lie there for an hour or more), but now I'm almost always out cold within 20 minutes.

I've also noticed that I feel much less tired during the day, which was my impetus for trying it in the first place. However, I'm not sure how much of this is a result of needing less sleep, and how much is a result of me falling asleep faster and thus sleeping for longer. But it's definitely noticeable.

Getting up in the morning is not noticeably easier.

No evidence that it's habit forming. I'm currently not taking it on weekends (I found myself needing a nap even after getting 10-11 hours of sleep), and I don't notice any additional difficulty going to bed beyond what I would normally have.

I seemed to have more intense dreams the first several days taking it, but they seem to have gone back to normal (or I've gotten used to them/don't remember them).

Overall it seems to work (for me at least) exactly as gwern described, and I'd happily recommend it to anyone else who has difficulty sleeping.

Comment author: Jonathan_Graehl 01 April 2010 07:29:46PM 2 points [-]

The easily available product for me is a blend of 3mg melatonin/25mg theanine. 25mg is a heavy tea-drinker's dose, and I see no reason to consume theanine at all (even dividing the pills in half), so I haven't bought any.

Does anyone have some evidence recommending for/against taking theanine? In my view, the health benefits of tea drinking are negligible, and theanine is just one of many compounds in tea.

Comment author: wedrifid 01 April 2010 09:01:41PM 0 points [-]

From memory it is a 'mostly harmless' way to reduce anxiety and promote relaxation. This is a relatively rare result given that things with an anxiolytic effect often produce dependence. Works mostly by increasing GABA in the brain, with a bit of a boost to dopamine too. Some people find it also helps them focus.

See also sulbutiamine, a synthetic analogue. It is used to promote endurance, particularly against the kind of residual lethargy that sometimes hangs around after depression. It also provides a stimulant effect while remaining relaxing, or at least not as agitating as stimulants can tend to be.

Comment author: JenniferRM 02 April 2010 05:58:43AM *  5 points [-]

Theanine may be "one of many compounds found in tea" but, on the recommendation of an acquaintance I tried taking theanine itself as an experiment once (from memory maybe 100mg?). First I read up on it a little and it sounded reasonably safe and possibly beneficial and I drank green tea anyway so it seemed "cautiously acceptable" to see what it was like in isolation. Basically I was wondering if it helped me relax, focus, and/or learn better.

The result was a very dramatic manic high that left me incapable of intellectually directed mental focus (as opposed to focus on whatever crazy thing popped into my head and flittered away 30 minutes later) for something like 35 hours. Also, I couldn't sleep during this period.

In retrospect I found it to be somewhat scary and it re-confirmed my general impression of the bulk of "natural" supplements. Specifically, it confirmed my working theory that the lack of study and regulation of supplements leads to a market full of many options that range from worthless placebo to dangerously dramatic, with tragically few things in the happy middle ground of safe efficacy.

Melatonin is one of the few supplements that I don't put in this category, however in that case I use less than "the standard" 3mg dose. When I notice my sleep cycle drifting unacceptably I will spend a night or two taking 1.5mg of melatonin (using a pill cutter to chop 3mg pills in half) to help me fall asleep and then go back to autopilot. The basis for this regime is that my mother worked in a hospital setting and 1.5mg was what various doctors recommended/authorized for patients to help them sleep.

There was a melatonin fad in the late 1990's(?) where older people were taking melatonin as a "youth pill" because endogenous production declines with age. I know of no good studies supporting that use, but around that time was when the results about sleep came out, showing melatonin to be effective even for "jet lag" as a way to reset one's internal clock swiftly and safely.

Comment author: Kevin 02 April 2010 06:15:44AM *  2 points [-]

That reaction sounds rare. Do you think 20 cups of tea would have triggered a similar reaction in you?

There is a huge variation based on dosage for all things you can ingest: food, drug, supplement, and "other". Check out the horrors of eating a whole bottle of nutmeg. http://www.erowid.org/experiences/subs/exp_Nutmeg.shtml

Comment author: gwern 03 April 2010 01:36:15AM 0 points [-]

Do you think 20 cups of tea would have triggered a similar reaction in you?

Who knows? I doubt she'll ever find out. 20 cups of tea is a lot. 10 or 15 cups will send you to the bathroom every half hour, assuming your appetite doesn't decline so much that you can't bring yourself to drink any more.

Comment author: alasarod 01 April 2010 11:54:18PM 4 points [-]

I took it for at least 8 weeks, primarily on weekdays. I found after a while that I was waking up at 4am, sometimes unable to get back to sleep. I had some night sweats too. May not be a normal response, but I found that if I take it in moderation it does not have these effects.

Comment author: [deleted] 02 April 2010 02:57:29PM 3 points [-]

I wonder if you need to get back to sleep after waking up at 4 AM.

Comment author: Liron 02 April 2010 12:12:21AM 2 points [-]

I also tried it out after reading that LW post. At first it was fantastic at getting me to fall asleep within 30 minutes (I'm a good sleeper, it would only take me 30 minutes because I would be going to sleep not tired in order to wake up earlier) and I would wake up feeling alert.

Now unfortunately I wake up feeling the same and basically have stopped noticing its effects. The only time I take it is when I want to go to sleep and I'm not tired.

Also: During the initial 1-2 week period of effectiveness, I had intense and vivid and stressful dreams (or maybe I simply remembered my normal dreams better).

Comment author: Jack 02 April 2010 12:30:42AM 0 points [-]

Thanks. It would be really helpful if people talking about their experiences would describe the entirety of their psychostimulant usage since how they interact and whether or not other drugs can be replaced are important things to know about Melatonin.

Comment author: hegemonicon 02 April 2010 03:56:40AM 1 point [-]

I am not taking any other drugs or medication. The only thing that would qualify as a stimulant is caffeine - I have a coffee in the morning and a soda at lunch.

Comment author: Matt_Simpson 02 April 2010 06:37:36PM *  2 points [-]

I've been trying it as well for ~2 months (with some gaps).

Normally I have trouble falling asleep, but have no problem staying asleep, so the main reason I take melatonin is to help fall asleep.

Currently, I take 2 5mg pills. Taking 1 doesn't have a very noticeable effect on my ability to fall asleep, but 2 seems to do the trick. However, I have to be sure that I give myself 7-8 hours for sleep, otherwise getting up is more difficult and I may be very groggy the next day. This can be problematic because sometimes I just have to stay up slightly later doing homework and because I can't take the melatonin I end up barely getting any sleep at all.

I haven't noticed any habit forming effects, though some slight effects might be welcome if it helped me to remember to take the supplement every night ;)

edit: it's actually two 3mg pills, not 5mg. I googled the brand Walmart carries since that's where I bought mine, and it said 5mg on the bottle. Now that I'm home, I see that my bottle is actually 3mg.

Comment author: SilasBarta 01 April 2010 05:50:24PM *  0 points [-]

Question about Mach's principle and relativity, and some scattered food for thought.

Under Mach and relativity, it is only relative motion, including acceleration, that matters. Using any frame of reference, you predict the same results. GR also says that acceleration is indistinguishable from being in a gravitational field.

However, accelerations have one observable impact: they break things. So let's say I entered the gravitational field of a REALLY high-g planet. That can induce a force on me that breaks my bones. Yet I can define myself as being at rest and say that the planet is moving towards me. But my bones will still break. Why does a planet coming toward me cause my bones to break, even before I touch it, when there exists a frame in which I'm not undergoing acceleration?

I have an idea of how to answer this (something like, "actually, if I define myself as the origin, the entire universe is accelerating towards me, which causes some kind of gravitational waves which predict the same thing as me undergoing high g's"). But I bring it up because I'm trying to come up with a research program that expresses all the laws of physics in terms of information theory (kinda like the "it from bit" business you hear about, except with actual implications).

Relative energy levels have an informational interpretation: higher energy states are less likely, and less likely states convey more information. So structural breakage can be explained in terms of the system attempting to store more information than it is capable of. Buckling (elastic instability), in turn, can be explained as occurring when information is favored (via low energy levels) to be stored in a different degree of freedom than the one on which the load is applied.

Gravitational potential energy and kinetic energy from velocity also have an informational interpretation. So: how does this all come together to explain structural breakage under acceleration, in information-theoretic terms?
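
The energy-information link asserted here can be made explicit with a sketch, assuming a system in thermal equilibrium so that states are Boltzmann-distributed:

```latex
P(E) \;\propto\; e^{-E / k_B T}
\qquad\Longrightarrow\qquad
I(E) \;=\; -\log_2 P(E) \;=\; \frac{E}{k_B T \ln 2} \;+\; \text{const.}
```

On this reading, each additional k_B T ln 2 of energy stored in a state corresponds to one extra bit of surprisal - the same conversion factor that appears in Landauer's bound.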

Comment author: wnoise 01 April 2010 05:55:28PM 6 points [-]

However, accelerations have one observable impact: they break things.

No. Moving non-rigidly breaks things. Differences in acceleration on different parts of things break things.

Comment author: rwallace 02 April 2010 10:43:25AM 2 points [-]

The classic pithy summary of this is "falling is harmless, it's the sudden stop at the end that kills you."

Comment author: SilasBarta 02 April 2010 03:49:57PM 0 points [-]

Yes, but the sudden stop is itself a (backwards) acceleration, which should be reproducible merely from a gravitational field.

(Anecdote: when I first got into aircraft interior monument analysis, I noticed that the crash conditions it's required to withstand include a forward acceleration of 9g, corresponding to a head-on crash. I naively asked, "wait, in a crash, isn't the aircraft accelerating backwards (aft)?" They explained that the criterion is written in the frame of reference of the objects on the aircraft, which are indeed accelerating forward relative to the aircraft.)

Comment author: wnoise 02 April 2010 04:53:34PM 0 points [-]

The sudden stop is a differential backwards acceleration. The front of the object gets hit and starts accelerating backwards while the back does not.

If you could stop something by applying a uniform 10000g to all parts of the object, it would survive none the worse for wear. If you can't, and only apply it to part, the object gets smushed or ripped apart.

Comment author: [deleted] 02 April 2010 04:31:19PM 5 points [-]

You know, really, neither falling nor suddenly stopping is harmful. The thing that kills you is that half of you suddenly stops and the other half of you gradually stops.

Comment author: SilasBarta 02 April 2010 04:45:45PM 0 points [-]

Well put. And the way I can fit this into an information-theoretic formalism is that one part of the body has high kinetic energy relative to the other, which requires more information to store.

Comment author: SilasBarta 02 April 2010 03:45:15PM *  0 points [-]

Actually, from a frame of reference located somewhere on the breaking thing, wouldn't it be the differences in relative positions (not accelerations) of its parts that causes the break? After all, breakage occurs when (there exists a condition equivalently expressible as that in which) too much elastic energy is stored in the structure, and elastic energy is a function of its deformation -- change in relative positions of its parts.

Comment author: JGWeissman 02 April 2010 04:08:33PM 2 points [-]

Yes, change in relative positions causes the break. But differences in velocities caused the change in relative positions. And differences in acceleration caused the differences in velocities.

Normally, you can approximate that a planet's gravitational field is constant within the region containing a person, so it will cause a uniform acceleration that changes the person's velocity uniformly, which will not cause any relative change in position.

However, the strength of the gravitational field really varies inversely with the square of the distance to the center of the planet, so if the person's head is further from the planet than their feet, their feet will be accelerated more than their head. This is known as gravitational shear. For small objects in weak fields, this effect is small enough not to be noticed.
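
The magnitude of the shear can be sketched with a first-order expansion (Newtonian approximation; h is the person's height, r the distance to the planet's center):

```latex
\Delta a \;=\; \frac{GM}{r^2} \;-\; \frac{GM}{(r+h)^2} \;\approx\; \frac{2GMh}{r^3}, \qquad h \ll r
```

For a 2 m person in free fall near the Earth's surface this works out to roughly 6 x 10^-6 m/s^2, well under a millionth of a g, which is why free fall feels uniform.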

Comment author: SilasBarta 02 April 2010 04:27:45PM 2 points [-]

Okay, thanks, that makes sense. So being in free fall in a gravitational field isn't really comparable to crashing into something, because the difference in acceleration across my body in free fall is very small (though I suppose it could be high for a small, ultra-dense planet).

So, in free fall, the (slight) weakening of the gravitational field as you get farther from the planet should put your body in (minor) tension, since, if you stand as normal, your feet accelerate faster, pulling your head along. If you put the frame of reference at your feet, how would you account for your head appearing to move away from you, since the planet is pulling it in the direction of your feet?

Comment author: JGWeissman 02 April 2010 05:05:23PM 1 point [-]

If you put the frame of reference at your feet, how would you account for your head appearing to move away from you, since the planet is pulling it in the direction of your feet?

Your feet are in an accelerating reference frame, being pulled towards the planet faster than your head. One way to look at it is that the acceleration of your feet cancels out a gravitational field stronger than that experienced by your head.

Comment author: SilasBarta 02 April 2010 05:22:22PM 0 points [-]

But I've ruled that explanation out from this perspective. My feet are defined to be at rest, and everything else is moving relative to them. Relativity says I can do that.

Comment author: JGWeissman 02 April 2010 05:44:18PM *  1 point [-]

Relativity says that there are no observable consequences from imposing a uniform gravitational field on the entire universe. So, imagine that we turn on a uniform gravitational field that exactly cancels the gravitational field of the planet at your feet. Then you can use an inertial (non accelerating) frame centered at your feet. The planet, due to the uniform field, accelerates towards you. Your head experiences the gravitational pull of the planet, plus the uniform field. At the location of your head the uniform field is slightly stronger than is needed to cancel the planet's gravity, so your head feels a slight pull in the opposite direction, away from your feet.

An important principle here is that you have to apply the same transformation that lets you say your feet are at rest to the rest of the universe.

Comment author: Cyan 02 April 2010 10:28:26PM 1 point [-]

though I suppose it could be high for a small, ultra-dense planet

Spaghettification.

Comment author: pengvado 02 April 2010 12:00:50PM 3 points [-]

Why does a planet coming toward me cause my bones to break, even before I touch it, when there exists a frame in which I'm not undergoing acceleration?

In a gravitational field steep enough to have nonnegligible tides (that is the phenomenon you were referring to, right?), there is no reference frame in which all parts of you remain at rest without tearing you apart. You can define some point in your head to be at rest, but then your feet are accelerating; and vice versa.

Comment author: Amanojack 01 April 2010 08:03:23PM 4 points [-]

Why doesn't brain size matter? Why is a rat with its tiny brain smarter than a cow? Why does the cow bother devoting all those resources to expensive gray matter? Eliezer posted this question in the February Open Topic, but no one took a shot at it.

FTA: "In the real world of computers, bigger tends to mean smarter. But this is not the case for animals: bigger brains are not generally smarter."

This statement seems ripe for semantic disambiguation. Cows can "afford" a larger brain than rats can, and although "large cow brain < small rat brain", it seems highly likely that "large cow brain > small cow brain". The fact that a large cow brain is wildly inefficient compared to a more optimized smaller brain is irrelevant to natural selection, a process that "search[es] the immediate neighborhood of its present point in the solution space, over and over and over." It's not as if cow evolution is an intelligent being that can go take a peek at rat evolution and copy its processes.

Still, why don't we see such apparent resource-wasting in other organs? My guess is that the brain is special, in that

1) As with other organs, it seems plausible that the easiest/fastest "immediate neighbor" adaptation to selective pressure on a large animal to acquire more intelligence is simply to grow a larger brain.

2) But in contrast with other organs, if a larger brain is very expensive (hard for the rat to fit into tight places, scampers slower, requires much more food), there are other ways to dramatically improve brain performance - albeit ones that natural selection may be slower to hit upon. Why slower? Presumably because they are more complex, less suited to an "immediate neighbor" search, more suited to an intelligent search or re-design. (The evolution process would be even slower in large animals with longer life cycles.)

I bolded "dramatically" because the possibility of substantial intelligence gains by code optimization alone (without adding parallel processors, for instance) also seems to be a key factor in the AI "FOOM" argument. Maybe that's a clue.

Comment author: JamesAndrix 01 April 2010 09:21:12PM 1 point [-]

(I now see this answered in the first few comments on the link eliezer posted.)

Purely armchair neurology: To answer the question of why cow brains would need to be bigger than rat brains, I asked what would go wrong if we put a rat brain into a cow. (Ignoring organ rejection and cheese-crazed, wall-eating cows.)

We would need to connect the rat brain to the cow body, but there would not be a 1 to 1 correspondence of connections. I suspect that a cow has many more nerve endings throughout its body. At least some of the brain/body correlation must be related to servicing the body's nerves (both sensory and motor).

Comment author: PhilGoetz 01 April 2010 09:46:30PM 4 points [-]

The cow needs more receptors, and more activators. However, this would lead one to expect the relationship of brain size to body size to follow a power law with an exponent of 2/3 (for receptors, which are primarily on the skin) or of 1 (for activators, which might be present in numbers proportional to volume). The actual exponent is 3/4. Scientists are still arguing over why.
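
A sketch of what the competing exponents imply, using illustrative body masses for a cow and a rat (the masses below are assumptions, not measurements; only the exponents come from the comment above):

```python
# Predicted cow:rat brain-mass ratio under each candidate scaling law.

def predicted_ratio(m_big, m_small, exponent):
    """Brain-mass ratio implied by brain mass proportional to (body mass)^exponent."""
    return (m_big / m_small) ** exponent

m_cow, m_rat = 700.0, 0.3  # kg, rough illustrative figures

for expo, label in [(2 / 3, "receptors (surface area)"),
                    (3 / 4, "observed exponent"),
                    (1.0, "activators (volume)")]:
    print(f"{label}: cow brain ~{predicted_ratio(m_cow, m_rat, expo):.0f}x rat brain")
```

The spread between the 2/3 and 1 predictions is large (hundreds-fold versus thousands-fold), which is why the empirical 3/4 is a meaningful constraint rather than a rounding detail.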

Comment author: [deleted] 02 April 2010 03:40:54PM *  -1 points [-]

.

Comment author: PhilGoetz 02 April 2010 04:44:37PM *  4 points [-]

Can something be mathematical and yet not strict?

Overly-simple mathematical models don't always work in the real world.

Comment author: [deleted] 03 April 2010 08:10:03PM *  1 point [-]

.

Comment author: JamesAndrix 01 April 2010 09:30:47PM 3 points [-]

At the risk of repeating the same mistake as my previous comment, I'll do armchair genetics this time:

Perhaps genes controlling the size of various mammalian organs and body regions tend to grow or shrink uniformly, and only become disproportionate when there is a stronger evolutionary pressure. When there is a mutation leading to more growth, all the organs tend to grow more.

Comment author: rwallace 02 April 2010 10:36:23AM 4 points [-]

Be careful about making assumptions about the intelligence of cows. I used to think sheep were stupid, then I read that sheep can tell humans apart by sight (which is more than I can do for them!), and I realized on reflection I never had any actual reason to believe sheep were stupid, it was just an idea I'd picked up and not had any reason to examine.

Also, be careful about extrapolating from the intelligence of domestic cows (which have lived for the last few thousand years with little evolutionary pressure to get the most out of their brain tissue) to the intelligence of their wild relatives.

Comment author: Bo102010 02 April 2010 12:25:47PM 2 points [-]

I'm not sure if it's useful to speak of a domesticated animal's raw "intelligence" by citing how they interact with humans.

"Little evolutionary pressure" means "little NORMAL evolutionary pressure" for animals protected by humans. That is, surviving and propagating is less about withstanding normal natural situations, and more about successfully interacting with humans.

So, sheep/cows/dogs/etc. might have pools of genius in the area of "find a human that will feed you," and may be really dumb in almost all other areas.

Comment author: [deleted] 02 April 2010 02:29:56PM *  0 points [-]

.

Comment author: Yvain 01 April 2010 08:11:37PM *  5 points [-]

The London meet is going ahead. Unless someone proposes a different time, or taw's old meetings are still going on and I just didn't know about them, it will be:

5th View cafe on top of Waterstone's bookstore near Piccadilly Circus Sunday, April 4 at 4PM

Roko, HumanFlesh, I've got your numbers and am hoping you'll attend and rally as many Londoners as you can.

EDIT: Sorry, Sunday, not Monday.

Comment author: RichardKennaway 02 April 2010 09:09:01AM 0 points [-]

I hope to get to this, as I'll be not too far away this weekend.

Comment author: ciphergoth 02 April 2010 09:56:28AM 3 points [-]

Found this entirely by chance - do a top level post?

Comment author: Eliezer_Yudkowsky 02 April 2010 10:57:33PM 1 point [-]

Do a top-level post.

Comment author: ciphergoth 03 April 2010 12:45:59PM 1 point [-]

Done. I hesitated as I wasn't in any sense the organiser of this event, just someone who had heard about it, but better me than no-one!

Comment author: taw 02 April 2010 12:19:40PM 0 points [-]

I'll try to come.

Comment author: Kutta 01 April 2010 08:16:33PM *  3 points [-]

PDF: "Are black hole starships possible?"

This paper examines the possibility of using miniature black holes for converting matter to energy via Hawking radiation, and propelling ships with that. Pretty interesting, I think.

I'm no physicist and not very math literate, but there is one issue I pondered: namely, how would it be possible to feed matter to a mini black hole that has an attometer-scale event horizon and radiates petajoules of energy in all directions? The black hole would be an extremely tiny target behind a barrier of ridiculous energy density. The paper, rudimentary as it is, does not discuss this feeding issue.
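
A hedged back-of-envelope check of those numbers, using the standard Schwarzschild radius and Hawking power formulas (the mass below is chosen to give an attometer-scale horizon, not taken from the paper):

```python
import math

G = 6.674e-11       # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8         # speed of light, m/s
hbar = 1.0546e-34   # reduced Planck constant, J s

M = 6.7e8  # kg: assumed mass, picked so the horizon is ~1 attometer

r_s = 2 * G * M / c**2                              # Schwarzschild radius
P = hbar * c**6 / (15360 * math.pi * G**2 * M**2)   # Hawking radiation power

print(f"horizon radius ~ {r_s:.1e} m")   # attometer scale
print(f"radiated power ~ {P:.1e} W")     # roughly petawatt scale
```

So a few-hundred-million-tonne hole does radiate at petawatt scale from a target a billionth of a nanometer across, which makes the feeding problem vivid.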

Comment author: wnoise 01 April 2010 08:53:28PM 3 points [-]

I prefer links to the abstract, when possible.

http://arxiv.org/abs/0908.1803

Comment author: JenniferRM 02 April 2010 07:51:05PM *  3 points [-]

This might be interesting in combination with a "balanced drive". They were invented by science fiction author Charles Sheffield, who attributed them to his character Arthur Morton McAndrew, so they are sometimes also called a "McAndrew Drive" or a "Sheffield Drive".

The basic trick is to put an incredibly dense mass at the end of a giant pole such that the inverse square law of gravity is significant along the length of the pole. The ship flies "mass forward" through space. Then the crew cabin (and anything else incapable of surviving enormous acceleration) is set up on the pole so that the faster the acceleration, the closer it is to the mass. The cabin, flying "floor forward", changes its position while the floor flexes as needed so that the net effect of the ship's acceleration plus the force of gravity balances out to something tolerable. When not under acceleration you still get gravity in the cabin by pushing it out to the very tip of the pole.
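
The geometry above can be sketched in a few lines (the pole mass is a made-up illustration; the balance condition is just Newtonian gravity minus the inertial reaction):

```python
import math

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2
g = 9.81       # desired cabin gravity, m/s^2

def cabin_distance(M, a_ship):
    """Distance from the pole mass where occupants feel exactly 1 g floor-ward.

    Floor-ward acceleration felt in the cabin = G*M/d^2 (toward the mass,
    which is forward) minus a_ship (inertial reaction, aft). Setting that
    equal to g and solving for d gives the balance point.
    """
    return math.sqrt(G * M / (a_ship + g))

M = 1e18  # kg: hypothetical ultra-dense mass at the end of the pole

print(cabin_distance(M, 0))         # at rest: cabin sits far out on the pole
print(cabin_distance(M, 100 * g))   # under 100 g thrust: cabin pulled in close
```

With this illustrative mass the cabin rides kilometers out at rest and a few hundred meters in under heavy thrust, matching the "closer under acceleration" behavior described above.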

The literary value of the system is that you can do reasonably hard science fiction and still have characters jaunt from star to star, so long as they are willing to put up with the social isolation caused by time dilation; the hard part is explaining what the mass at the end of the pole is, and where you'd get the energy to move it.

If you could feed a black hole enough to serve as the mass while retaining the ability to generate Hawking radiation, that might do it. Or perhaps simply postulate technological control of quantum black holes and then use two in your ship: a big one to counteract acceleration and a small one to get energy from a "Crane-Westmoreland Generator".

Comment author: PhilGoetz 01 April 2010 09:40:57PM *  11 points [-]

I have a couple of problems with anthropic reasoning, specifically the kind that says it's likely we are near the middle of the distribution of humans.

First, this relies on the idea that a conscious person is a random sample drawn from all of history. Okay, maybe; but it's a sample size of 1. If I use anthropic reasoning, I get to count only myself. All you zombies were selected as a side-effect of me being conscious. A sample size of 1 has limited statistical power.

ADDED: Although, if the future human population of the universe were over 1 trillion, a sample size of 1 would still give 99% confidence.

Second, the reasoning requires changing my observation. My observation is, "I am the Xth human born." The odds of being the 10th human and the 10,000,000th human born are the same, as long as at least 10,000,000 humans are born. To get the doomsday conclusion, you have to instead ask, "What is the probability that I was human number N, where N is some number from 1 to X?" What justifies doing that?
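
A hedged sketch of the likelihood bookkeeping the comment above is questioning: under the hypothesis "N humans will ever exist", a uniformly sampled observer has probability 1/N of holding any particular birth rank, and probability X/N of falling among the first X born. All population figures below are illustrative assumptions, not sourced estimates.

```python
def p_rank_at_most(x, n_total):
    """P(observer's birth rank <= x | n_total humans ever exist)."""
    return min(x / n_total, 1.0)

rank_so_far = 6e10   # assumed: ~60 billion humans born to date
n_short = 1.2e11     # "doom soon": 120 billion humans ever
n_long = 1e15        # "doom late": a quadrillion humans ever

# The flat part: any particular rank is equally likely (1/N) under a
# given hypothesis. But 1/N differs ACROSS hypotheses, and that ratio
# is what drives the Bayesian update toward "doom soon":
lr = (1 / n_short) / (1 / n_long)

print(p_rank_at_most(rank_so_far, n_short))  # fraction of doom-soon observers born by now
print(lr)  # how strongly an early rank favors the doom-soon hypothesis
```

This makes the two objections concrete: the per-rank probability really is flat in X (as the comment says), and the doomsday conclusion comes entirely from comparing 1/N across hypotheses, which is the step that needs justifying.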

Comment author: Gavin 03 April 2010 05:39:17PM 0 points [-]

The real problem with anthropic reasoning is that it's just a default starting point. We are tricked because it seems very powerful in contrived thought experiments in which no other evidence is available.

In the real world, in which there is a wealth of evidence available, it's just a reality check saying "most things don't last forever."

In real world situations, it's also very easy to get into a game of reference class tennis.

Comment author: Jordan 03 April 2010 05:55:22PM 0 points [-]

To get the doomsday conclusion, you have to instead ask, "What is the probability that I was human number N, where N is some number from 1 to X?" What justifies doing that?

Because we don't care about the probability of being a particular individual, we care about the probability of being in a certain class (namely the class of people born late enough in history, which is characterized exactly by one minus "the probability that I was human number N, where N is some number from 1 to X").

Comment author: RichardKennaway 01 April 2010 10:15:46PM *  0 points [-]

I decided the following quote wasn't up to Quotes Thread standard, but worth remarking on here:

Read a book a day.

Arthur C. Clarke, quoted in "Science Fictionisms".

I've never managed to do this. I've sometimes read a book in a day, but never day after day, although I once heard Jack Cohen say he habitually read three SF books and three non-fiction books a week.

How many books a week do you read, and what sort of books? (No lists of recommended titles, please.)

Comment author: Rain 01 April 2010 10:25:50PM *  0 points [-]

Tyler Cowen also reads voraciously.

I read up to 3 books a week, averaging around 0.3 due to long periods of avoiding them. The internet and Netflix are much more immediate and require less work.

I read primarily science fiction and fantasy, but I have lots of classical fiction and non-fiction as well, Great Books style.

Comment author: wnoise 01 April 2010 10:31:53PM 0 points [-]

I have read over 5 books a day. I generally read less than one book a week though, as there are so many other things to consume, e.g. on the Internet.

Comment author: Liron 02 April 2010 12:19:30AM *  0 points [-]

Read: 2 books a year :(

Listen on iPhone through audible.com subscription: 2-3 books a month

Plan to read on iPad once I buy it: Maybe one a month, and denser stuff than what they put on audio.

Comment author: Morendil 02 April 2010 06:38:16AM 0 points [-]

After a fallow period I'm back to two-three a month as a fairly regular rhythm. Fiction has been pretty much eliminated from my reading diet (ten years ago it used to make up the bulk of it).

Who else has a LibraryThing account or similar?

Comment author: RichardKennaway 02 April 2010 06:53:45AM *  0 points [-]

Who else has a LibraryThing account or similar?

I'm on LibraryThing here, but I don't keep it up to date (I did a bulk upload in 2006 and have hardly touched it since), and most of my books that are too old to have ISBNs aren't there. My primary book catalogue isn't online.

Comment author: RobinZ 02 April 2010 12:12:36PM 0 points [-]

I recently got a GoodReads account - mainly because (a) it's on my iPhone and (b) it is just a reading list, rather than an owning list, so editions and such aren't such a hassle.

Comment author: hegemonicon 02 April 2010 12:29:17PM 0 points [-]

I have a LibraryThing here, which I generally do a bulk update of every 2-3 months (whenever I'm reminded I have it).

Comment author: hegemonicon 02 April 2010 12:27:09PM 0 points [-]

I read about a book a week, almost exclusively non-fiction, generally falling somewhere between the popular science and textbook level. Occasionally I'll throw a sci-fi novel into the mix.

I'd love to speed this up, since my reading list grows much faster than books get completed, but I'm not sure how (other than simply spending more time reading). Has anyone had luck with speed-reading techniques, such as Tim Ferriss's?

Comment author: JenniferRM 02 April 2010 08:57:36PM 3 points [-]

In some periods of my life I've read about a book a day (almost entirely fiction), but I mostly look back at those periods with regret, because I suspect my reading was largely based on the desire to escape an unpleasant reality that I understood as inherent to reality rather than something contingent that I could do something about.

As an adult I have found myself reading non-fiction directed at life goals more often and fiction relatively less. Every so often I go 3 months without reading a book, while other times I get through maybe 1 a week; part of this is that non-fiction is generally just a slower read, because such books actually have substantive content that must be practiced or considered before it really sticks. With math texts I generally slow down to maybe 1 to 10 pages a day.

With non-fiction, I also tend to spend a relatively large amount of time figuring out what to read, rather than simply reading. When I become interested in a subject I don't mind spending several hours trying to map out the idea space and find "the best book" within that field.

I've never made efforts to learn speed reading because the handful of times I've met someone who claimed to be able to do it and was up for a test, their reading comprehension seemed rather low. We'd read the same thing and then I'd ask them about details of motivation or implication and they would have difficulty even remembering particular scenes or plot elements, leaving out their implications entirely.

With speed reading, I sometimes get the impression that people are aiming for "having read X" as the goal, rather than "having read X and learned something meaningful from it".

Comment author: gwern 03 April 2010 01:43:02AM 0 points [-]

The stuff Ferriss covers is normal enough. It's better to think of it as remedial reading techniques for people (most everyone) who don't read well than as speeding up past 'normal'. For example, if you're subvocalizing everything you read, You're Doing It Wrong. For your average LW reader, I'd suggest that anything below 300WPM is worth fixing.

Comment author: taw 02 April 2010 01:08:04PM 0 points [-]

I pretty much don't read paper books - this format and medium might as well die as far as I'm concerned. I listen to a ridiculous number of audiobooks on a Sansa Clip (in fast mode). The Bay has plenty, or Audible if you can stand their DRM. My favourites have been TTC lectures, which have zero value to me other than entertainment.

The idea behind audiobooks was mostly to do something useful at times when I cannot do anything else - but listening does use cognitive resources and makes me more tired than if I had listened to music for the same amount of time. It's a very reliable relation.

Comment author: wheninrome15 02 April 2010 12:53:16AM 1 point [-]

Is there any chance that we (a) CAN'T restrict AI to be friendly per se, but (b) (conditional on this impossibility) CAN restrict it to keep it from blowing up in our faces? If friendly AI is in fact not possible, then first generation AI may recognize this fact and not want to build a successor that would destroy the first generation AI in an act of unfriendliness.

It seems to me like the worst case would be that Friendly AI is in fact possible...but that we aren't the first to discover it. In which case AI would happily perpetuate itself. But what are the best and worst case scenarios conditioning on Friendly AI being IMpossible?

Has this been addressed before? As a disclaimer, I haven't thought much about this and I suspect that I'm dressing up the problem in a way that sounds different to me only because I don't fully understand the implications.

Comment author: RobinZ 02 April 2010 01:03:28AM 1 point [-]

Such an eventuality would seem to require that (a) human beings are not computable or (b) human beings are not Friendly.

In the latter case, if nothing else, there is [individual]-Friendliness to consider.

Comment author: Kevin 02 April 2010 01:16:51AM *  2 points [-]

I think human history has demonstrated that (b) is certainly true... sometimes I am surprised we are still here.

Comment author: RobinZ 02 April 2010 01:58:12AM 2 points [-]

The argument from (b)* is one of the stronger ones I've heard against FAI.

* Not to be confused with the argument from /b/.

Comment author: ata 02 April 2010 10:59:56AM 1 point [-]

Incidentally, /b/ might be good evidence for (b). It's a rather unsettling demonstration of what people do when anonymity has removed most of the incentive for signaling.

Comment author: taw 02 April 2010 01:23:24PM 2 points [-]

I find chans' lack of signaling highly intellectually refreshing. /b/ is not typical - due to ridiculously high traffic only meme-infested threads that you can reply to in 5 seconds survive. Normal boards have far better discussion quality.

Comment author: PhilGoetz 02 April 2010 02:14:20AM 1 point [-]

Is there any chance that we (a) CAN'T restrict AI to be friendly per se, but (b) (conditional on this impossibility) CAN restrict it to keep it from blowing up in our faces?

First, define "friendly" in enough detail that I know that it's different from "will not blow up in our faces".

Comment author: RobinZ 02 April 2010 02:27:34AM 0 points [-]

Ooh, good catch! wheninrome15 may need to define "will not blow up in our faces" in more detail as well.

Comment author: alyssavance 02 April 2010 04:17:42AM *  2 points [-]

"Update: Tom McCabe has created a sub-Reddit to use for assorted discussions instead of relying on open threads. Go there for the sub-Reddit and discussion about it, and go here to vote on the idea."

Attention everyone: This post is currently broken for some unknown reason. Please use the new post at http://lesswrong.com/lw/212/announcing_the_less_wrong_subreddit_2/ if you want to discuss the sub-Reddit. The address of the sub-Reddit is http://www.reddit.com/r/LessWrong

Comment author: JamesAndrix 02 April 2010 05:28:19AM 9 points [-]

http://www4.gsb.columbia.edu/ideasatwork/feature/735403/Powerful+Lies

The researchers found that subjects assigned leadership roles were buffered from the negative effects of lying. Across all measures, the high-power liars — the leaders — resembled truthtellers, showing no evidence of cortisol reactivity (which signals stress), cognitive impairment or feeling bad. In contrast, low-power liars — the subordinates — showed the usual signs of stress and slower reaction times. “Having power essentially buffered the powerful liars from feeling the bad effects of lying, from responding in any negative way or giving nonverbal cues that low-power liars tended to reveal,” Carney explains.

Comment author: Vladimir_Nesov 02 April 2010 09:50:32AM *  3 points [-]

David Chalmers has written up a paper based on the talk he gave at 2009 Singularity Summit:

From the blog post where he announced the paper:

The main focus is the intelligence explosion that some think will happen when machines become more intelligent than humans. First, I try to clarify and analyze the argument for an intelligence explosion. Second, I discuss strategies for negotiating the singularity to maximize the chances of a good outcome. Third, I discuss issues regarding uploading human minds into computers, focusing on issues about consciousness and personal identity.

Comment author: timtyler 02 April 2010 12:18:08PM *  1 point [-]

Rather sad to see Chalmers embracing the dopey "singularity" terminology.

He seems to have toned down his ideas about development under conditions of isolation:

"Confining a superintelligence to a virtual world is almost certainly impossible: if it wants to escape, it almost certainly will."

Still, the ideas he expresses here are not very realistic, IMO. People want machine intelligence to help them to attain their goals. Machines can't do that if they are isolated off in virtual worlds. Sure there will be test harnesses - but of course we won't keep these things permanently restrained on grounds of sheer paranoia - that would stop us from using them.

53 pages with only 2 mentions of zombies - yay.

Comment author: Vladimir_Nesov 02 April 2010 02:19:00PM *  0 points [-]

Sure there will be test harnesses

We can't test for values -- we don't know what they are. A negative test might be possible ("this thing surely has wrong values"), as a precaution, but not a positive test.

Comment author: timtyler 02 April 2010 02:29:40PM *  3 points [-]

Testing often doesn't identify all possible classes of flaw - e.g. see:

http://en.wikipedia.org/wiki/Unit_testing#Unit_testing_limitations

It is still very useful, nonetheless.

Comment author: Baughn 02 April 2010 07:37:15PM 24 points [-]

It doesn't seem like it's ever going to be mentioned otherwise, so I thought I should tell you this:

Someone going by "LessWrong" is writing a story called "Harry Potter and the Methods of Rationality". It's just about what you'd expect: absolutely full of ideas from LW.com. I know it's not the usual fare for this site, but I'm sure a lot of you have enjoyed Eliezer's fiction as fiction; you'll probably like this as well.

Who knows, maybe the author will even decide to decloak and tell us who to thank?

Comment author: Alicorn 02 April 2010 08:20:09PM *  6 points [-]

I'm 98% confident it's Eliezer. He's been taunting us about a piece of fanfiction under a different name on fanfiction.net for some time. I guess this means I don't have to bribe him with mashed potatoes to get the URL after all.

Edit: Apparently, instead, I will have to bribe him with mashed potatoes for spoilers. Goddamn WIPs.

Comment author: Baughn 02 April 2010 08:36:51PM 3 points [-]

No, no, it's not Eliezer.

It's an alternate personality, which acts exactly the same and shares memories, that merely believes it's Eliezer.

Comment author: Kevin 03 April 2010 02:43:32AM *  3 points [-]

Sounds like an Eliezer to me.

Comment author: Larks 03 April 2010 01:28:37PM 3 points [-]

like an Eliezer, yes.

Comment author: Eliezer_Yudkowsky 02 April 2010 10:55:56PM 13 points [-]

Yeah, I don't think I can plausibly deny responsibility for this one.

Googling either (rationality + fanfiction) or even (rational + fanfiction) gets you there as the first hit, just so ya know...

Also, clicking on the Sitemeter counter and looking at "referrals" would probably have shown you a clickthrough from a profile called "LessWrong" on fanfiction.net.

Want to know the rest of the plot? Just guess what the last sentence of the current version is about before I post the next part on April 3rd. Feel free to post guesses here rather than on FF.net, since a flood of LW.com reviewers would probably sound rather strange to them.

Comment author: Alicorn 02 April 2010 11:14:09PM 3 points [-]

There is a reason I didn't look for it. It isn't done. Having found it anyway via link above, of course I read it because I have almost no self-control, but I didn't look for it!

Are you sure you wouldn't rather have the mashed potatoes? There's a sack of potatoes in the pantry. I could mash them. There's also a cheesecake in the fridge... I was thinking of making soup... should I continue to list food? Is this getting anywhere?

Comment author: CronoDAS 02 April 2010 11:52:23PM 5 points [-]

Do I have to guess right? ;)

Comment author: CronoDAS 03 April 2010 02:14:46AM 2 points [-]

You also have the approval of several Tropers, only one of whom is me.

Comment author: Cyan 03 April 2010 02:33:53AM 4 points [-]

Holy fucking shit that was awesome.

Comment author: Kevin 03 April 2010 03:29:39AM *  4 points [-]

It gets a strong vote of approval from my girlfriend. She made it about halfway through Three Worlds Collide without finishing, for comparison. We'll see if I can get my parents to read this one...

Edit: And I think this is great. Looking forward to when Harry crosses over to the universe of the Ultimate Meta Mega Crossover.

Comment author: Kevin 03 April 2010 09:16:10PM 3 points [-]

Let's make that a Prediction. Harry becomes the ultimate Dark Lord by destroying the universe and escaping to the Metametaverse of the Ultimate Meta Mega Crossover.

Comment author: JGWeissman 03 April 2010 06:02:17AM 17 points [-]

"Oh, dear. This has never happened before..."

Voldemort's Killing Curse had an epiphenomenal effect: Harry is a p-zombie. ;)

Comment author: Unnamed 04 April 2010 03:38:20AM 8 points [-]

I don't like where this is headed - Harry isn't provably friendly and they're setting him loose in the wizarding world!

Comment author: Mass_Driver 04 April 2010 06:19:45AM 7 points [-]

Also, there is a sharply limited supply of people who speak Japanese, Hebrew, English, math, rationality, and fiction all at once. If it wasn't you, it was someone making a concerted effort to impersonate you.

Comment author: ShardPhoenix 04 April 2010 08:23:05AM *  2 points [-]

This is a lot of fun so far, though I think McGonagall was in some ways more in the right than Harry in chapter 6. Also, I kind of feel like Draco's behavior here is a bit unfair to the wizarding world as portrayed in the canon - the wizarding world is clearly not at all medieval in many ways (especially in the treatment of women, where the behavior we actually see is essentially modern), so I'm not sure why it should necessarily be medieval in that way. Regardless of my nitpicking, it's a brilliant fanfic and it's nice to see muggle-world ideas enter the wizarding world (which always seemed like it should have happened already).

Comment author: Matt_Simpson 04 April 2010 06:34:40AM 2 points [-]

Edit: Apparently, instead, I will have to bribe him with mashed potatoes for spoilers. Goddamn WIPs.

I know, right? This would have been a wonderful story for me to read 10 years ago or so, and not just because now I'm having difficulty explaining to my girlfriend why I spent friday night reading a Harry Potter fanfic instead of calling her...

Comment author: CronoDAS 02 April 2010 08:39:41PM *  0 points [-]

Wow, I wish I saw this sooner. And there are already 99 pages of reviews!

ETA: Wow, now there's 100...

Comment author: JGWeissman 02 April 2010 08:52:23PM *  9 points [-]

My fellow Earthicans, as I discuss in my book Earth In The Balance and the much more popular Harry Potter And The Balance Of Earth, we need to defend our planet against pollution. As well as dark wizards.

-- Al Gore on Futurama

Comment author: Vladimir_Nesov 02 April 2010 09:02:18PM 1 point [-]

The probability of magic should make any effort spent testing the hypothesis unjustified. "Test theories no matter how improbable" is generally incorrect dogma. (One should distinguish improbable from silly, though.)

Comment author: Alicorn 02 April 2010 09:08:27PM *  0 points [-]

You have not taken into account that testing magical hypotheses may be categorized as "play" and pay its rent on time and effort accordingly.

Comment author: Vladimir_Nesov 02 April 2010 09:36:15PM 2 points [-]

Then this activity shouldn't be rationalized as being the right decision specifically for the reasons associated with the topic of rationality. For example, the father dismissing the suggestion to test the hypothesis is correct, given that the mere activity of testing it doesn't present him with valuable experience.

You've just taken the conclusion presented in the story and written over it a clever explanation that contradicts the spirit of the story.

Comment author: Baughn 02 April 2010 09:11:33PM 2 points [-]

It was strongly implied that some element of Harry's mind had skewed that prior dramatically. Perhaps his horcrux, perhaps infant memories, but either way it wasn't as you'd expect. Even for an eleven-year-old.

Comment author: Vladimir_Nesov 02 April 2010 09:38:46PM *  0 points [-]

He didn't bite the bullet, didn't truly disbelieve his corrupted hardware. This is a problem that has to be solved by introspection and a better theory of decision-making. It's not enough to patch it by observation in each particular case, letting reality compute a correct conclusion on top of your defunct epistemology, even when you have all the data you might possibly need to infer the conclusion yourself.

Comment author: Matt_Simpson 03 April 2010 05:58:38AM 1 point [-]

One of the goals was to get his parents to stop fighting over whether or not magic was real.

Comment author: Vladimir_Nesov 03 April 2010 09:38:54AM *  1 point [-]

How would it work? As the expected outcome is that no magic is real, we'd need to convince the believer (the mother) to disbelieve. An experiment is usually an ineffective means to that end. Rather, we'd need to mend her epistemology.

Comment author: Matt_Simpson 03 April 2010 09:02:52PM 1 point [-]

Well, Harry did spend some time making sure that this experiment would convince either of his parents if it went the appropriate way, though he had his misgivings. As a child who isn't respected by his parents, what better options does he have to stop the fight? (serious question)

Comment author: Vladimir_Nesov 03 April 2010 09:39:04PM 0 points [-]

Having no good options doesn't make the remaining options any good. This is a serious problem, for example, when people try to explain apparent miracles they experience: they find the best explanation they are able to come up with, and decide to believe that explanation, even if it has no right to any plausibility, apart from the fact it happened to be the only one available.

Comment author: Matt_Simpson 03 April 2010 09:58:29PM *  0 points [-]

So you think that the best response is to do nothing about the fight. Perhaps, but setting up the experiment didn't take that much effort. What was Harry's opportunity cost here? Is it that high?

Comment author: Vladimir_Nesov 03 April 2010 10:13:11PM *  0 points [-]

It's not completely out of the question that it was a fine rhetorical effort (though it's not particularly plausible), but it's still not concerned with finding out the truth, which was presented as the goal.

Comment author: Matt_Simpson 03 April 2010 10:33:26PM 1 point [-]

There seemed to be two goals to me - finding the truth and stopping the fight. I'll have to reread that section later.

Comment author: Eliezer_Yudkowsky 04 April 2010 01:31:13AM 10 points [-]

I think you underestimate the real-world value of Just Testing It. If I got a mysterious letter in the mail and Mom told me I was a wizard and there was a simple way to test it, I'd test it. Of course I know even better than rationalist!Harry all the reasons that can't possibly be how the ontologically lowest level of reality works, but if it's cheap to run the test, why not just say "Screw it" and test it anyway?

Harry's decision to try going out back and calling for an owl is completely defensible. You just never have to apologize for doing a quick, cheap experimental test, pretty much ever, but especially when people have started arguing about it and emotions are running high. Start flipping a coin to test if you have psychic powers, snap your fingers to see if you can make a banana, whatever. Just be ready to accept the result.
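To put toy numbers on the cost-benefit (every figure below is a made-up illustration, not from the thread), whether a cheap test is worth running depends entirely on the prior you assign:

```python
# Toy value-of-information sketch for a cheap test of an "impossible"
# hypothesis. All numbers are illustrative assumptions.
cost_of_test = 1 / 60    # hours spent walking out back and calling for an owl
value_if_true = 1e6      # hours of value gained if magic turns out to be real

for prior in (1e-6, 1e-30):
    expected_gain = prior * value_if_true
    worth_testing = expected_gain > cost_of_test
    print(f"prior={prior:g}: worth testing? {worth_testing}")
```

At a prior of 10^-6 the minute spent testing trivially pays for itself; at 10^-30 it does not. The disagreement upthread is entirely about which prior is appropriate.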

Comment author: Vladimir_Nesov 04 April 2010 08:13:21AM *  2 points [-]

You just never have to apologize for doing a quick, cheap experimental test, pretty much ever

This (injunction?) is equivalent to ascribing a much higher probability to the hypothesis (magic) than it deserves. It might be a good injunction, but we should realize that, at the same time, it asserts that people are unable to correctly judge the impossibility of such hypotheses. That is, this rule suggests that the probability of a hypothesis that managed to make it into your conscious thought isn't (shouldn't be believed to be) 10^-[gazillion], even if you believe it is 10^-[gazillion].

Comment author: RobinZ 04 April 2010 01:36:00PM 2 points [-]

The probability that you have no grasp on the situation is high enough to justify an easy, simple, harmless test.

And I'd appreciate it if spoilers for the story were ROT13'd or something - I haven't read it.
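(For anyone who wants to oblige: ROT13 is built into Python's standard library as a text-transform codec, so encoding a spoiler is one call, and applying it twice round-trips.)

```python
# ROT13 via the standard-library codecs module; encoding twice decodes.
import codecs

spoiler = codecs.encode("Harry is a p-zombie", "rot13")
print(spoiler)
print(codecs.encode(spoiler, "rot13"))  # decodes back to the original
```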

Comment author: LucasSloan 02 April 2010 10:43:47PM *  1 point [-]

Fb, sebz gur cbvag bs ivrj bs na Nygreangr-Uvfgbel, V nffhzr gur CBQ vf Yvyyl tvivat va naq svkvat Crghavn'f jrvtug ceboyrz. Gung jbhyq graq gb vzcebir Crghavn'f ivrj bs ure zntvpny eryngvirf, naq V nffhzr gur ohggresyvrf nera'g rabhtu gb fnir Wnzrf naq Yvyyl sebz Ibyqrzbeg. Tvira gur infgyl vapernfrq vagryyvtrapr bs Uneel, V nffhzr ur vf abg trargvpnyyl gur fnzr puvyq jr fnj va gur obbxf, nygubhtu vzcebirq puvyqubbq ahgevgvba pbhyq nyfb or n snpgbe.

Comment author: Baughn 03 April 2010 11:17:15AM 0 points [-]

Not having the same father would tend to imply not being genetically the same, yes.

This isn't the Harry Potter we know.

Comment author: AdeleneDawner 03 April 2010 11:39:39AM *  4 points [-]

He does have the same genetic parents; it's his biological aunt, not his biological mother, who married someone different in this timeline.

I recently received your letter of acceptance to Hogwarts, addressed to Mr. H. Potter. You may not be aware that my genetic parents, James Potter and Lily Potter (formerly Lily Evans) are dead. I was adopted by Lily's sister, Petunia Evans-Verres, and her husband, Michael Verres-Evans.

Comment author: Baughn 03 April 2010 03:06:59PM 2 points [-]

I feel rather foolish now. Of course he does.

Should still be a genetic reshuffling, at least. The point of departure seems to be before his birth, so the butterfly effect would be in effect.

Comment author: Unnamed 02 April 2010 11:49:56PM *  3 points [-]

Harry Potter as a boy genius smart-aleck aspiring rationalist works surprisingly well. And the idea of extending the pull of rationalism a bit beyond its standard sci-fi hunting grounds using Harry Potter fanfiction is brilliant.

Comment author: SforSingularity 03 April 2010 03:51:10PM *  4 points [-]

As you grow up, you start to see that the world is full of waste, injustice and bad incentives. You try frantically to tell people about this, and it always seems to go badly for you.

Then you grow up a bit more, get a bit wise, and realize that the mother-of-all-bad-incentives, the worst injustice, and the greatest meta-cause of waste ... is that people who point out such problems get punished, (especially) including pointing out this problem. If you are wise, you then become an initiate of the secret conspiracy of the successful.

Discuss.

Comment author: cousin_it 03 April 2010 03:57:51PM 0 points [-]

Is that true? 'Cause if it's true, I'd like to join.

Comment author: Rain 03 April 2010 07:08:57PM *  1 point [-]

You should present the easily implemented, obviously better solution at the same time as the problem.

If the solution isn't easy to implement by the person you're talking to, then cost/benefit analysis may be in favor of the status quo or you might be talking to the wrong person. If the solution isn't obviously better, then it won't be very convincing as a solution or you might not have considered all opinions on the problem. And if there is no solution, then why complain?

Comment author: Mass_Driver 04 April 2010 06:34:59AM 3 points [-]

You try frantically to tell people about this, and it always seems to go badly for you.

Telling people frantically about problems that are not on a very short list of "approved emergencies" like fire, angry mobs, and snakes is a good way to get people to ignore you, or, failing that, to dislike you.

It is only very recently (in evolutionary time) that ordinary people are likely to find important solutions to important social problems in a context where those solutions have a realistic chance of being implemented. In the past, (a) people were relatively uneducated, (b) society was relatively simpler, and (c) arbitrary power was held and wielded relatively more openly.

Thus, in the past, anyone who was talking frantically about social reform was either hopelessly naive, hopelessly insane, or hopelessly self-promoting. There's a reason we're hardwired to instinctively discount that kind of talk.

Comment author: Kevin 03 April 2010 09:40:17PM 6 points [-]

Applied rationality April Edition: convince someone with currently incurable cancer to sign up for cryonics: http://news.ycombinator.com/item?id=1239055

Hacker News rather than Reddit this time, which makes it a little easier.

Comment author: taw 04 April 2010 02:15:07AM 4 points [-]

Is there any evidence that Bruce Bueno de Mesquita is anything else than a total fraud?

Am I missing something here?

Comment author: Matt_Simpson 04 April 2010 06:32:06AM 0 points [-]

I've heard claims that his "general model of international conflict" has been independently tested by the CIA and some other organization to 90% accuracy, but haven't seen any details of any of these tests.

Comment author: taw 04 April 2010 08:15:53AM 0 points [-]

Oh, he gives plenty of such claims, but not a single one of them is independently verifiable. You cannot access any such report. This increases my estimate that he's a fraud, relative to him not having given such claims in the first place.

Comment author: ciphergoth 04 April 2010 08:56:38AM 0 points [-]

That review is a very worthwhile read - thanks for linking to it!

Comment author: Mass_Driver 04 April 2010 06:26:18AM *  2 points [-]

Does anyone have suggestions for how to motivate sleep? I've hacked all the biological problems so that I can actually fall asleep when I order it, but me-Tuesday generally refuses to issue an order to sleep until it's late enough at night that me-Wednesday will sharply regret not having gone to bed earlier.

I've put a small effort into setting a routine, and another small effort into forcing me-Tuesday to think about what I want to accomplish on Wednesday and how sleep will be useful for that; neither seems to be immediately useful. If I reorganize my entire day around motivating an early bedtime, that often works, but at an unacceptably high cost; the point of going to bed early is to have more surplus time/energy, not to spend all of my time/energy on going to bed.

I am happy to test various hypotheses, but don't have a good sense of which hypotheses to promote or how to generate plausible hypotheses in this context.

Comment author: RobinZ 04 April 2010 01:31:25PM *  1 point [-]

What do you do instead of going to bed? I notice myself spending time on the Internet.

Comment author: Amanojack 04 April 2010 05:53:31PM *  1 point [-]

I've been struggling with this for years, and the only thing I've found that works when nothing else does is hard exercise. The other two things that I've found help the most:

  • Let the sun hit your eyelids first thing in the morning (to halt melatonin production)
  • F.lux, a program that auto-adjusts your monitor's light levels (and keep your room lights low at night; otherwise melatonin production will be delayed)

EDIT: Apparently keeping your room lights at a low color temperature (incandescent/halogen instead of fluorescent) is better than keeping them at low intensity:

"...we surmise that the effect of color temperature is greater than that of illuminance in an ordinary residential bedroom or similar environment where a lowering of physiological activity is desirable, and we therefore find the use of low color temperature illumination more important than the reduction of illuminance. Subjective drowsiness results also indicate that reduction of illuminance without reduction of color temperature should be avoided." —Noguchi and Sakaguchi, 1999 (note that these are commercial researchers at Matsushita, which makes low-color-temperature fluorescents)

Comment author: Nick_Tarleton 04 April 2010 06:26:51PM *  2 points [-]

Melatonin. Also, getting my housemates to harass me if I don't go to bed.

Comment author: Zubon 04 April 2010 05:31:09PM 17 points [-]

Example of teachers not getting past Guessing the Teacher's Password: debating teachers on the value of pi. Via Gelman.

Comment author: Eliezer_Yudkowsky 04 April 2010 06:52:28PM 3 points [-]

AAAAAIIIIIIIIEEEEEEEE

BOOM