
NancyLebovitz comments on What happens when your beliefs fully propagate - Less Wrong Discussion

20 Post author: Alexei 14 February 2012 07:53AM


Comment author: NancyLebovitz 14 February 2012 07:14:10PM 5 points [-]

In re FAI vs. snoozing: What I'd hope from an FAI is that it would know how much rest I needed. Assuming that you don't need that snoozing time at all strikes me as a cultural assumption-- that theories (in this case, possibly about willpower, productivity, and virtue) should always trump instincts.

A little about hunter-gatherer sleep. What I've read elsewhere is that with an average of 12 hours of darkness and an average need for 8 hours of sleep, hunter-gatherers would not only have different circadian rhythms (teenagers tend to run late, old people tend to run early), but a common pattern was to spend some hours in the middle of the night in talk, sex, and/or contemplation. To put it mildly, this pattern is not available to the vast majority of modern people, and we don't know what, if anything, this is costing us.

I think of FAI as being like gorillas trying to invent a human-- a human which will be safe for gorillas, but I may be unduly pessimistic.

I'm inclined to think that raising the sanity waterline is more valuable for such a long-range project than you do-- FAI is so dependent on a small number of people, and I think it will continue to be so. Improving general conditions improves the odds that someone who would be really valuable doesn't have their life screwed up early.

On the other hand, this is a "by feel" argument, and I'm not sure what I might be missing.

Comment author: David_Gerard 14 February 2012 07:18:41PM *  4 points [-]

I think of FAI as being like gorillas trying to invent a human-- a human which will be safe for gorillas, but I may be unduly pessimistic.

Leave out "artificial"-- what would constitute a "human-friendly intelligence"? Humans don't constitute one. Even at our present intelligence we're a danger to ourselves.

I'm not sure "human-friendly intelligence" is a coherent concept, in terms of being sufficiently well-defined (as yet) to say things about. The same way "God" isn't really a coherent concept.