
Bart119 comments on Don't plan for the future - Less Wrong Discussion

1 Post author: PhilGoetz 23 January 2011 10:46PM




Comment author: Bart119 26 April 2012 02:48:22PM 0 points [-]

Hmmmm. Nearly two days and no feedback other than a "-1" net vote. Brainstorming explanations:

1. There is so much wrong with it that no one sees any point in engaging me (or educating me).
2. It is invisible to most people for some reason.
3. Newbies post things out of sync with accepted LW thinking all the time (related to #1).
4. No one's interested in the topic any more.
5. The conclusion is not a place anyone wants to go.
6. The encouragement to thread necromancy was a small minority view or intended ironically.
7. More broadly, there are customs of LW that I don't understand.
8. Something else.

Comment author: C9AEA3E1 27 April 2012 09:10:26PM 1 point [-]

Likely only a few people read it, and perhaps just one voted; that is a single, potentially biased opinion. The score isn't significant.

I don't see anything particularly wrong with your post. Its underlying ideas seem similar to the Fermi paradox and the berserker hypothesis. From these you derive that a great filter lies ahead of us, right?

Comment author: Bart119 28 April 2012 02:32:30AM 1 point [-]

Thank you so much for the reply! Simply tracing down the 'berserker hypothesis' and 'great filter' puts me in touch with thinking on this subject that I was not aware of.

What I thought might be novel in my post was the idea that the independent evolution of traits is evidence that life should progress to intelligence a great deal of the time.

When we look at the "great filter" possibilities, I am surprised that so many people think that our society's self-destruction is such a likely candidate. Intuitively, if there are thousands of societies, one would expect high variability in social and political structures and outcomes. The next idea I read, that "no rational civilization would launch von Neumann probes," seems extremely unlikely because of that same variability. Where there would be far less variability is in the mundane constraints of energy and engineering involved in launching self-replicating spacecraft in a robust fashion. Problems there could easily stop every single one of our thousand candidate civilizations cold, with no variability.

Comment author: C9AEA3E1 01 May 2012 12:20:41PM *  2 points [-]

Yes, the current speculations in this field are of wildly varying quality. The argument about convergent evolution is sound.

A minor quibble about convergent evolution, which doesn't change the conclusion much about there being other intelligent systems out there:

All organisms on Earth share some common points (though there might be shadow biospheres): similar environmental conditions (a rocky planet with a moon, a certain span of temperatures, etc.) and a certain biochemical basis (proteins, nucleic acids, water as a solvent, etc.). I'd distinguish convergent evolution within the same system of life on the one hand from convergent evolution across different systems of life on the other. We have only observed the first; the two likely overlap, but some traits may not be as universal as we'd be led to think.

For instance, eyes may be pretty useful here, but deep in the oceans of a world like Europa, provided life is possible there, they might not be (an instance of the environment conditioning what is likely to evolve).