It could be quite valuable to translate that material.
Can someone recommend good Russian learning material? Preferably something that could be found online (books count).
Thank you so much for the reply! Simply tracking down the 'berserker hypothesis' and the 'great filter' puts me in touch with thinking on this subject that I was not aware of.
What I thought might be novel in what I wrote was the idea that the independent evolution of the same traits is evidence that life should progress to intelligence a great deal of the time.
When we look at the "great filter" possibilities, I am surprised that so many people consider our society's self-destruction such a likely candidate. Intuitively, if there are thousands of societies, one would expect high variability in social and political structures and outcomes. The next idea I read, that "no rational civilization would launch von Neumann probes," seems extremely unlikely because of that same variability. Where there would be far less variability is in the mundane energy and engineering constraints of launching self-replicating spacecraft in a robust fashion. Problems there could easily stop every single one of our thousand candidate civilizations cold, with no variability.
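To make the variability point concrete, here is a minimal back-of-the-envelope sketch in Python. The figure of 1,000 civilizations and the per-civilization self-destruction probabilities are purely illustrative assumptions, not estimates:

```python
# Independent filters vs. a shared (universal) filter.
# N and p below are purely illustrative assumptions.
N = 1000  # hypothetical number of candidate civilizations

# If each civilization independently self-destructs with probability p,
# the chance that ALL N of them do so is p**N.
for p in (0.9, 0.99, 0.999):
    print(f"p = {p}: P(all {N} self-destruct) = {p**N:.3e}")

# Approximate output:
#   p = 0.9:   P(all 1000 self-destruct) = 1.748e-46
#   p = 0.99:  P(all 1000 self-destruct) = 4.317e-05
#   p = 0.999: P(all 1000 self-destruct) = 3.677e-01
```

Unless p is essentially 1, at least one civilization survives its own politics. A shared engineering or energy barrier, by contrast, is the same constraint applied to everyone, so it can stop all N at once.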
Yes, the current speculations in this field are of wildly varying quality. The argument about convergent evolution is sound.
A minor quibble about convergent evolution, which doesn't change the conclusion much about there being other intelligent systems out there.
All organisms on Earth share some common points (though there might be shadow biospheres): similar environmental conditions (a rocky planet with a moon, a certain range of temperatures, etc.) and a common biochemical basis (proteins, nucleic acids, water as a solvent, etc.). I'd distinguish convergent evolution within the same system of life on the one hand from convergent evolution across different systems of life on the other. We have only observed the first. The two likely overlap, but some traits may not be as universal as we'd be led to think.
For instance, eyes may be pretty useful here, but deep in the oceans of a world like Europa, provided life is possible there, they might not be (an instance of the environment conditioning what is likely to evolve).
To the best of my knowledge, there is nothing quite like SIAI or lesswrong in continental western Europe. People aren't into AI as much as in the US, and where rationalist thinking is being done, it's mostly traditional rationality, skepticism, and the like.
Atheism can score high in many countries; as a rule of thumb, countries to the north are more atheistic, while those to the south (Spain, Portugal, Italy, etc.) are more religious.
There are a few scattered transhumanist and life-extension organizations, which are loosely starting to cooperate.
The European Commission itself started prioritizing small-scale healthy life extension a year or two ago. This could help focus more people on such questions in the years to come.
Hmmmm. Nearly two days and no feedback other than a "-1" net vote. Brainstorming explanations:
1. There is so much wrong with it that no one sees any point in engaging me (or educating me).
2. It is invisible to most people for some reason.
3. Newbies post things out of sync with accepted LW thinking all the time (related to #1).
4. No one's interested in the topic any more.
5. The conclusion is not a place anyone wants to go.
6. The encouragement to thread necromancy was a small minority view or intended ironically.
7. More broadly, there are customs of LW that I don't understand.
8. Something else.
Likely, few people read it; maybe just one person voted, and that's a single, potentially biased opinion. The score isn't significant.
I don't see anything particularly wrong with your post. Its underlying ideas seem similar to the Fermi paradox and the berserker hypothesis, from which you derive that a great filter lies ahead of us, right?
Our bodies need to perform different roles as we age and mature, and we need different sets of skills depending on our current developmental phase. It would make sense for our brains to change too: for the brain's developmental path to be laid out so that the changes it undergoes make it better adapted to the tasks it will have to tackle in each phase.
It'd make sense for our brain to be more finely tuned to grabbing resources from family when we're kids, so as to grow as fast as possible; then better tuned to searching for sexual partners as we mature; and finally, more finely tuned to taking care of our own kids once we have them.
And if there's a mechanism that makes our brain undergo developmental changes along a pre-planned path, then we might also expect that, past the age at which we reproduce, there'd be less and less evolutionary pressure shaping that developmental trajectory.
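A toy illustration of that declining pressure, under very stylized assumptions (a made-up fecundity schedule, and the force of selection on survival at a given age taken as proportional to expected remaining reproduction, in the spirit of Hamilton's classic analysis):

```python
# Toy model: selection pressure on survival at age a, taken as
# proportional to expected remaining reproductive output.
# The fecundity schedule below is a made-up illustration.
fecundity = {a: (1.0 if 20 <= a < 40 else 0.0) for a in range(0, 81)}

def remaining_reproduction(age):
    """Expected future offspring from `age` onward (survival ignored)."""
    return sum(f for a, f in fecundity.items() if a >= age)

for age in (5, 20, 35, 45, 60):
    print(f"age {age:2d}: selection pressure ~ {remaining_reproduction(age):.0f}")

# age  5: selection pressure ~ 20
# age 20: selection pressure ~ 20
# age 35: selection pressure ~ 5
# age 45: selection pressure ~ 0
# age 60: selection pressure ~ 0
```

Under this stylized picture, any trait whose effects show up only after the reproductive window faces almost no selection either way.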
Nor do I think evolution would have much reason to cleanly engineer a stable end state after which development simply stops, leaving you with a well-adjusted, perfectly functional body or brain. That may not be a trivial task, after all.
PZ's comment regarding the implausibility of speeding up an emulated brain was a real head-scratcher to me, and Andrew G calls him on it in the comments. Apparently (judging from his further comments), what he really meant was that you would have to simulate or emulate a suitable environment, physiology, and endocrine system as well; otherwise the brain would go insane.
Of course, we already knew that...
Seems similar enough to "Every part of your brain assumes that all the other surrounding parts work a certain way. The present brain is the Environment of Evolutionary Adaptedness for every individual piece of the present brain.
Start modifying the pieces in ways that seem like "good ideas"—making the frontal cortex larger, for example—and you start operating outside the ancestral box of parameter ranges. And then everything goes to hell.
So you'll forgive me if I am somewhat annoyed with people who run around saying, "I'd like to be a hundred times as smart!" as if it were as simple as scaling up a hundred times instead of requiring a whole new cognitive architecture."
Eliezer Yudkowsky, Growing Up is Hard