How would one even start "serious design work on a self-replicating spacecraft"? It seems like the technologies required to even begin serious design do not exist yet.
That doesn't seem like the consensus view to me. It might be the consensus view among LessWrong contributors. But in the AI-related tech industry and in academia it seems like very few people think AI friendliness is an important problem, or that there is any effective way to research it.
Another problem you need to avoid is misjudging how much money you would actually save. That seems more common when the pain of misjudgment is shared.
Ah, it's just like "What Would Jesus Do" bracelets.
I know, but writing is hard :-( Also, I have made it way too hard for myself. It's easy to write notes about the personality of a completely non-human character, as long as you can intellectually understand its reasoning. But once I am forced to actually write its dialog, my head just hits a brick wall. The being is very intelligent and I want this to be rationalist fiction, so I have to think for a very long time just to find out in what exact way it would phrase its requests to maximize the probability of compliance. Writing the voices of the narrators/the administrator AIs of the simulation as they are slowly going insane is not easy, either.
Maybe I'm too perfectionist here. Do you think it's better to write something trashy first and rewrite it later, or is it more efficient to do it right the first time?
It seems like picking this nonhuman metafictional stuff is a tough way to start writing. Maybe pick something easier, just so that you succeed without getting so much writer's block.
http://www.rfreitas.com/Astro/GrowingLunarFactory1981.htm
Hmm, it is interesting that that exists, but it seems like it cannot have been very serious, because it dates from over 30 years ago and no follow-up activity happened.