All of lacker's Comments + Replies

lacker 10

Hmm, it is interesting that that exists, but it seems like it cannot have been very serious, because it dates from over 30 years ago and no follow-up activity happened.

0 Eniac
I think this is because Freitas and Drexler and others who might have pursued clanking replicators became concerned with nanotechnology instead. It seems to me that clanking replicators are much easier, because we already have all the tools and components to build them (screwdrivers, electric motors, microchips, etc.). Nanotechnology, while incorporating the same ideas, is far less feasible and may be seen as a red herring that has cost us 30 years of progress in self-replicating machines. Clanking replicators are also much less dangerous, because it is much easier to pull the plug or throw in a wrench when something goes wrong.
3 [anonymous]
Serious? It's a paper constructed as part of an official NASA workshop, the participants of which are all respected people in their fields and still working. Why hasn't more work happened in the time since? It has, at places like Zyvex and the Institute for Molecular Manufacturing. But at NASA there were political issues that weren't addressed at all by people advocating for a self-replication programme then or since. Freitas has more recently done a book-length survey of work on self-replicating machines before and after the NASA workshop. It's available online: http://www.molecularassembler.com/KSRM.htm (BTW, the same fallacy could be committed against AGI or molecular nanotechnology, both of which date to the 1950s but have had little follow-up activity since, except spurts of interest here and there.)
lacker 40

How would one even start "serious design work on a self-replicating spacecraft"? It seems like the technologies that would be required to even begin serious design do not exist yet.

1 Algernoq
The difficulty is in managing the complexity of an entire factory system. The technology for automated mining and manufacturing exists, but it's very expensive and risky to develop, and a bit creepy, so politicians won't fund the research. On Earth, human labor is cheap, so there's no incentive for commercial development either.
3 [anonymous]
http://www.rfreitas.com/Astro/GrowingLunarFactory1981.htm
lacker 70

That doesn't seem like the consensus view to me. It might be the consensus view among LessWrong contributors, but in the AI-related tech industry and in academia, it seems like very few people think AI friendliness is an important problem, or that there is any effective way to research it.

8 passive_fist
Most researchers I know seem to think strong AI (of the type that could actually result in an intelligence explosion) is a long way away, and thus that it's premature to think about friendliness now (imagine if they had tried to devise rules to regulate the internet in 1950). I don't know if that's a correct viewpoint or not.
1 JoshuaMyer
But wouldn't it be awesome if we came up with an effective way to research it?
5 [anonymous]
I believe this thread is about LessWrong specifically.
lacker 00

Another problem you need to avoid is misjudging how much money you would actually save. That seems more common when the pain of misjudgment is shared.

lacker 30

Ah, it's just like "What Would Jesus Do" bracelets.

sboo 100

Rational!Jesus

We have the next HPMOR.

lacker 20

It seems like picking this nonhuman metafictional stuff is a tough way to start writing. Maybe pick something easier, just so that you can succeed without getting so much writer's block.