NancyLebovitz comments on Open Thread June 2010, Part 2 - Less Wrong

7 Post author: komponisto 07 June 2010 08:37AM


Comment author: NancyLebovitz 15 June 2010 10:54:50AM 3 points

Speaking of things to be worried about other than AI, I wonder if a biotech disaster is a more urgent problem, even if a less comprehensive one.

Part of what I'm assuming is that developing a self-amplifying AI is hard enough that biotech could be well-developed first.

While it doesn't seem likely to me that a biotech disaster could wipe out the human race, it could cause huge damage: I'm imagining diseases aimed at monoculture crops, or plagues resulting from terrorism or incompetent experiments.

My other assumptions are that FAI research depends on a secure, wealthy society with a good bit of surplus for individual projects, and is likely to remain highly dependent on a small number of specific people for the foreseeable future.

On the other hand, FAI is at least a relatively well-defined project. I'm not sure where you'd start to prevent biotech disasters.

Comment author: NihilCredo 12 April 2011 11:53:25PM 3 points

> On the other hand, FAI is at least a relatively well-defined project.

That's one hell of a "relatively" you've got there!