katydee comments on On saving the world - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Comments (166)
There are a number of factors here. Timescales are certainly important. I obviously can't reorganize people at will. Even in a best-case scenario, it would take decades or even centuries to transition social systems, to shift away from governments and nations, and so on. If I believed AI were millennia away, I'd keep addressing coordination problems, but AI is also on the decades-to-centuries timescale.
Furthermore, developing an FAI would (depending on your definition of 'friendly') itself address coordination problems. Whether or not my ideas were flawed, developing FAI dominates social restructuring.
I'm not quite sure what you mean. Are you asking for the historical date at which I believe the value of a person-hour spent on AI research overtook the value of a person-hour spent on restructuring people? I'd guess around 1850, in the hope that we'd be ready to build an FAI as soon as we were able to build a computer. This seems like a strange counterfactual to me, though.
It would have been essentially impossible to work on AI in 1850, before even modern set theory was developed. Unless by "work on AI" you mean work on mathematical logic in general.