nhamann comments on Preface to a Proposal for a New Mode of Inquiry - Less Wrong
Comments (83)
And among writers actually skilled at general rationality who do not have a very large personal vested interest in one particular answer (i.e., who have not already invested years of their lives in becoming AGI researchers), Robin Hanson is on one extreme end of the continuum of opinion on the subject.
Seems like the sensible course of action to me! Do you really think Eliezer and other responsible AGI researchers have published all of their insights into AGI?
If the OP wishes to make a career in AGI research, he can do so responsibly by affiliating himself with SIAI, the Future of Humanity Institute or some other group with a responsible approach to AGI. They will probably share their insights with him only after a lengthy probationary period during which they vigorously check him for signs that he might do something irresponsible once they have taken him into their confidence. (ADDED. If it were me, I would look mainly for signs that the candidate might make a choice which tends to have a bad effect on the global situation, but a positive effect on his or her scientific reputation or on some other personal agenda that humans typically care about.) And they will probably share their insights with him only after he has made a commitment to stay with the group for life.
I don't buy that that's a good approach, though. This seems more like security through obscurity to me: keep all the work hidden, and hope both (a) that it's on the right track and (b) that no one else stumbles upon it. If, on the other hand, AI discussion did take place on LW, then that gives us a chance to frame the discussion and ensure that FAI is always a central concern.
People here are fond of saying "people are crazy, the world is mad," which is sadly true. But friendliness is too important an issue for SIAI and the community surrounding it to set itself up as stewards of humanity; every effort needs to be made to bring this issue to the forefront of mainstream AI research.
I agree, which is why I wrote, "SIAI, the Future of Humanity Institute or some other group with a responsible approach to AGI". If for some reason, the OP does not wish to or is not able to join one of the existing responsible groups, he can start his own.
In security through obscurity, a group relies on a practice they have invented and kept secret when they could have chosen instead to adopt a practice that has the benefit of peer review and more testing against reality. Well, yeah, if there existed a practice that had already been tested extensively against reality and undergone extensive peer review, then the responsible AGI groups should adopt it -- but there is no such practice for solving this particular problem. There are no good historical examples of the current situation with AGI, but the body of practice with the most direct applicability that I can think of right now is the situation during and after WW II, in which the big military powers mounted vigorous, systematic campaigns lasting decades to restrict the dissemination of certain kinds of scientific and technical knowledge. Let me remind you that in the U.S. this campaign included the requirement, in force for decades, that vendors of high-end computer hardware and machine tools obtain permission from the Commerce Department before exporting any products to the Soviets and their allies. Before WW II, other factors (like wealth and the will to continue to fight) besides scientific and technical knowledge dominated the list of factors that decided military outcomes.
Note that the current plan of the SIAI for what the AGI should do after it is created is to be guided by an "extrapolation" that gives equal weight to the wishes or "volition" of every single human living at the time of the creation of the AGI, which IMHO goes a very long way toward alleviating any legitimate concerns of people who cannot join one of the responsible AGI groups.