Mitchell_Porter comments on Be a Visiting Fellow at the Singularity Institute - Less Wrong

Post author: AnnaSalamon 19 May 2010 08:00AM


Comment author: snarles 21 May 2010 06:55:39PM *  1 point [-]

Let me continue to play Devil's Advocate for a second, then. There are many reasons why attempting to influence the far future might not be the most important task in the world.

The one I've already mentioned, indirectly, is the idea that predicting the consequences of your actions becomes super-exponentially futile the farther into the future you go. For instance, SIAI might raise awareness of AI to the extent that regulations are passed, and no early AI accidents happen: however, this causes complacency that later allows a large AI accident to happen; whereas if SIAI had never existed, and an early AI Chernobyl did occur, this would have prompted governments to take effective measures to regulate AI.

Another viewpoint is the bleak but by no means indefensible idea that it is impossible to prevent all existential disasters: the human race, or at least our values, will inevitably be reduced to inconsequence one way or another, and the only thing we can do is simply to reduce the amount of suffering in the world right now.

These aren't reasons to give up, but the fact is that we simply don't know enough to say anything about the non-near future with any confidence. That's no reason to give up either--in fact, our lack of understanding makes it all the more valuable to try to improve our understanding of the future, as SIAI is doing. So maybe make that your official stated goal: simply to understand whether there's even a possibility of influencing the future--that is a noble and defensible goal by itself. But even then, it's arguably not the most important thing in the world.

Comment author: Mitchell_Porter 21 May 2010 11:50:25PM 1 point [-]

There are many reasons why attempting to influence the far future might not be the most important task in the world.

I wouldn't even present that as a reason for caring. Superhuman AI is an issue of the near future, not the far future. Certainly an issue of the present century; I'd even say an issue of the next twenty years, and that's supposed to be an upper bound. Big science is deconstructing the human brain right now; every new discovery and idea is immediately subject to technological imitation and modification, and we already have something like a billion electronic computers worldwide, networked and ready to run new programs at any time. We already went from "the Net" to "the Web" to "Web 2.0" just by changing the software, and Brain 2.0 isn't far behind.

Comment author: Daniel_Burfoot 25 May 2010 12:50:18AM *  2 points [-]

Certainly an issue of the present century; I'd even say an issue of the next twenty years, and that's supposed to be an upper bound.

Are you familiar with the state of the art in AI? If so, what evidence do you see for such rapid progress? Note that AI has been around for about 50 years, so your timeframe suggests we've already made 5/7 of the total progress that ever needs to be made.

Comment author: orthonormal 25 May 2010 02:03:57AM *  1 point [-]

Well, this probably won't be Mitchell's answer, but to me it's obvious that an uploaded human brain is less than 50 years away (if we avoid civilization-breaking catastrophes), and modifications and speedups will follow. That's a different path to AI than an engineered seed intelligence (and I think it reasonably likely that some other approach will succeed before uploading gets there), but it serves as an upper bound on how long I'd expect to wait for Strong AI.

Comment author: Mitchell_Porter 26 May 2010 03:41:53AM 0 points [-]

There are many synergistic developments: Internet data centers serving as de facto supercomputers; new tools of intellectual collaboration spun off from the mass culture of Web 2.0. If you have an idea for a global cognitive architecture, those two developments make it easier than ever before to get the necessary computer time, and to gather the necessary army of coders, testers, and kibitzers.

Twenty years is a long time in AI. That's long enough for two more generations of researchers to give their all, take the field to new levels, and discover the next level of problems to overcome. Meanwhile, the same process is happening next door in molecular and cognitive neuroscience, in a world which eagerly grabs and makes use of every little advance in machine anthropomorphism, and in which every little fact about life already has its digital incarnation. The hardware for AI is already there, the structure and function of the human brain are being mapped at ever finer resolution, and we have a culture which knows how to turn ideas into code. Eventually it will come together.

Comment author: JoshuaZ 24 May 2010 11:49:19PM 2 points [-]

We already went from "the Net" to "the Web" to "Web 2.0", just by changing the software, and Brain 2.0 isn't far behind.

How much of the change from "the Net" to "the Web" to "Web 2.0" consists of genuinely noteworthy changes, and how much is marketing? I'm not sure what precisely you mean by Brain 2.0, but I suspect that whatever definition you are using makes for a much wider gap between Brain and Brain 2.0 than the gap between the Web and Web 2.0 (assuming these analogies have any degree of meaning).