
cousin_it comments on Q&A with experts on risks from AI #1 - Less Wrong Discussion

29 points | Post author: XiXiDu | 08 January 2012 11:46AM


Comment author: cousin_it 09 January 2012 12:20:40AM * 9 points

After taking a look at the research pages, I'm not very afraid of these people, at least not until they get computers powerful enough to brute-force AGI by simulated evolution or some other method. I'm more afraid of Shane Legg who does top-notch technical work (far beyond anything I'm capable of), understands the danger of uFAI and ranks it as the #1 existential risk, and still cheers for stuff like Monte Carlo AIXI. I'm afraid of Abram Demski who wrote brilliant comments on LW and still got paid to help design a self-improving AGI (Genifer).

Comment author: XiXiDu 09 January 2012 10:20:26AM 5 points

After taking a look at the research pages, I'm not very afraid of these people...I'm afraid of Abram Demski...

It would help me a lot if you could email or PM me the names of the people you are afraid of, so that I can contact them. Thank you.

email: xixidu@gmail.com or da@kruel.co

Comment author: cousin_it 09 January 2012 10:43:33AM * 8 points

You could also try contacting Justin Corwin who won 24 out of 26 AI-box experiments and now develops AGI at a2i2.

Comment author: loup-vaillant 10 January 2012 04:57:29PM 3 points

24 out of 26?! Since Eliezer won his first two, I was already reasonably certain that AI boxing is effectively impossible (at least once you give the AI permission to talk to some humans), so I won't meaningfully update here. But this piece of evidence was quite unexpected.

Comment author: Thomas 09 January 2012 10:03:31AM 0 points

Those (three) people are not in the AI field, at least for my taste. But:

at least not until they get computers powerful enough to brute-force AGI by simulated evolution or some other method.

Why do you think present computers are not fast enough for a digital evolution of X?

Comment author: cousin_it 09 January 2012 12:55:15PM * 1 point

A mind designed by evolution could be big and messy, about as complex as the human brain. Right now we have no computer powerful enough to simulate even a single human brain, and evolution requires many of those. Of course there are many possible shortcuts, but we don't seem to be there yet.
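A rough back-of-envelope sketch can make this concrete. All of the figures below are illustrative assumptions, not claims from the comment: commonly cited estimates put the cost of simulating a human brain somewhere around 10^16 to 10^18 FLOPS, and an evolutionary search would have to evaluate many such brains over many generations.

```python
# Hedged back-of-envelope estimate; every number here is an assumption
# chosen for illustration, not a measured figure.
brain_flops = 1e18            # assumed cost of simulating one brain (high-end estimate, FLOPS)
population = 1_000            # assumed evolutionary population size
generations = 100_000         # assumed number of generations
seconds_per_eval = 3600 * 24 * 365  # assume each candidate is simulated for one year

# Total floating-point operations for the whole evolutionary run.
total_flop = brain_flops * population * generations * seconds_per_eval

# Compare against a roughly 2012-era top supercomputer (~10 petaFLOPS).
supercomputer_flops = 1e16
seconds_per_year = 3600 * 24 * 365
years = total_flop / supercomputer_flops / seconds_per_year

print(f"total FLOPs needed: {total_flop:.1e}")
print(f"wall-clock years on a 10 PFLOPS machine: {years:.1e}")
```

Under these (deliberately crude) assumptions the run takes on the order of ten billion years of supercomputer time, which is the sense in which "we don't seem to be there yet" — though any of the assumed factors could shrink by many orders of magnitude with shortcuts.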

Comment author: Thomas 10 January 2012 09:45:51AM * 1 point

The question really is: can a program with an evolutionary algorithm at its core do something better than a small elite of talented humans (with the help of computer programs) can?

The answer is yes: it can do it today, and it does.

People here on this list are mostly highly dismissive of evolution as something "stupid that everybody can do, but a waste of CPU time".

See!

or

All of the above have been evolved in a digital environment with no additional expert knowledge from humans. Sooner or later, we will be evolving pretty much everything, all the big talk about AI from some web experts aside.
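The evolutionary loop Thomas is gesturing at can be sketched with a toy genetic algorithm. This is a hedged illustration on a trivial problem (maximizing 1-bits in a string, the classic "OneMax" benchmark), not any of the systems alluded to in the thread:

```python
# Minimal genetic-algorithm sketch: selection, crossover, mutation.
# All parameters are arbitrary illustrative choices.
import random

random.seed(0)
GENOME_LEN, POP_SIZE, GENERATIONS, MUT_RATE = 32, 50, 200, 0.02

def fitness(genome):
    # Number of 1-bits: the "design quality" the loop optimizes.
    return sum(genome)

def mutate(genome):
    # Flip each bit independently with probability MUT_RATE.
    return [1 - g if random.random() < MUT_RATE else g for g in genome]

def crossover(a, b):
    # Single-point crossover of two parent genomes.
    cut = random.randrange(1, GENOME_LEN)
    return a[:cut] + b[cut:]

# Random initial population of bit strings.
pop = [[random.randint(0, 1) for _ in range(GENOME_LEN)] for _ in range(POP_SIZE)]

for _ in range(GENERATIONS):
    pop.sort(key=fitness, reverse=True)
    parents = pop[:POP_SIZE // 2]  # truncation selection: keep the top half
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(POP_SIZE - len(parents))]
    pop = parents + children

best = max(pop, key=fitness)
print(fitness(best), "/", GENOME_LEN)
```

The loop needs no expert knowledge of *why* a genome is good, only a fitness score, which is the property Thomas is pointing to; the open question in the thread is whether that scales to brain-sized search spaces.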

Comment author: Thomas 09 January 2012 05:04:26PM * 0 points

we don't seem to be there yet

By the time it seems we are, we'll already be beyond there.