XiXiDu comments on Q&A with experts on risks from AI #1 - Less Wrong Discussion
After taking a look at the research pages, I'm not very afraid of these people, at least not until they get computers powerful enough to brute-force AGI by simulated evolution or some other method. I'm more afraid of Shane Legg, who does top-notch technical work (far beyond anything I'm capable of), understands the danger of uFAI, ranks it as the #1 existential risk, and still cheers for stuff like Monte Carlo AIXI. I'm afraid of Abram Demski, who wrote brilliant comments on LW and still got paid to help design a self-improving AGI (Genifer).
It would help me a lot if you could email or PM me the names of the people you are afraid of, so that I can contact them. Thank you.
email: xixidu@gmail.com or da@kruel.co
You could also try contacting Justin Corwin, who won 24 out of 26 AI-box experiments and now develops AGI at a2i2.
24 out of 26?! Since Eliezer won his first two, I was already reasonably certain that AI boxing is effectively impossible (at least once you give the AI permission to talk to some humans), so I won't meaningfully update here. But this piece of evidence was quite unexpected.
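The "won't meaningfully update" point is just Bayes with a prior that is already near 1: once you're ~90% sure boxing fails, even very strong confirming evidence can only move you the remaining ~10%. A minimal sketch, with made-up hypotheses, prior, and per-game escape rates (none of these numbers come from the thread; only the 24/26 record does):

```python
# Toy Bayes calculation, purely illustrative: the hypotheses, prior, and
# escape rates below are assumptions, not figures from the experiments.
from math import comb

def binom_pmf(k: int, n: int, p: float) -> float:
    """Probability of exactly k successes in n independent trials."""
    return comb(n, k) * p**k * (1 - p) ** (n - k)

# Hypothetical hypotheses about AI-box experiments:
#   H1: boxing is effectively useless (AI player escapes ~90% of games)
#   H0: boxing mostly holds           (AI player escapes ~30% of games)
prior_h1 = 0.9                 # assumed prior after Eliezer's early wins
prior_h0 = 1.0 - prior_h1

k, n = 24, 26                  # Corwin's reported record
like_h1 = binom_pmf(k, n, 0.9)
like_h0 = binom_pmf(k, n, 0.3)

posterior_h1 = prior_h1 * like_h1 / (prior_h1 * like_h1 + prior_h0 * like_h0)
print(f"posterior P(H1) = {posterior_h1:.10f}")
# With a 0.9 prior, the posterior can rise by at most 0.1 no matter how
# strong the evidence; the 24/26 record pushes it to ~1.0 but no further.
```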