TheAncientGeek comments on A forum for researchers to publicly discuss safety issues in advanced AI - Less Wrong Discussion
Sure does. There remains the question of whether it should be emphasising mathematical proficiency so much. MIRI isn't very interested in people who are proficient in actual computer science, or AI, which might explain why it spends a lot of time on the maths of computationally intractable systems like AIXI. MIRI isn't interested in people who are proficient in philosophy, leaving it unable either to sidestep the ethical issues that are part of AI safety, or to say anything very cogent about them.
My background is in philosophy, and I agree with MIRI's decision to focus on more technical questions. Luke explains MIRI's perspective in From Philosophy to Mathematics to Engineering. Friendly AI work is currently somewhere in between 'philosophy' and 'mathematics', and if we can move more of it into mathematics (by formalizing more of the intuitive problems and unknowns surrounding AGI), it will be much easier to get the AI and larger computer science community talking about these issues.
People who work for and with MIRI have a good mix of backgrounds in mathematics, computer science, and philosophy. You don't have to be a professional mathematician to contribute to a workshop or to the research forum, but you do need to be able to communicate and innovate concisely and precisely, and 'mathematics' is the name we use for concision and precision at its most general. A lot of good contemporary philosophy also relies heavily on mathematics and logic.