I'm a postdoc in differential geometry, working in pure math (not applied). The word "engineering" in the title of a forum would turn me away and lead me to suspect that the contents were far from my area of expertise. I suspect (low confidence) that many other mathematicians in non-applied fields would feel the same way.
A relatively small group of people with relevant mathematical backgrounds will be authorized to post on the forum, but all discussion on the site will be publicly visible to visitors.
You should note that this policy differs from that of perhaps the largest and most successful internet mathematics forum, MathOverflow. Maybe you have already thought about this and decided that this policy will be better. I simply wanted to offer a friendly reminder that whenever you want to do things differently from the "industry leader," it is often a good idea to have a clear idea of exactly why.
Why not another subforum on LW, next to Main and Discussion, say, Technical Discussion? Probably because you want to avoid the "friendliness" nomenclature, but it would be nice to find some way around that; otherwise it's yet another raison d'être of this forum being outsourced.
LW seems to have a rather mixed reputation: if you want to attract mainstream researchers, trying to separate the research forum from the weirder stuff discussed on LW seems like a good idea.
I like Intelligent Agent Foundations Forum, because I chronically overuse the word 'foundations,' and would like to see my deviancy validated. (preference ordering 6,5,1,4)
Also, I'm somewhat sad about the restricted posting model - guess I'll just have to keep spamming up Discussion :P
I'd expect MIRI to run a forum called MIRF, but it has a negative connotation on Urban Dictionary. How about Safety in Machine Intelligence, or SMIRF? :)
I initially parsed that as (Self-Modifying (Intelligence Research Forum)), and took it to indicate that the forum's effectively a self-modifying system with the participants' comments shifting each other's beliefs, as well as changing the forum consensus.
Good initiative!
For people who haven't read the LM/Hibbard paper, I can't imagine it would be clear why 'exploratory' is a word that should apply to this kind of research as compared to other AI research. 5-7 seem more timeless. 5 seems clearest and most direct.
Something from the tired mind of someone with no technical background:
Selective Forum for Exploratory AI Research (SFEAR)
Cool acronym, plus the "Selective" emphasizes the fact that only highly competent people would be allowed, which I imagine would be desirable for CV appearance.
MIRI has an organizational goal of putting a wider variety of mathematically proficient people in a position to advance our understanding of beneficial smarter-than-human AI.
Sure does. There remains the question of whether it should be emphasising mathematical proficiency so much. MIRI isn't very interested in people who are proficient in actual computer science or AI, which might explain why it spends a lot of time on the maths of computationally intractable systems like AIXI. MIRI isn't interested in people who are proficient in philosophy, leaving it unable either to sidestep the ethical issues that are part of AI safety or to say anything very cogent about them.
What do you feel is bad about moral philosophy? It looks like you dislike it because you place it next to anthropomorphic thinking and technophobia.
MIRI has an organizational goal of putting a wider variety of mathematically proficient people in a position to advance our understanding of beneficial smarter-than-human AI. The MIRIx workshops, our new research guide, and our more detailed in-the-works technical agenda are intended to further that goal.
To encourage the growth of a larger research community where people can easily collaborate and get up to speed on each other's new ideas, we're also going to roll out an online discussion forum that's specifically focused on resolving technical problems in Friendly AI. MIRI researchers and other interested parties will be able to have more open exchanges there, and get rapid feedback on their ideas and drafts. A relatively small group of people with relevant mathematical backgrounds will be authorized to post on the forum, but all discussion on the site will be publicly visible to visitors.
Topics will run the gamut from logical uncertainty in formal agents to cognitive models of concept generation. The exact range of discussion topics is likely to evolve over time as researchers' priorities change and new researchers join the forum.
We're currently tossing around possible names for the forum, and I wanted to solicit LessWrong's input, since you've been helpful here in the past. (We're also getting input from non-LW mathematicians and computer scientists.) We want to know how confusing, apt, etc. you perceive these variants on 'forum for doing exploratory engineering research in AI' to be:
1. AI Exploratory Research Forum (AIXRF)
2. Forum for Exploratory Engineering in AI (FEEAI)
3. Forum for Exploratory Research in AI (FERAI, or FXRAI)
4. Exploratory AI Research Forum (XAIRF, or EAIRF)
We're also looking at other name possibilities, including:
5. AI Foundations Forum (AIFF)
6. Intelligent Agent Foundations Forum (IAFF)
7. Reflective Agents Research Forum (RARF)
We're trying to avoid names like "friendly" and "normative" that could reinforce someone's impression that we think of AI risk in anthropomorphic terms, that we're AI-hating technophobes, or that we're moral philosophers.
Feedback on the above ideas is welcome, as are new ideas. Feel free to post separate ideas in separate comments, so they can be upvoted individually. We're especially looking for feedback along the lines of: 'I'm a grad student in theoretical computer science and I feel that the name [X] would look bad in a comp sci bibliography or C.V.' or 'I'm friends with a lot of topologists, and I'm pretty sure they'd find the name [Y] unobjectionable and mildly intriguing; I don't know how well that generalizes to mathematical logicians.'