If it's worth saying, but not worth its own post (even in Discussion), then it goes here.
Notes for future OT posters:
1. Please add the 'open_thread' tag.
2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)
3. Open Threads should be posted in Discussion, and not Main.
4. Open Threads should start on Monday, and end on Sunday.
As I read through the Agenda, I can hear Anna Salamon telling me something along the lines of: if you think something is a rational course of action, the antecedents to that course must necessarily be rational, or you are wrong. She doesn't explain it like that, and I can't find that popular thread, but whatever...
Now, reviewing the research agenda, there are some things that concern me about their approach to problem solving. I'd appreciate anyone's input, challenges, clarifications, and additions:
Nice sound bite. No quarrel with this; just wanted to point it out.
For the same reason, I won't delegate the design of friendly AI to strangers at MIRI alone ;)
This is the critical assumption behind MIRI's approach. Is there any reason to believe it holds?
Shouldn't establishing this be the very first item in the research agenda, before jumping into problems they assume are solvable? In fact, shouldn't the absence of evidence that they are solvable count as evidence of absence... no?
Has it been demonstrated anywhere that formalisms are optimal for exception handling?
Is this a legitimate forced choice between pure mathematics and gut-level intuition plus testing?
MIRI alleges that a formal understanding is necessary for robust AI control, then defines formality as follows:
So first, why aren't they disproving Rice's theorem?
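(For readers who haven't met it, and to make the jab above concrete: Rice's theorem is the standard result that every nontrivial semantic property of programs is undecidable. The statement below is the textbook form, not anything quoted from the agenda.)

\[
I_P \;=\; \{\, e \in \mathbb{N} \;:\; \varphi_e \in P \,\} \ \text{ is undecidable for every nontrivial property } P,
\]

where \(\varphi_e\) is the partial computable function computed by the program with index \(e\), and "nontrivial" means some computable function has property \(P\) and some does not. In particular, no single algorithm can take an arbitrary program and decide a behavioural property like "this program is safe"; a formal guarantee can only apply to programs built in some restricted, analysable form, not to programs in general.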
Okay, show me some data from a very well-designed experiment suggesting that theory should come first for the safe development of technology.
Honestly, all the MIRI maths and formal-logic fetishism left me impressed and awestruck. But I feel like their methodological integrity isn't tight; I reckon they need some quality statisticians and experiment designers to step in. On the other hand, MIRI runs a very, very good ship. They market well, fundraise well, build their movement and community well, design well, write okay now (but not in the past!), they even get shit done, and they bring together very, very good abstract reasoners. And they have been instrumental, through LessWrong, in turning my life around.
In good faith, Clarity, still trying to be the in-house red team and failing slightly less at it one post at a time.
Lots of this going on in the big wide world. Consider looking in more places to deal with selection bias issues.