RobbBB comments on Open thread, Aug. 17 - Aug. 23, 2015 - Less Wrong Discussion
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Eliezer is the only staff we still have around from 2010, and I'm not sure what he'd say his biggest updates have been. I believe he's shifted significantly in the direction of thinking that the best option is to develop AI that's high-capability and safe but has limited power and autonomy (e.g., Bostrom's 'genie AI' as opposed to 'sovereign AI'), which is interesting.
I came on at the end of 2013, so I've observed that MIRI staff were very surprised by how quickly people started taking AI more seriously and discussing it more publicly over the last year -- how positive the reception to Superintelligence was, how successful the FLI conference was, etc. Also, I know that Nate now assigns moderate probability to the development of smarter-than-human AI systems being an event that plays out on the international stage, rather than one that takes most of the world by surprise.
Nate also mentioned on the EA Forum that Luke learned (and passed on to him) a number of lessons from SIAI's old mistakes:
Aside from the impact of FLI etc., I'd guess MIRI's median beliefs have changed at least as much due to our staff changing as due to updates by individual staff members. Some new staff have longer AI timelines than Eliezer, assign higher probability to multipolar outcomes, etc. (I think Eliezer's timelines lengthened too, but I could be wrong there.)