Interesting exercise in an AI-adjacent forecasting area (brain-computer interfaces). Curious whether people want to specify some possible reveals + probabilities. https://twitter.com/neuralink/status/1149133717048188929 (if you're relying on inside info, which seems somewhat likely, please mention it)
Since LW is an intellectual **community** where people get to know each other and learn "who knows what, who's working on what," tagging people FB-style in specific posts/comments seems like very low-hanging fruit. (Of course you should be able to block people from tagging you...
Interesting case study making the rounds on ML social media, supporting the thesis that rationality techniques are useful in actual ML research. Several implicit references, and an explicit reference to CFAR at the bottom.
Interesting quote from a Talking Machines episode about Bayesian stats being used at Bletchley and GCHQ (its successor). Seems like they held on to a possibly significant advantage (crypto people would be better placed to comment on this) for years, owing largely to Turing. (The rest of the episode is about AI safety...
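For the curious: the core of what Turing formalized at Bletchley (the "ban"/"deciban" of Banburismus) is sequential Bayesian evidence accumulation, where independent observations add log-likelihood ratios until the posterior odds clear a decision threshold. Below is a minimal, purely illustrative sketch of that idea; the toy likelihood ratios and the 20-deciban (100:1 odds) threshold are my assumptions, not anything from the episode.

```python
import math

def decibans(likelihood_ratio):
    """Evidence weight in decibans: 10 * log10(P(obs|H1) / P(obs|H0))."""
    return 10 * math.log10(likelihood_ratio)

def sequential_test(observations, prior_odds=1.0, threshold_db=20.0):
    """Accumulate evidence for H1 over H0 until the posterior odds clear
    the threshold (20 db = 100:1 odds). Each observation is a likelihood
    ratio P(obs|H1) / P(obs|H0); observations are assumed independent."""
    total_db = decibans(prior_odds)
    for i, lr in enumerate(observations, 1):
        total_db += decibans(lr)  # independent evidence adds in log space
        if abs(total_db) >= threshold_db:
            verdict = "accept H1" if total_db > 0 else "accept H0"
            return i, total_db, verdict
    return len(observations), total_db, "undecided"

# Toy run: each observation is 1.5x likelier under H1 than H0
obs = [1.5] * 30
n, db, verdict = sequential_test(obs)
print(f"after {n} observations: {db:+.1f} db, {verdict}")
```

The deciban framing is just base-10 log-odds; the practical point is that weak independent clues add up linearly, which is what made the method workable by hand.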
http://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/montreal-institute-learning-algorithms-ai-safety-research The really good news is that Yoshua Bengio is leading this (he is extremely credible in the modern AI/deep learning world), and this is a pretty large change of mind for him. When I spoke to him at a conference 3 years ago, he was pretty dismissive of the whole...
There is a fairly large contingent of safety-research-oriented people at NIPS this year. I'm unfortunately not among them, but if you're there and interested in connecting with others on AI safety topics or other LW interests (general rationality, EA, etc.), I welcome you to make this thread...