Curiouskid comments on CFAR in 2014: Continuing to climb out of the startup pit, heading toward a full prototype - Less Wrong
This would imply that CFAR should be pitching its workshops to academics and government policymakers. Not to be a dick, but the latest local-mobile-social app-kerjigger is not intensive cognitive labor with a high impact on the world. Actual scientific research and public policy-making are (or, at least, scientific research is fairly intensive cognitive labor... I wouldn't necessarily say it has a high mean impact on any per-unit basis).
I would hope so! But what information indicates CFAR does this?
That's good, but I worry that it doesn't go far enough. The issue is not that we're failing to teach probability theory to kindergartners -- they don't need it and don't want it. The issue is that our society allows people to walk around thinking that there isn't actually an external world to which their actions will be held accountable at all, and that subjective feeling both governs reality and normatively dictates correct actions.
To make an offensive political quip: there is the assertion-based community, and the reality-based community; too many people belong to the former and not nearly enough to the latter. The biggest impact we can have on "raising the sanity waterline" is to move people from the group who believe in a Fideist Theory of Truth ("Things are true by virtue of how I feel about them") to the group who believe in the Correspondence Theory of Truth ("Things are true when they match the world outside my head!"), which in turn disposes people to listen to educated domain experts at all.
To give a flagrantly stupid example, we really really really don't want society's way of dealing with the Friendly AI problem determined by people who believe that AIs have souls and would never harm anyone because they don't have original sin. Giving Silicon Valley executives effectiveness workshops will not avert this problem; teaching the broad public the very basic worldview that the universe is lawful, rather than consciously optimizing for recognizably humanoid goals, is far more likely to help with it.
My understanding is that CFAR is attended by both present and likely future academics; I don't know about government policymakers. (I've met people on national advisory boards from at least two countries at CFAR workshops, but I don't pretend to know how much influence they have on those boards, or how much influence those boards have on actual policy.)
At the time of writing this comment, there are 14 startups listed in the post. How many of them would you consider local-mobile-social apps? (This seems to be an example of "not to be X" signifying "I am aware I am being an X but would like to avoid paying the relevant penalty.")
I have always gotten the impression from them that they want to be as cause agnostic as is reasonable, but I can't speak to their probability estimates over time and thus how they've updated.
Are there people working on a reproducible system to help people make this move? It's not at all obvious to me that this would be the comparative advantage of the people at CFAR. (Though it seems to me that much of the CFAR material is helping people finish making that transition, or, at least, get further along it.)