jacob_cannell comments on [Link] Introducing OpenAI - Less Wrong Discussion
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Comments (48)
With Sam Altman (President of Y Combinator) talking so much about AI safety and risk over the last 2-3 months, I was sure he was working out a deal to fund MIRI. I wonder why they decided to create their own non-profit instead.
Although on second thought, they're aiming for different goals. While MIRI is focused on safety once strong AI arrives, OpenAI is trying to actually accelerate research toward strong AI.
In practice MIRI is more think tank than research organization. AFAIK MIRI doesn't even claim to have a clear research agenda that leads to practical safe AGI yet. Their research is more abstract/theoretical/pie in the sky and much harder to measure. Given that numerous AI safety think tanks already exist, creating a new non-profit that does actual research makes sense - it fills an empty niche. Creating a fresh organization also gives the founders more control and lets them staff it with researchers they believe in.