Comments

Dande10

It’s true that we’re mostly in this situation because certain people heard about the arguments for risk and either came up with terrible solutions to them or smelled a potent fount of personal power.

I'm a little new to the AI alignment field-building effort: would you put the head researchers at OpenAI in this category?