CFAR should explicitly focus on AI safety

Along with "Growing EA is net-positive", anything with a large search space + value judgment seems like it's going to have this issue.

Addressing the post, a focus on AI risk feels like something worth experimenting with.

My lame model suggests that the main downside is the risk to the brand. If so, experimenting with AI risk in the CFAR context seems like a potentially high-value avenue of exploration, and brand damage can be mitigated.

For example, if it turned out to be toxic for the CFAR brand, the same group of people could spin off a new program called something else, and people may not remember or care that it was the old CFAR folks.

I want a "wrong question" button!! :/

I'd be interested to know whether you find yourself having that feeling a lot while interacting with claims.

If it's a small minority of the time, I think the solution is a "wrong question" button. If it happens a lot, we might need another object type: something like a prompt-for-discussion rather than a claim-to-be-agreed-with.

Uh, well, it's hard to reply to, or something? Like, it wants to jam the conversation into questions about whether the claim is "true" or "false", instead of questions about what is meant by it, or what third alternatives might be available, or something?

In other words, promoting this claim as worded is misleading?

CFAR should be about "Rationality for its own sake, for the sake of existential risk". Which is totally different. I just, um, haven't figured out how to say the actual thing clearly. Help very welcome.