LESSWRONG
Why Not Try Build Safe AGI?
Copy-pasted from my one-on-ones with AI Safety researchers:
Why mechanistic interpretability does not and cannot contribute to long-term AGI safety (from messages with a friend)
Remmelt, 2y
List #1: Why stopping the development of AGI is hard but doable
Remmelt, 2y
List #2: Why coordinating to align as humans to not develop AGI is a lot easier than, well... coordinating as humans with AGI coordinating to be aligned with humans
Remmelt, 2y
List #3: Why not to assume on prior that AGI-alignment workarounds are available
Remmelt, 2y