Sharp Left Turn
Applied to Superintelligence's goals are likely to be random by Mikhail Samin, 2mo ago
Applied to Moral gauge theory: A speculative suggestion for AI alignment by James Diacoumis, 2mo ago
Applied to "Sharp Left Turn" discourse: An opinionated review by Steven Byrnes, 3mo ago
Applied to Agency overhang as a proxy for Sharp left turn by Eris, 6mo ago
Applied to Has Eliezer publicly and satisfactorily responded to attempted rebuttals of the analogy to evolution? by kaler, 9mo ago
Applied to Response to Quintin Pope's Evolution Provides No Evidence For the Sharp Left Turn by Mateusz Bagiński, 1y ago
Applied to A simple treacherous turn demonstration by Nikola Jurkovic, 1y ago
Applied to [Interview w/ Quintin Pope] Evolution, values, and AI Safety by RobertM, 2y ago
Applied to Evolution Solved Alignment (what sharp left turn?) by MondSemmel, 2y ago
Applied to We don't understand what happened with culture enough by Jan_Kulveit, 2y ago
Applied to A few Alignment questions: utility optimizers, SLT, sharp left turn and identifiability by Bird Concept, 2y ago
Applied to The Sharp Right Turn: sudden deceptive alignment as a convergent goal by avturchin, 2y ago
Applied to Evolution provides no evidence for the sharp left turn by Quintin Pope, 2y ago
Applied to A smart enough LLM might be deadly simply if you run it for long enough by Mikhail Samin, 2y ago
Applied to Reframing inner alignment by Vika, 2y ago
Applied to Victoria Krakovna on AGI Ruin, The Sharp Left Turn and Paradigms of AI Alignment by Raemon, 2y ago
Applied to How is the "sharp left turn" defined? by Raemon, 2y ago