"Why Not Just..."
A compendium of rants about alignment proposals, of varying charitability.
150 · Deep Learning Systems Are Not Less Interpretable Than Logic/Probability/Etc · Ω · johnswentworth · 3y · Ω 55
157 · Godzilla Strategies · Ω · johnswentworth · 3y · Ω 71
98 · Rant on Problem Factorization for Alignment · Ω · johnswentworth · 2y · Ω 53
136 · Interpretability/Tool-ness/Alignment/Corrigibility are not Composable · Ω · johnswentworth · 2y · Ω 12
204 · How To Go From Interpretability To Alignment: Just Retarget The Search · Ω · johnswentworth · 2y · Ω 34
103 · Oversight Misses 100% of Thoughts The AI Does Not Think · Ω · johnswentworth · 2y · Ω 49
81 · Human Mimicry Mainly Works When We’re Already Close · Ω · johnswentworth · 2y · Ω 16
207 · Worlds Where Iterative Design Fails · Ω · johnswentworth · 2y · Ω 30
172 · Why Not Just... Build Weak AI Tools For AI Alignment Research? · Ω · johnswentworth · 2y · Ω 18 · Review
139 · Why Not Just Outsource Alignment Research To An AI? · Ω · johnswentworth · 2y · Ω 49 · Review
149 · OpenAI Launches Superalignment Taskforce · Zvi · 1y · 40