Books of LessWrong
Alignment & Agency
155 · An Orthodox Case Against Utility Functions · abramdemski · 5y · Ω 65
128 · The Pointers Problem: Human Values Are A Function Of Humans' Latent Variables · johnswentworth · 4y · Ω 49
174 · Alignment By Default · johnswentworth · 4y · Ω 96
213 · An overview of 11 proposals for building safe advanced AI · evhub · 5y · Ω 36
247 · The ground of optimization · Alex Flint · 4y · Ω 80
108 · Search versus design · Alex Flint · 4y · Ω 40
181 · Inner Alignment: Explain like I'm 12 Edition · Rafael Harth · 4y · Ω 47
83 · Inaccessible information · paulfchristiano · 5y · Ω 17
128 · AGI safety from first principles: Introduction · Richard_Ngo · 4y · Ω 18
294 · Is Success the Enemy of Freedom? (Full) · alkjash · 4y · 69