Adaptation Executors
Applied to 7. Evolution and Ethics by RogerDearnaley 1y ago
Applied to Motivating Alignment of LLM-Powered Agents: Easy for AGI, Hard for ASI? by RogerDearnaley 1y ago
Applied to Satisficers want to become maximisers by JenniferRM 2y ago
Applied to Not another bias! by Lionel 2y ago
Applied to self-improvement-executors are not goal-maximizers by bhauth 2y ago
Applied to Could evolution produce something truly aligned with its own optimization standards? What would an answer to this mean for AI alignment? by No77e 2y ago
aag v1.25.0 Oct 10th 2022 GMT (+23/-25)
aag v1.24.0 Oct 10th 2022 GMT (+2)
Applied to Deprecated: Some humans are fitness maximizers by Shoshannah Tekofsky 3y ago
Applied to Humans aren't fitness maximizers by Multicore 3y ago
Applied to Deliberation Everywhere: Simple Examples by Oliver Sourbut 3y ago
[anonymous] v1.23.0 Mar 23rd 2022 GMT (-5)
[anonymous] v1.22.0 Mar 23rd 2022 GMT (+5)
Applied to Motivations, Natural Selection, and Curriculum Engineering by Oliver Sourbut 3y ago
Applied to Some real examples of gradient hacking by Oliver Sourbut 3y ago
Applied to Fisherian Runaway as a decision-theoretic problem by Bunthut 4y ago
Applied to Representative democracy awesomeness hypothesis 4y ago
Applied to If it looks like utility maximizer and quacks like utility maximizer... 4y ago
Applied to Religion as Goodhart by Multicore 5y ago
Applied to Cynicism in Ev-Psych (and Econ?) by Multicore 5y ago