Risks of Astronomical Suffering (S-risks)
• Applied to If AI starts to end the world, is suicide a good idea? by IlluminateReality 5mo ago
• Applied to S-Risks: Fates Worse Than Extinction by aggliu 7mo ago
• Applied to AE Studio @ SXSW: We need more AI consciousness research (and further resources) by Cameron Berg 8mo ago
• Applied to Complexity of value but not disvalue implies more focus on s-risk. Moral uncertainty and preference utilitarianism also do. by Adam Zerner 9mo ago
• Applied to Old man's story by RomanS 1y ago
• Applied to Taking Into Account Sentient Non-Humans in AI Ambitious Value Learning: Sentientist Coherent Extrapolated Volition by Adrià Moret 1y ago
• Applied to Sentience Institute 2023 End of Year Summary by michael_dello 1y ago
• Applied to Making AIs less likely to be spiteful by Maxime Riché 1y ago
• Applied to Rosko’s Wager by Wuksh 2y ago
• Applied to (Crosspost) Asking for online calls on AI s-risks discussions by jackchang110 2y ago
• Applied to Briefly how I've updated since ChatGPT by rime 2y ago
• Applied to The Security Mindset, S-Risk and Publishing Prosaic Alignment Research by lukemarks 2y ago
• Applied to How likely are scenarios where AGI ends up overtly or de facto torturing us? How likely are scenarios where AGI prevents us from committing suicide or dying? by JohnGreer 2y ago
• Applied to How likely do you think worse-than-extinction type fates to be? by span1 2y ago