Aligned AI Proposals
• Applied to [Linkpost] Building Altruistic and Moral AI Agent with Brain-inspired Affective Empathy Mechanisms by Gunnar_Zarncke 2mo ago
• Applied to How might we solve the alignment problem? (Part 1: Intro, summary, ontology) by Gunnar_Zarncke 2mo ago
• Applied to AI Alignment via Slow Substrates: Early Empirical Results With StarCraft II by Lester Leong 2mo ago
• Applied to A Nonconstructive Existence Proof of Aligned Superintelligence by Roko 3mo ago
• Applied to Lifelogging for Alignment & Immortality by Dev.Errata 4mo ago
• Applied to Toward a Human Hybrid Language for Enhanced Human-Machine Communication: Addressing the AI Alignment Problem by Andndn Dheudnd 4mo ago
• Applied to aimless ace analyzes active amateur: a micro-aaaaalignment proposal by lemonhope 5mo ago
• Applied to A "Bitter Lesson" Approach to Aligning AGI and ASI by RogerDearnaley 6mo ago
• Applied to Slowed ASI - a possible technical strategy for alignment by Lester Leong 6mo ago
• Applied to Why entropy means you might not have to worry as much about superintelligent AI by Ron J 7mo ago
• Applied to How to safely use an optimizer by Mateusz Bagiński 9mo ago
• Applied to Strong-Misalignment: Does Yudkowsky (or Christiano, or TurnTrout, or Wolfram, or…etc.) Have an Elevator Speech I’m Missing? by Benjamin Bourlier 9mo ago
• Applied to Alignment in Thought Chains by Faust Nemesis 10mo ago
• Applied to Update on Developing an Ethics Calculator to Align an AGI to by sweenesm 10mo ago
• Applied to Requirements for a Basin of Attraction to Alignment by RogerDearnaley 11mo ago
• Applied to Alignment has a Basin of Attraction: Beyond the Orthogonality Thesis by RogerDearnaley 11mo ago
• Applied to Proposal for an AI Safety Prize by sweenesm 11mo ago