Researchers in value alignment theory
Written by Eliezer Yudkowsky, paulfchristiano, et al. Last updated 23 February 2016.
This page lists researchers in AI alignment.
Eliezer Yudkowsky (founder, MIRI)
Nick Bostrom (founder, FHI)
3q (MIRI; parametric polymorphism, the Procrastination Paradox, and numerous other developments in Vingean reflection)
Orthonormal (MIRI; modal agents)
StuartArmstrong (FHI; utility indifference)
Paulfchristiano (UC Berkeley; approval-directed agents; previously proposed a formalization of indirect normativity)
StuartRussell (UC Berkeley; author of Artificial Intelligence: A Modern Approach; previously published on theories of reflective optimality; currently interested in inverse reinforcement learning)
Jessicat (MIRI; reflective oracles)
Andrew Critch (MIRI)
Scott Garrabrant (MIRI; logical probabilities)
So8res (previously a MIRI researcher, now Executive Director of MIRI)
Parents: AI alignment
Children: Nick Bostrom