Written by Eliezer Yudkowsky

Nick Bostrom is, with Eliezer Yudkowsky, one of the two cofounders of the current field of value alignment theory. Bostrom published a paper singling out the problem of superintelligent values as critical in 1999, two years before Yudkowsky entered the field, which has sometimes led Yudkowsky to say that Bostrom should receive credit for inventing the Friendly AI concept. Bostrom is the founder and director of the Future of Humanity Institute at Oxford, and the author of the popular book Superintelligence, currently the best book-length introduction to the field. Bostrom's academic background is in analytic philosophy; he formerly specialized in anthropic probability theory and transhumanist ethics. Relative to Yudkowsky, Bostrom is more interested in Oracle models of value alignment and in potentially exotic methods of obtaining aligned goals.