Nick Bostrom is a philosopher at the University of Oxford and director of the Future of Humanity Institute (FHI), the leading academic institution in this field. As director, he coordinates and conducts research on issues crucial to the progress and future of humanity, among them: Artificial General Intelligence (AGI), existential risk, biological cognitive enhancement, and whole brain emulation. He has personally raised more than 13 million dollars in research grants, awards, and donations.
He also co-founded the first transhumanist organization, the World Transhumanist Association (now Humanity+), in 1998. Bostrom has made several major contributions to fields relevant to transhumanism. His more than 200 published papers have been translated into more than 20 languages, and span topics such as:
* Existential risks – hazards with the potential to destroy the entire human race. He was the first to define the concept, to draw attention to its great ethical relevance, and to untangle its particular difficulties.
* Cognitive enhancers – developing heuristics for how to safely enhance human cognition through technology.
* Infinitarian ethics – how to act in a universe where no finite action can add any good to an infinite world (see the sketch after this list).
* Anthropic principle – a sounder formalization of the anthropic principle, under which one must reason as if one were a random member of one's own reference class.
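The difficulty behind the infinitarian ethics item can be stated in one line. This is a minimal sketch, not taken from the source, using the naive aggregative picture that Bostrom's paper "Infinite Ethics" examines:

```latex
% If the world's total value W is already infinite, then for any
% finite amount of good (or harm) c that an action could produce:
W + c = \infty + c = \infty = W
% On a naive aggregative view, no feasible action changes the total
% value of the world, so every option comes out ethically equivalent
% ("infinitarian paralysis").
```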
Bostrom graduated in philosophy and holds a PhD or MSc in each of philosophy, physics, and computational neuroscience. His philosophy dissertation entered the Routledge Hall of Fame and formalized the anthropic principle, giving birth to the Strong Self-Sampling Assumption (SSSA): "Each observer-moment should reason as if it were randomly selected from the class of all observer-moments in its reference class". This formalization avoids many of the paradoxes that arise from intuitive versions of the anthropic principle. The kind of reasoning developed in the dissertation later led to many other important insights, such as unveiling the difficulties in assessing existential risks, and to the Simulation Argument. The latter is a carefully constructed argument showing that it is far more likely than commonly presupposed that we are living inside a computer simulation.
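The core of the Simulation Argument can be sketched as a single fraction, following the notation of Bostrom's 2003 paper "Are You Living in a Computer Simulation?": f_P is the fraction of human-level civilizations that reach a posthuman stage, f_I the fraction of posthuman civilizations interested in running ancestor simulations, and N̄ the average number of such simulations they run.

```latex
f_{\mathrm{sim}} \;=\; \frac{f_P \, f_I \, \bar{N}}{f_P \, f_I \, \bar{N} + 1}
```

Since N̄ would be astronomically large for any civilization that runs such simulations at all, at least one of the following must hold: f_P is close to 0, f_I is close to 0, or f_sim is close to 1; in the last case, almost all observers with human-type experiences live inside simulations.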