Nick Bostrom is a philosopher at the University of Oxford and director of the Future of Humanity Institute (FHI), the leading academic institution in its field. His aim, and FHI's, is to think about big questions concerning the progress and future of humankind. Among those questions are: Artificial General Intelligence (AGI), existential risk, biological cognitive enhancement, and whole brain emulation. He has personally raised more than 13 million dollars in research grants, awards, and donations. FHI brings together a wide range of researchers, prominent in their original fields, who have chosen to focus on global questions about human progress, e.g.: Anders Sandberg, computational neuroscientist; Robin Hanson, economist; Toby Ord, philosopher; Milan Cirkovic, astrophysicist.
He also founded the first transhumanist association, the World Transhumanist Association (now Humanity+), in 1998. Bostrom has made several major contributions to fields relevant to transhumanism. His more than 200 published papers have been translated into more than 20 languages. They span topics such as: existential risks – hazards with the potential to destroy the entire human race; cognitive enhancement – developing heuristics for how to safely enhance human cognition through technology; infinitarian ethics – how to act in a universe where no finite action adds net good to an infinitely valuable world; and the anthropic principle – a sounder formalization of anthropic reasoning, in which one must reason as a random member of one's own reference class.
Bostrom has a broad academic background, with graduate degrees (PhD or MSc) in philosophy, physics, and computational neuroscience. One of his philosophy theses entered the Routledge Hall of Fame and formalized the anthropic principle, giving rise to the Strong Self-Sampling Assumption (SSSA): "Each observer-moment should reason as if it were randomly selected from the class of all observer-moments in its reference class". This formalization avoids many of the paradoxes that arise from intuitive versions of the anthropic principle.