David Pearce was still taking questions as of an hour ago. He gave me a much more thorough answer to my question than I have gotten in the two other AMAs I submitted questions to. Neil Strauss and the Atlantic writer whose name escapes me at the moment both gave me terrible drive-by answers with about two seconds of thought behind them.
Interesting point there; it crystallises a lot of thoughts I've had recently.
First, many apologies, Vaniver, if you didn't find any of the AMA topics of interest. But I did also answer - or at least attempt to answer! - questions on Third World poverty; the reproductive revolution; nootropics; scientifically measuring (un)happiness; cognitive bias; brain augmentation; cryonics; empathy enhancement; polyamory; the (alleged) arrogance of transhumanism; Hugo de Garis; Aubrey de Grey; my contribution to Springer's forthcoming Singularity volume; "hell worlds" and "mangled worlds" in QM; classical versus negative utilitarianism; and meta-ethical anti-realism. If there is an unjustly neglected topic you'd like to see tackled, then I promise I'll respond to the best of my ability.
Psychedelics? The drug-naive may dismiss their intellectual significance on a priori grounds - and one may abstain altogether on grounds of prudence and/or legality. But will an understanding of consciousness yield to rational philosophical analysis alone, or only to a combination of theory harnessed to empirical methods of investigation? Maybe the former; but I think a defensible case can be made that rigorous experimentation is vital, despite the methodological pitfalls.
Nonhuman animals? Well, the plight of sentient but cognitively humble creatures might seem of limited interest to a community of rationalists like Less Wrong. But let's assume that we take the problem of [human-] Friendly AI seriously. Isn't the fate of sentient but comparatively cognitively humble creatures in relation to vastly superior intelligence precisely the issue at stake? Should superintelligence(s) care about the well-being of their intellectual inferiors any more than most humans do? And if so, why?
I apologize for the hyperbole; "entirely" was not the word I should have used. As for the questions I had, I thank you for the opportunity to ask them, but if I had specific ones I would have asked them there, rather than complaining to someone else.
I am not personally interested in the question of consciousness, and so while I can appreciate the potential scientific value of psychedelics, I don't feel the fascination that the subject holds for many.
I'm also not very interested in moral issues phrased as plights, though I know others here are. The world arranges itself from the bottom up, not the top down. (I am planning to switch from chicken to algae, but for health and related reasons rather than moral ones.)
The only part of the world to which one has direct, non-inferential access is the contents of one's own conscious mind. Abundant evidence from experimental psychology suggests we often confabulate even here. Even if one is uninterested in the topic of consciousness per se, I think it's worth investigating how the properties of the medium infect the supposed propositional content of what one is saying. Thus in the altered state of consciousness known as dreaming, for instance, a scientific rationalist may make all sorts of cognitive errors; yet the nature of the cardinal error is (normally) elusive. So how will posthuman superintelligence regard what humans mostly take for granted, namely "ordinary waking consciousness" - the state of consciousness in which we pursue the enterprise of scientific knowledge? Or as Einstein put it more poetically, "What does the fish know of the sea in which it swims?"
"Plight"? Perhaps I should have used a more cumbersome but emotionally neutral synonym: a difficult, subjectively distressing or dangerous situation. I wasn't intending to add "plights" to our ontology of the world!
Transhumanist philosopher David Pearce co-founded Humanity+ with Nick Bostrom.
He is currently answering questions in an AMA on reddit/r/transhumanism.