I don't think this situation can really be described as a trick.
The way I see it, the main services publishers provide are distribution, marketing, and, to a lesser extent, editing. Self-publish or go with a vanity publisher, and you're going to have a harder time getting into bookstores and other content distributors, not only because you haven't gone through their filters but also because you're not playing the usual game. But that just means you need to establish the book's worth yourself. The typical reader won't be able to tell the difference; the catch is that reaching that reader requires jumping through hoops more or less equivalent to what a publisher would be doing for you. And popularity is, of course, a vindication all its own (there have been successful self-published books, albeit not many).
Now, if the question were whether it's ethical to claim the status you'd get from being picked up by a major publisher ("I'm a published author!"), then I'd be right there with you. But I don't think that having a vanity-published book in the wild, or even pointing people to it, is equivalent to making that claim.
We're pleased to announce the release of "Smarter Than Us: The Rise of Machine Intelligence", commissioned by MIRI and written by Oxford University's Stuart Armstrong. It is available in EPUB, MOBI, and PDF formats, as well as from the Amazon and Apple ebook stores.
Can we instruct AIs to steer the future as we desire? What goals should we program into them? It turns out this question is difficult to answer! Philosophers have tried for thousands of years to define an ideal world, but there remains no consensus. The prospect of goal-driven, smarter-than-human AI gives moral philosophy a new urgency. The future could be filled with joy, art, compassion, and beings living worthwhile and wonderful lives—but only if we’re able to precisely define what a “good” world is, and skilled enough to describe it perfectly to a computer program.
AIs, like computers, will do what we say—which is not necessarily what we mean. Getting an AI to do what we mean requires encoding the entire system of human values: explaining them to a mind that is alien to us, defining every ambiguous term, clarifying every edge case. Moreover, our values are fragile: in some cases, if we mis-define a single piece of the puzzle—say, consciousness—we end up with roughly 0% of the value we intended to reap, rather than 99% of it.
Though an understanding of the problem is only beginning to spread, researchers from fields ranging from philosophy to computer science to economics are working together to conceive and test solutions. Are we up to the challenge?
Special thanks to all those at the FHI, MIRI, and Less Wrong who helped with this work, and to those who voted on the name!