
ArisKatsaris comments on December 2015 Media Thread - Less Wrong Discussion

4 Post author: ArisKatsaris 01 December 2015 09:35PM




Comment author: ArisKatsaris 01 December 2015 09:36:40PM 0 points

Nonfiction Books Thread

Comment author: moridinamael 01 December 2015 09:58:45PM * 3 points

I wrote a review of Superintelligence: Paths, Dangers, Strategies. It's also an essay about how the halo effect shapes the way ideas are perceived.

Comment author: Artaxerxes 02 December 2015 04:44:29PM 1 point

This interlude is included despite the fact that Hanson’s proposed scenario is in contradiction to the main thrust of Bostrom’s argument, namely, that the real threat is rapidly self-improving A.I.

I can't say I agree with your reasoning about why Hanson's ideas are in the book. I think the book's content is written with accuracy in mind first and foremost, and I think Hanson's ideas are there because Bostrom thinks they're genuinely a plausible direction the future could take, especially in circumstances where recursively self-improving AI of the kind traditionally envisioned turns out to be unlikely, difficult, or impossible for whatever reason. I don't think those ideas are there in an effort to mine the halo effect.

And really, the book's main thrust is in the title: Paths, Dangers, Strategies. Even if these outcomes are not necessarily mutually exclusive (including the possibility of singletons forming out of initially multipolar outcomes, as discussed from p. 176 onwards), talking about potential pathways is very obviously relevant, I would have thought.

Comment author: moridinamael 02 December 2015 08:34:01PM 2 points

I think that we are both right.

Hypothetically, if there were some famous university professor who had written at length about the possibility of, I dunno, simulated superintelligent ant hives, then I think that Bostrom might have felt compelled to include a discussion of the "superintelligent ant hive hypothesis" in his book. He's striving for completeness, at least in terms of his coverage of high-level aspects of the A.I. Risk landscape. It would also be a huge slight to the theory's originator if he left out any reference to the "superintelligent ant hive hypothesis". And finally, Bostrom probably doesn't want to place himself in the position of arbiter of which ideas get to be taken seriously, when lots of people probably think of lots of parts of A.I. Risk as loony already.

So, I don't think Bostrom was sitting in his office plotting how to make his book a weaponized credulity meme. But I also felt, from my own perspective, that the inclusion of the Hanson stuff was just a bit forced.

Comment author: Artaxerxes 02 December 2015 08:41:43PM * 1 point

Yeah, I pretty much agree, but the important point is that any superintelligent ant hive hypothesis would have to be at least as plausible and relevant to the book's topic as Hanson's ems to make it in. Note that Bostrom dismisses brain-computer interfaces as a pathway to superintelligence fairly quickly.

Comment author: gwern 02 December 2015 10:07:37PM 2 points
Comment author: Romashka 19 December 2015 03:59:11PM 0 points

M. Atwater, The Avalanche Hunters. Philadelphia: Macrae Smith Co., 1968. (Russian translation: M. Atwater, The Avalanche Hunters, 2nd ed., Moscow: Mir, 1980.) A wonderful memoir; it reminds me a bit (in spirit, not style) of Kipling's The Head of the District and The Bridge-Builders. Contains examples of real-life problems - risking many lives to save one - with a consequentialist moral.