Today's post, Billion Dollar Bots, was originally published on November 22, 2008. A summary:


An alternate scenario for the creation of bots, this time involving lots of cloud computing.


Discuss the post here (rather than in the comments to the original post).

This post is part of the Rerunning the Sequences series, where we'll be going through Eliezer Yudkowsky's old posts in order so that people who are interested can (re-)read and discuss them. The previous post was Brain Emulation and Hard Takeoff, and you can use the sequence_reruns tag or RSS feed to follow the rest of the series.

Sequence reruns are a community-driven effort. You can participate by re-reading the sequence post, discussing it here, posting the next day's sequence reruns post, or summarizing forthcoming articles on the wiki. Go here for more details, or to have meta discussions about the Rerunning the Sequences series.


This post is by James Miller, who posted about a year ago that he was writing a book. It's apparently out now, and seems to have received endorsements from some recognizable figures. If anyone here has read it, how worthwhile a read would it be for someone already familiar with the idea of the Singularity?

I'd recommend it. It's not exactly Earth-shattering, but it made a number of interesting points I hadn't encountered before, such as the observation that the mere possibility of a Singularity could itself be an existential risk if people took it seriously enough. For example, if a major nuclear power thought, correctly or incorrectly, that a hostile country was about to build an AI capable of undergoing a hard takeoff, it could use nuclear weapons against that country to prevent the AI's completion, even at the risk of triggering World War III in the process. Among other things, I also found the discussion of sexbots interesting.