
Wei_Dai comments on Stupid Questions Open Thread Round 4 - Less Wrong Discussion

Post author: lukeprog 27 August 2012 12:04AM




Comment author: Xachariah 28 August 2012 05:44:18AM 1 point

Thank you for the links, they were exactly what I was looking for.

As for friendly upload FOOMs, I consider the chance of them happening at random about equivalent to FIA happening at random.

Comment author: Wei_Dai 28 August 2012 07:33:49AM 4 points

> As for friendly upload FOOMs, I consider the chance of them happening at random about equivalent to FIA happening at random.

(I guess "FIA" is a typo for "FAI"?) Why talk about "at random" if we are considering which technology to pursue as the best way to achieve a positive Singularity? From what I can tell, the dangers involved in an upload-based FOOM are limited and foreseeable, and we at least have ideas to solve all of them:

  1. unfriendly values in scanned subject (pick the subject carefully)
  2. inaccurate scanning/modeling (do a lot of testing before running upload at human/superhuman speeds)
  3. value change as a function of subjective time (periodic reset)
  4. value change due to competitive evolution (take over the world and form a singleton)
  5. value change due to self-modification (after forming a singleton, research self-modification and other potentially dangerous technologies such as FAI thoroughly before attempting to apply them)

Whereas FAI could fail in a dangerous way as a result of incorrectly solving one of many philosophical and technical problems (a large portion of which we are still thoroughly confused about) or due to some seemingly innocuous but erroneous design assumption whose danger is hard to foresee.

Comment author: Benja 31 August 2012 09:27:00AM 0 points

Wei, do you assume uploading capability would stay local for long stretches of subjective time? If yes, why? (WBE seems to require large-scale technological development, which I'd expect to be fueled by many institutions buying the tech and thus fueling progress -- compare genome sequencing -- so I'd expect multiple places to have the same currently-most-advanced systems at any point in time, or at least to be close to the bleeding edge.) If no, why expect the uploads that go FOOM first to be ones that work hard to improve chances of friendliness, rather than ones primarily working hard to be the first to FOOM?

Comment author: Wei_Dai 31 August 2012 05:40:20PM 0 points

> Wei, do you assume uploading capability would stay local for long stretches of subjective time?

No, but there are ways for this to happen that seem more plausible to me than what's needed for FAI to be successful, such as a Manhattan-style project by a major government that recognizes the benefits of obtaining a large lead in uploading technology.

Comment author: Benja 31 August 2012 07:28:25PM 0 points

Ok, thanks for clarifying!