GeraldMonroe comments on Dragon Ball's Hyperbolic Time Chamber - Less Wrong

35 Post author: gwern 02 September 2012 11:49PM


Comment author: GeraldMonroe 03 September 2012 05:59:59PM (3 points)

What stops you from making a change that is addictive or self-amplifying? For example, suppose a subtle tweak makes you less averse to making another subtle tweak in the same direction. A few thousand iterations later, your network is trashed. http://lesswrong.com/lw/ase/schelling_fences_on_slippery_slopes/
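The feedback loop described above can be sketched as a toy simulation. Every parameter here (tweak size, aversion decay rate) is a hypothetical illustration, not a claim about real neural edits; the point is only that a tweak which slightly lowers your aversion to the next tweak produces runaway drift:

```python
import random

def drift_simulation(iterations=5000, tweak_size=0.01, seed=0):
    """Toy model: each accepted tweak nudges a trait in one direction
    and also lowers the aversion to accepting the next tweak."""
    rng = random.Random(seed)
    trait = 0.0      # some neural parameter, starting at its baseline
    aversion = 0.9   # probability of REFUSING a tweak, initially high
    for _ in range(iterations):
        if rng.random() >= aversion:                   # tweak accepted
            trait += tweak_size                        # drift one step further
            aversion = max(0.0, aversion - 0.001)      # self-amplifying part:
                                                       # less averse next time
    return trait, aversion

trait, aversion = drift_simulation()
```

With these toy numbers, aversion decays to zero well before the loop ends, after which every tweak is accepted and the trait drifts without limit; this is the "few thousand iterations" failure mode in miniature.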

It seems to me that the only safe way to do this would be to permit only other uploaded entities to make the edits, working in teams, with careful observation and testing of the results. Older versions of yourself might serve as team members.

Also, the hardware design would need to be extremely well thought out, so that it is not possible for someone to Blue Pill attack you without your knowledge, or to directly overwrite your neural structures with someone else's patterns. The hardware would have to be designed with security permissions inherently baked in; here's a blog post where Drexler discusses this:

http://metamodern.com/2011/08/03/quiz-question-what-is-wrong-with-this-model-of-computation/
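The idea of permissions baked into the hardware can be illustrated with a toy sketch. The `ProtectedMemory` class and its key scheme below are entirely hypothetical, standing in for whatever hardware mechanism would gate writes to neural state:

```python
class ProtectedMemory:
    """Toy model of hardware-enforced write permissions: every region
    carries an owner key, and a write without the matching key fails
    instead of silently overwriting the contents."""

    def __init__(self):
        self._regions = {}  # region name -> (owner_key, data)

    def allocate(self, name, owner_key, data):
        self._regions[name] = (owner_key, data)

    def write(self, name, key, data):
        owner_key, _ = self._regions[name]
        if key != owner_key:
            raise PermissionError("write denied: key does not own region")
        self._regions[name] = (owner_key, data)

    def read(self, name):
        return self._regions[name][1]

mem = ProtectedMemory()
mem.allocate("my_neural_weights", owner_key="self", data=[0.1, 0.2])
mem.write("my_neural_weights", key="self", data=[0.1, 0.25])  # permitted
denied = False
try:
    mem.write("my_neural_weights", key="attacker", data=[9, 9])
except PermissionError:
    denied = True  # the overwrite attack fails loudly
```

The design point is that the check lives below the level an attacker's code runs at: software that lacks the key cannot overwrite your neural structures, which is the property the default any-code-can-write-anywhere model of computation lacks.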