thomblake comments on Changing accepted public opinion and Skynet - Less Wrong

15 [deleted] 22 May 2009 11:05AM




Comment author: thomblake 22 May 2009 02:55:25PM 1 point [-]

Seems plausible. However, under this model there's still room for self-improvement using something like genetic algorithms; that is, it could make small, random tweaks, then find and implement the best ones in much less time than humans could. Then it could still be recursively self-improving.
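The tweak-test-keep loop described above can be sketched as a simple random hill climb. This is a minimal, hypothetical illustration; the objective function, parameter representation, and step size are all made up for the example, not drawn from the discussion:

```python
import random

random.seed(0)  # fixed seed so the sketch is reproducible

def mutate(params, scale=0.1):
    """Make one small, random tweak to a single parameter."""
    tweaked = list(params)
    i = random.randrange(len(tweaked))
    tweaked[i] += random.uniform(-scale, scale)
    return tweaked

def hill_climb(score, params, rounds=2000):
    """Test each random tweak and keep it only if it scores better."""
    best, best_score = params, score(params)
    for _ in range(rounds):
        candidate = mutate(best)
        s = score(candidate)
        if s > best_score:
            best, best_score = candidate, s
    return best, best_score

# Toy objective with a single peak at (3, -1); score 0 at the peak.
objective = lambda p: -(p[0] - 3) ** 2 - (p[1] + 1) ** 2
best, s = hill_climb(objective, [0.0, 0.0])
```

Since only tested improvements are kept, the score never gets worse, which is the sense in which the system improves without ever "understanding" its own design.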

A lot of us think this scenario is much more likely. Mostly those on the side of Chaos in a particular Grand Narrative. Plug for The Future and its Enemies - arguably one of the most important works in political philosophy from the 20th century.

Comment author: whpearson 22 May 2009 03:30:59PM 0 points [-]

That is much weaker than the type of RSI that is supposed to cause FOOM. For one, you are only altering software, not hardware; for another, I don't think a system that replaces itself with a random variation, even a tested one, will necessarily be better if it doesn't understand itself. Random alterations may cause madness, introduce bugs, or cause other problems a long time after the change.

Comment author: thomblake 22 May 2009 03:38:01PM 0 points [-]

Random alterations may cause madness, introduce bugs, or cause other problems a long time after the change.

Note: Deliberate alterations may also cause madness or introduce bugs or other problems a long time after the change.

Comment author: whpearson 22 May 2009 03:56:36PM 1 point [-]

The idea with Eliezer-style RSI is to make only alterations that are formally proved good.