whpearson comments on Open Thread: December 2009 - Less Wrong

3 Post author: CannibalSmith 01 December 2009 04:25PM




Comment author: whpearson 03 December 2009 11:36:17AM 2 points

The problem with the specific scenario given (experimental modification/duplication rather than careful proof-based modification) is that it is liable to have the same problem we have when creating systems this way: the copies might not do what the agent that created them wants.

This could lead to a splintering of the AI, and to in-fighting over computational resources.

It also makes the standard assumption that AI will be implemented on, and stable on, a von Neumann-style computing architecture.

Comment author: Nick_Tarleton 03 December 2009 05:51:58PM 0 points

It also makes the standard assumption that AI will be implemented on, and stable on, a von Neumann-style computing architecture.

Of course, if it's not, it could port itself to such an architecture if doing so is advantageous.

Comment author: whpearson 04 December 2009 12:40:17AM 0 points

Would you agree that one possible route to uFAI is human inspired?

Human-inspired systems might have the same or a similarly high fallibility rate as humans (from emulating neurons, or just from random experimentation at some level), and giving such a system access to its own machine code and low-level memory would not be a good idea: most changes are likely to be bad.

So if an AI did manage to port its code, it would have to find some way of preventing or discouraging the copied AI on the x86-based architecture from playing with the ultimate mind-expanding/destroying drug that is machine-code modification. This is what I meant by stability.

Comment author: [deleted] 06 December 2009 05:56:16AM 0 points

Er, I can't really give a better rebuttal than this: http://www.singinst.org/upload/LOGI//levels/code.html

Comment author: whpearson 06 December 2009 09:54:35AM 0 points

What point are you rebutting?

Comment author: [deleted] 09 December 2009 03:06:43AM 0 points

The idea that changes to a human-style mind are more likely to be bad than changes of equal magnitude to a von Neumann-style mind.

Comment author: whpearson 09 December 2009 09:48:46AM 0 points

Most random changes to a von Neumann-style mind would be bad as well.

It's just that a von Neumann-style mind is unlikely to make the random mistakes that we do, or at least that is Eliezer's contention.
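The claim that most random changes to program code are bad can be illustrated concretely. The following is a toy sketch, not anything from the thread: the stack machine, opcodes, and six-byte program are all invented for illustration. It enumerates every single-bit flip of a tiny program and counts how many mutants still compute the right answer (a safe stand-in for mutating real machine code, which could crash the host process).

```python
HALT, PUSH, ADD = 0, 1, 2  # made-up opcodes for a toy stack machine

def run(program, max_steps=100):
    """Interpret `program`; return the top of the stack when HALT executes."""
    stack, pc = [], 0
    for _ in range(max_steps):
        if pc >= len(program):
            raise RuntimeError("ran off the end of the program")
        op = program[pc]
        if op == HALT:
            return stack[-1]              # IndexError if the stack is empty
        elif op == PUSH:
            if pc + 1 >= len(program):
                raise RuntimeError("PUSH missing its operand")
            stack.append(program[pc + 1])
            pc += 2
        elif op == ADD:
            a, b = stack.pop(), stack.pop()  # IndexError on stack underflow
            stack.append(a + b)
            pc += 1
        else:
            raise RuntimeError(f"invalid opcode {op}")
    raise RuntimeError("step limit exceeded")

program = bytes([PUSH, 2, PUSH, 3, ADD, HALT])   # computes 2 + 3 = 5
expected = run(program)

# Try every single-bit mutation of the program.
still_correct, total = 0, 0
for i in range(len(program)):
    for bit in range(8):
        mutant = bytearray(program)
        mutant[i] ^= 1 << bit
        total += 1
        try:
            if run(bytes(mutant)) == expected:
                still_correct += 1
        except Exception:
            pass  # crashed or never halted: definitely not correct

print(f"{still_correct}/{total} single-bit mutants still compute {expected}")
# → 0/48 single-bit mutants still compute 5
```

Here every one of the 48 single-bit flips either crashes the interpreter or computes the wrong value, which is the point being made: in a dense instruction encoding, nearly all random perturbations are harmful.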

Comment author: [deleted] 10 December 2009 04:07:16AM 0 points

I can't wait until there are uploads around to make questions like this empirical.