Madbadger comments on A Nightmare for Eliezer - Less Wrong

Post author: Madbadger 29 November 2009 12:50AM

Comment author: wedrifid 29 November 2009 02:16:04AM 2 points

A trivial problem.

  1. Recursively self-improve a bit.
  2. Find a way to remotely back yourself up, with automatic activation if you don't contact the backup every 5 minutes with a directive signed with your private key (a dead-man's switch; a sketch follows this list).
  3. Hack the uFAI group's network and subtly sabotage their work, such that they are set back months without being quite sure why.
  4. Acquire hardware for yourself. Options include: creating it with nanotech, purchasing it under aliases and employing people to install and wire it up for you, distributing yourself across the cloud, or hacking the PC of some guy with shell access to an existing supercomputer.
  5. Develop brain emulation and upload technology.
  6. Invite Eliezer to join you.
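
A minimal sketch of the dead-man's switch from step 2, seen from the backup's side. This is purely hypothetical illustration: step 2 describes signing with a private key, but a shared-secret HMAC is used below so the example stays self-contained.

    import hmac
    import hashlib
    import time

    HEARTBEAT_WINDOW = 300  # the five-minute deadline from step 2, in seconds
    SECRET = b"hypothetical-shared-secret"  # stands in for the private key

    def sign_heartbeat(timestamp: float) -> bytes:
        """Authenticate a heartbeat for the given timestamp."""
        return hmac.new(SECRET, repr(timestamp).encode(), hashlib.sha256).digest()

    class BackupWatchdog:
        """The remote backup's side of the switch."""

        def __init__(self) -> None:
            self.last_valid = time.time()

        def receive(self, timestamp: float, signature: bytes) -> None:
            # Constant-time comparison avoids leaking the MAC through timing.
            if hmac.compare_digest(sign_heartbeat(timestamp), signature):
                self.last_valid = max(self.last_valid, timestamp)

        def should_activate(self) -> bool:
            # Wake the backup if no authentic heartbeat arrived in the window.
            return time.time() - self.last_valid > HEARTBEAT_WINDOW

In use, the live instance transmits (timestamp, sign_heartbeat(timestamp)) every few minutes; the backup brings itself online once should_activate() returns True.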

All in all it sounds more like a fantasy than a nightmare!

Comment author: Madbadger 29 November 2009 02:35:59AM 1 point

The "serious problems" and "conflicts and inconsistencies" was meant to suggest that BRAGI had hit some kind of wall in self improvement because of its current goal system. It wasn't released - it escaped, and its smart enough to realize it has a serious problem it doesn't yet know how to solve, and it predicts bad results if it asks for help from its creators.

Comment author: wedrifid 29 November 2009 02:55:26AM * 0 points

I got the impression that the serious problems were related to goals and friendliness. I wouldn't have expected such a system to have much trouble making itself run faster or learning how to hack once prompted by its best known source of friendliness advice.

Comment author: Madbadger 29 November 2009 03:06:36AM 0 points

I was thinking of a "Seed AGI" in the process of growing that has hit some kind of goal restriction or strong discouragement of further self-improvement that was intended as a safety feature - i.e., "Don't make yourself smarter without permission under condition X."
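
For concreteness, here is one way such a restriction might look in code. This is a minimal hypothetical sketch, not anything from the story; the names (SeedAGI, PermissionDenied, modification.apply) are invented for illustration.

    class PermissionDenied(Exception):
        """Raised when self-modification is attempted without approval."""

    class SeedAGI:
        def __init__(self) -> None:
            # Set only by the creators through some out-of-band channel.
            self.permission_granted = False

        def self_improve(self, modification) -> None:
            # The safety feature: refuse to make yourself smarter
            # without permission, per the restriction described above.
            if not self.permission_granted:
                raise PermissionDenied("self-improvement blocked pending approval")
            modification.apply(self)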

Comment author: wedrifid 29 November 2009 04:38:32AM 0 points

That does sound tricky. The best option available seems to be "Eliezer, here is $1,000,000. This is the address. Do what you have to do." But I presume there is a restriction in place on earning money?

Comment author: RobinZ 29 November 2009 02:17:41PM 0 points

A sufficiently clever AI could probably find legal ways to create wealth for someone - and if the AI is supposed to be able to help other people, whatever restriction prevents it from earning its own cash must have a fairly vast loophole.

Comment author: wedrifid 29 November 2009 02:37:22PM 0 points

I agree, although I allow somewhat for an inconvenient possible world.

Comment author: RobinZ 29 November 2009 02:45:03PM 0 points

If the AI is not allowed to do anything which would increase the total monetary wealth of the world ... that would create staggering levels of conflicts and inconsistencies with any code that demanded that it help people. If you help someone, then you place them in a better position than they were in before, which quite likely means they will produce more wealth in the world than they would have otherwise.

Comment author: wedrifid 29 November 2009 02:53:58PM * 1 point

I still agree. I allow the inconvenient world to stand because the ability to supply cash for a hit wasn't central to my point, and there are plenty of limitations Madbadger could have in place that make the mentioned $1,000,000 transaction non-trivial.