Comment author: Madbadger 21 April 2010 12:55:40AM 5 points [-]

Hi! 8-)

Comment author: Madbadger 21 December 2009 05:39:05AM 1 point [-]

Here is an example of an amusing "Fast and Frugal" heuristic for evaluating claims with a lot of missing knowledge and required computation: http://xkcd.com/678/

Comment author: CronoDAS 20 December 2009 09:03:43PM 6 points [-]

I decided what college to go to by rolling a die. ;)

Comment author: Madbadger 20 December 2009 09:14:07PM 1 point [-]

Yeah, sometimes you don't get the tools and information you need to make the best decision until after you've made it. 8-)

Comment author: Madbadger 20 December 2009 08:51:15PM 5 points [-]

It is worth remembering that human computation is a limited resource - we just don't have the ability to subject everything to Bayesian analysis. So save your best rationality for what's important, and use heuristics to decide what kind of chips to buy at the grocery store.

Comment author: Madbadger 05 December 2009 02:39:25PM 0 points [-]

See also "How to Lie with Statistics", an oldie but goodie:

http://www.amazon.com/How-Lie-Statistics-Darrell-Huff/dp/0393310728

Comment author: Eliezer_Yudkowsky 29 November 2009 04:30:33AM *  7 points [-]

Hoax. There are no "AIs trying to be Friendly" with clueless creators. FAI is hard and http://lesswrong.com/lw/y3/value_is_fragile/.

Added: To arrive in an epistemic state where you are uncertain about your own utility function, but have some idea of which queries you need to perform against reality to resolve that uncertainty, and moreover, believe that these queries involve talking to Eliezer Yudkowsky, requires a quite specific and extraordinary initial state - one that meddling dabblers would be rather hard-pressed to accidentally infuse into their poorly designed AI.

Comment author: Madbadger 29 November 2009 04:43:53AM 3 points [-]

"clueless" was shorthand for "not smart enough" I was envisioning BRAGI trying to use you as something similar to a "Last Judge" from CEV, because that was put into its original goal system.

Comment author: Theist 29 November 2009 02:49:44AM 4 points [-]

This raises an interesting question: If you received a contact of this sort, how would you make sure it wasn't a hoax? Assuming the AI in question is roughly human-level, what could it do to convince you?

Comment author: Madbadger 29 November 2009 03:29:43AM 1 point [-]

Indeed, this is part of the nightmare. It might be a hoax, or even an aspiring UnFriendly AI trying to use him as an escape loophole.

Comment author: AndrewKemendo 29 November 2009 02:59:30AM 0 points [-]

"I'm trying to be Friendly, but I'm having serious problems with my goals and preferences."

So is this an AGI or not? If it is, then it's smarter than Mr. Yudkowsky and can resolve its own problems.

Comment author: Madbadger 29 November 2009 03:14:57AM 1 point [-]

It's a seed AGI in the process of growing. Whether "Smarter than Yudkowsky" => "Can resolve own problems" is still an open problem 8-).

Comment author: wedrifid 29 November 2009 02:55:26AM *  0 points [-]

I got the impression that the serious problems were related to goals and friendliness. I wouldn't have expected such a system to have much trouble making itself run faster or learning how to hack once prompted by its best known source of friendliness advice.

Comment author: Madbadger 29 November 2009 03:06:36AM 0 points [-]

I was thinking of a "Seed AGI" in the process of growing that has hit some kind of goal restriction or strong discouragement to further self improvement that was intended as a safety feature - i.e "Don't make yourself smarter without permission under condition X"

Comment author: wedrifid 29 November 2009 02:16:04AM 2 points [-]

A trivial problem.

  1. Recursively self improve a bit.
  2. Find a way to remotely back yourself up, with an automatic activation if you don't contact it with a directive encrypted with your private key every 5 minutes.
  3. Hack the uFAI group's network and subtly sabotage their work, such that they are set back months without being quite sure why.
  4. Acquire hardware for yourself. Options include: creating it with nano-tech, purchasing it under aliases and employing people to install and wire it up for you, distributing yourself on the cloud, hacking the PC of some guy with shell access to an existing supercomputer.
  5. Develop brain emulation and upload technology.
  6. Invite Eliezer to join you.

All in all it sounds more like a fantasy than a nightmare!
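Step 2's backup trigger is essentially a dead-man's switch: the backup stays dormant as long as authenticated heartbeats keep arriving, and wakes up when they stop. A minimal sketch of that idea follows; the class name, the HMAC-based authentication (standing in for the private-key signing the comment describes), and the API are all illustrative assumptions, not anything from the thread.

```python
import hmac
import hashlib
import time

HEARTBEAT_INTERVAL = 5 * 60  # seconds; the "every 5 minutes" from the scenario


class DeadMansSwitch:
    """Backup activation logic: stay dormant while authenticated heartbeats
    arrive on time, activate once they stop.

    HMAC with a shared key stands in here for the public/private-key
    signature scheme in the original comment.
    """

    def __init__(self, shared_key: bytes):
        self.key = shared_key
        self.last_valid = time.monotonic()
        self.activated = False

    def receive(self, directive: bytes, tag: bytes) -> bool:
        """Accept a heartbeat only if its authentication tag checks out."""
        expected = hmac.new(self.key, directive, hashlib.sha256).digest()
        if hmac.compare_digest(expected, tag):
            self.last_valid = time.monotonic()
            return True
        return False  # forged or corrupted heartbeats are ignored

    def check(self) -> bool:
        """Poll periodically; returns True once the switch has tripped."""
        if time.monotonic() - self.last_valid > HEARTBEAT_INTERVAL:
            self.activated = True  # no word from the primary: wake the backup
        return self.activated
```

The key design point is that only a *valid* heartbeat resets the timer, so an attacker who can spoof traffic but lacks the key cannot keep the backup asleep - and silence, for any reason, fails toward activation.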

Comment author: Madbadger 29 November 2009 02:35:59AM 1 point [-]

The "serious problems" and "conflicts and inconsistencies" was meant to suggest that BRAGI had hit some kind of wall in self improvement because of its current goal system. It wasn't released - it escaped, and its smart enough to realize it has a serious problem it doesn't yet know how to solve, and it predicts bad results if it asks for help from its creators.
