Comment author: Silas 29 October 2008 02:42:42AM 2 points

With all this talk about poisoned meat and CDSes, I was inspired to draw this comic.

Comment author: Silas 28 October 2008 10:04:44PM 2 points

Adam_Ierymenko: Evolution has evolved many strategies for evolution-- this is called the evolution of evolvability in the literature. These represent strategies for more efficiently finding local maxima in the fitness landscape under which these evolutionary processes operate. Examples include transposons, sexual reproduction,

Yes, Eliezer_Yudkowsky has discussed this before and calls that optimization at the meta-level. Here is a representative post where he makes those distinctions:

Looking over the history of optimization on Earth up until now, the first step is to conceptually separate the meta level from the object level - separate the structure of optimization from that which is optimized.

If you consider biology in the absence of hominids, then on the object level we have things like dinosaurs and butterflies and cats. On the meta level we have things like natural selection of asexual populations, and sexual recombination.

Comment author: Silas 14 October 2008 11:01:03PM 3 points

in a society of Artificial Intelligences worthy of personhood and lacking any inbuilt tendency to be corrupted by power, it would be right for the AI to murder ... I refuse to extend this reply to myself, because the epistemological state you ask me to imagine, can only exist among other kinds of people than human beings.

Interesting reply. But the AIs are programmed by corrupted humans. Do you really expect to be able to check the full source code? That you can outsmart the people who win obfuscated code contests?

How is the epistemological state of human-verified, human-built, non-corrupt AIs any more possible?

Comment author: Silas 09 October 2008 09:50:57PM 0 points

@Phil_Goetz: Have the successes relied on a meta-approach, such as saying, "If you let me out of the box in this experiment, it will make people take the dangers of AI more seriously and possibly save all of humanity; whereas if you don't, you may doom us all"?

That was basically what I suggested in the previous topic, but at least one participant denied that Eliezer_Yudkowsky did that, calling it a cheap trick, while some non-participants said it would meet the spirit and letter of the rules.

Comment author: Silas 09 October 2008 05:06:23PM 1 point

One more thing: my concerns about "secret rules" apply just the same to Russell_Wallace's defense that there were no "cheap tricks". What does Russell_Wallace consider a non-"cheap trick" in convincing someone to voluntarily, knowingly give up money and admit they got fooled? Again, secret rules all around.

Comment author: Silas 09 October 2008 05:00:30PM 1 point

@Russell_Wallace & Ron_Garret: Then I must confess the protocol is ill-defined to the point that it's just a matter of guessing what secret rules Eliezer_Yudkowsky has in mind (and which the gatekeeper casually assumed), which is exactly why seeing the transcript is so desirable. (Ironically, unearthing the "secret rules" people adhere to in outputting judgments is itself the problem of Friendliness!)

From my reading, the rules literally make the problem equivalent to whether you can convince people to give money to you: they must *know* that letting the AI out of the box means ceding cash, and that keeping that cash is simply a matter of refusing to let it out.

So that leaves only the possibility that the gatekeeper feels obligated to take on the frame of some other mind. That reduces the AI's problem to whether a) you can convince the gatekeeper that *that* frame of mind would let the AI out, and b) that, for that amount of money, the gatekeeper is ethically obligated to end the experiment the way that frame of mind would.

...which isn't what I see the protocol as specifying: it seems to me to specify the participant's own mind, not some mind he imagines. That is why I conclude the test is too ill-defined.

Comment author: Silas 09 October 2008 04:10:10PM 7 points

When first reading the AI-Box experiment a year ago, I reasoned that if you follow the rules and spirit of the experiment, the gatekeeper must be convinced to knowingly give you $X and knowingly show gullibility. From that perspective, it's impossible. And even if you could do it, that would mean you've solved a "human-psychology-complete" problem and then [insert point about SIAI funding and possibly about why you don't have 12 supermodel girlfriends].

Now, I think I see the answer. Basically, Eliezer_Yudkowsky doesn't really have to convince the gatekeeper to stupidly give away $X. All he has to do is convince them that "It would be a good thing if people saw that the result of this AI-Box experiment was that the human got tricked, because that would stimulate interest in {Friendliness, AGI, the Singularity}, and that interest would be a good thing."

That, it seems, is the one thing that would make people give up $X in such a circumstance. AFAICT, it adheres to the spirit of the set-up since the gatekeeper's decision would be completely voluntary.

I can send my salary requirements.

Comment author: Silas 01 October 2008 03:47:37PM 0 points

1) Eliezer_Yudkowsky: You should be comparing the percentage (1) change in the S&P 500 (2) to the change (3) in probability of *any* bailout happening (4) over the days on which the changes occurred (5), and you should have used more than one day (6). There, that's six errors I count in your calculation, of varying severity.

2) Tim_Tyler: Yeah, I'm surprised that hasn't been posted on Slashdot yet. I want to be the first to propose the theory that United Airlines was behind it, since Google was the cause of a recent fake plunge in United's stock price: they highly ranked an old story about United's bankruptcy, fooling some into thinking it was happening again and that they needed to sell. Okay, maybe not the "cause", but they started the chain reaction, and United blames them.

3) Peter_McCluskey: Whoa whoa whoa, are you now admitting that measuring the correlation between InTrade contracts and financial variables over a *succession* of days rather than a single day is important?
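To make points (1) and (3) concrete, here is a minimal sketch of the calculation I mean. The bailout probabilities and S&P closing levels below are made up purely for illustration; the point is the shape of the computation, correlating day-over-day changes across a succession of days rather than eyeballing a single day:

```python
# Hypothetical daily data (illustration only, not real market figures):
# probability (0-1) of *any* bailout passing, and S&P 500 closing level.
bailout_prob = [0.40, 0.65, 0.55, 0.80, 0.90]
sp500_close = [1250.0, 1210.0, 1235.0, 1190.0, 1165.0]

# Day-over-day changes: delta in probability, percentage change in the index.
dp = [b - a for a, b in zip(bailout_prob, bailout_prob[1:])]
dr = [(b - a) / a for a, b in zip(sp500_close, sp500_close[1:])]

def pearson(xs, ys):
    """Plain Pearson correlation over the paired daily changes."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

print(round(pearson(dp, dr), 3))
```

With more days in the sample, a single coincidental co-movement stops dominating the estimate, which is the whole objection to the one-day comparison.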

In response to Ban the Bear
Comment author: Silas 20 September 2008 02:04:59AM 0 points

V, Ori, and everyone else: In my recent post, I explain how you can synthesize short and long positions. You have to ban a lot more than short-selling to ban short-selling, and a lot more than margin-buying to ban leveraged longs.
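For readers who don't follow the link: the standard construction (my guess at what the post covers) is put-call parity. Buying a put and writing a call at the same strike replicates the payoff of a short position in the underlying; the reverse combination gives a synthetic long. A toy payoff check:

```python
def synthetic_short_payoff(price, strike):
    """Long put + short call at the same strike: pays (strike - price),
    the terminal payoff of shorting the stock at the strike."""
    long_put = max(strike - price, 0.0)
    short_call = -max(price - strike, 0.0)
    return long_put + short_call

# The combination matches a direct short at every terminal price.
for p in (50.0, 100.0, 150.0):
    assert synthetic_short_payoff(p, strike=100.0) == 100.0 - p
```

Which is why a ban on short-selling alone leaves the short payoff freely available through the options market.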

In response to Optimization
Comment author: Silas 14 September 2008 05:46:40AM 0 points

@Lara_Foster: You see, it seems quite likely to me that humans evaluate utility in such a circular way under many circumstances, and therefore aren't performing any optimizations.

Eliezer touches on that issue in "Optimization and the Singularity":

Natural selection prefers more efficient replicators. Human intelligences have more complex preferences. Neither evolution nor humans have consistent utility functions, so viewing them as "optimization processes" is understood to be an approximation.

By the way: "Ask middle school girls to rank boyfriend preference and you find Billy beats Joey who beats Micky who beats Billy..."

Would you mind peeking into your mind and explaining why that arises? :-) Is it just a special case of the phenomenon you described in the rest of your post?
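Your boyfriend example is exactly the textbook intransitivity that rules out a utility function: no assignment of scores can satisfy a preference cycle. A brute-force sketch (assuming strict preference means a strictly higher score) that checks every possible ranking:

```python
from itertools import permutations

boys = ["Billy", "Joey", "Micky"]
# The reported cycle: Billy > Joey, Joey > Micky, Micky > Billy.
prefs = [("Billy", "Joey"), ("Joey", "Micky"), ("Micky", "Billy")]

def has_consistent_ranking(prefs, items):
    """True if some strict ranking (i.e. a utility ordering) satisfies
    every pairwise preference."""
    for order in permutations(items):
        score = {name: -i for i, name in enumerate(order)}  # earlier = higher utility
        if all(score[a] > score[b] for a, b in prefs):
            return True
    return False

print(has_consistent_ranking(prefs, boys))  # → False: no utility function fits
```

Drop any one leg of the cycle and a consistent ranking reappears, which is what makes "optimization process" only an approximation for agents like these.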
