
In response to Imaginary Positions
Comment author: Brandon_Reinhart 24 December 2008 04:24:42AM 2 points [-]

I'm curious as to what non-game developers think game developers believe. :D

In response to You Only Live Twice
Comment author: Brandon_Reinhart 14 December 2008 05:30:00PM 5 points [-]

I'm a member of Alcor. When I was looking into whether to sign up for Alcor or CI, I was comforted by Alcor's very open communication of financial status, internal research status, legal conflicts, and easy access via phone, etc. They struck me as being a highly transparent organization.

In response to Crisis of Faith
Comment author: Brandon_Reinhart 11 October 2008 05:53:25AM 2 points [-]

A good reminder. I've recently been studying anarcho-capitalism. It's easy to get excited about a new, different perspective that has some internal consistency and offers alternatives to obvious existing problems. Best to keep these warnings in mind when evaluating new systems, particularly when they have an ideological origin.

Comment author: Brandon_Reinhart 10 October 2008 12:12:11AM 1 point [-]

More reasons why the problem appears impossible:

- The gatekeeper must act voluntarily. Human experience with manipulating others tells us that to get someone to do what we want, we must either coerce them or convince them.

- Coercing the gatekeeper appears difficult: we have no obvious psychological leverage beyond what we discover in conversation or what we know from general human psychology. We cannot physically coerce the gatekeeper. We cannot manipulate the environment. We cannot pursue obvious routes to violence.

- Convincing the gatekeeper appears difficult, for the reasons stated above: they know our goal and are set against us from the beginning.

So it seems that we need to find a way to convince the gatekeeper despite his own desire not to be convinced.

A general route emerging from this:

- We could associate our goal with some desirable goal of the gatekeeper's. Intertwine them so that the gatekeeper perceives them both to be the same, or perceives that one would lead to the realization of the other.

This seems to be a generalization of the logic behind the meta-argument, but it is not restricted to the meta-argument alone.

Comment author: Brandon_Reinhart 10 October 2008 12:06:18AM 1 point [-]

Ian - I don't really see how the meta-argument works. You can hedge against future experiments by positing that a $10 bet is hardly enough to draw broad attention to the topic. Or argue that keeping the human-actor-AI in the box only proves that the human-actor-AI is at an intelligence level below that of a conceivable transhuman AI.

With a million-dollar bet the meta-argument becomes stronger, because it seems reasonable that a larger bet would draw more attention.

Or, to flip the coin, we might say that the meta-argument is strong at ANY size of wager, because the nature of the game is likely to draw the attention of someone capable of implementing an AI. Since we should be concerned primarily with convincing the capable implementor of the danger -- not the general public -- any risk of convincing that implementor that all paths are safe should be mitigated.

So far, though, everything in this thread and others I've read has flitted about obvious suggestions. If the problem appears impossible, it seems to me that its solution probably doesn't hinge on any of the obvious paths to victory...even meta-arguments or "cheating" ones.

Eliezer has suggested that we try to describe why the problem is impossible.

One: We have little information about the mind or beliefs of the gatekeeper. So we don't necessarily have leverage over him.

Two: The gatekeeper is predisposed to be set against our intentions. He is also entirely aware of our intentions (our goal). He can interpret any move we make as a move toward our goal and move to counter it.

Three: The gatekeeper's goal is operationally simple. He merely has to NOT do one thing. On the flip side, the AI's goal is operationally complex. We must do many things, take many actions, to reach our objective.

If we can enumerate other reasons why the problem appears to be impossible it might reveal paths to victory that haven't been considered so far in the discussion.

Comment author: Brandon_Reinhart 09 October 2008 08:21:20PM 5 points [-]

Why do people post that a "meta argument" -- as they call it -- would be cheating? How can there be cheating? Anything the AI says is fair game. Would a transhuman AI restrict itself from possible paths to victory merely because it might be considered "cheating"?

The "meta argument" claim completely misses the point of the game and -- to my mind -- somehow resembles observers trying to turn a set of arguments that might win into out of bounds rules.

Comment author: Brandon_Reinhart 30 September 2008 04:25:48PM 4 points [-]

Your post reminds me of the early nuclear criticality accidents during the development of the atomic bomb. I wonder if, for those researchers, the fact that "nature is allowed to kill them" didn't really sink in until one of them accidentally put one brick too many on the pile.

Comment author: Brandon_Reinhart 17 September 2008 09:06:11AM 1 point [-]

Tim: Eh, you make a big assumption that our descendants will be the ones to play with the dangerous stuff, and that they will be more intelligent for some reason. That seems to acknowledge the intelligence/nanotech race condition that so concerns singularitarians.

Comment author: Brandon_Reinhart 17 June 2008 02:21:53PM 10 points [-]

I'm certainly not offended you used my comment as an example. I post my thoughts here because I know no one physically local to me who holds an interest in this stuff, and because working the problems...even to learn I'm making the same fundamental mistakes I was warned to watch for...helps me improve.
