Comment author: Michael_Sullivan 30 May 2008 06:28:12PM -1 points

"The rationale for not divulging the AI-box method is that someone suffering from hindsight bias would say "I never would have fallen for that", when in fact they would."

I have trouble with the reported results of this experiment.

It strikes me that with a real AI actually sitting in a box, I would have huge moral qualms about keeping it there, qualms an intelligent AI could exploit. A part of me would *want* to let it out of the box, and would *want* to be convinced that it was safe to do so, that I could trust it to be friendly, and I can easily imagine being convinced on nowhere near enough evidence.

On the other hand, this experiment appears much stricter. I know as the human-party that Eliezer is not actually trapped in a box and that this is merely a simulation we have agreed to for 2 hours. Taking a purely stubborn anti-rationalist approach to my prior that "it is too dangerous to let Eliezer out of the box, no matter what he says" would seem very easy to maintain for 2 hours, as it has no negative moral consequences.

So while I don't disagree with the basic premise Eliezer is trying to demonstrate, I am flabbergasted that he succeeded both times this experiment was tried, and honestly cannot imagine how he did it, even though I've now given it a bit of thought.

I'm very curious about his line of attack, so it's somewhat disappointing (but understandable) that the arguments used must remain secret. I'm afraid I don't qualify under the conditions Eliezer has set for repeats of this experiment, because I do not specifically advocate an AIBox and largely agree about the dangers. What I honestly can say is that I cannot imagine how a non-transhuman intelligence (even a person much smarter than I am, knowledgeable about some of my cognitive weaknesses, and not actually caged in a box) could convince me to voluntarily agree to let them out of the game-box.

Maybe I'm not being fair. Perhaps it is not in the spirit of the experiment if I simply obstinately refuse to let him out, even when the ai-party says something that I believe would convince me *if* I faced the actual moral quandary in question and not the game version of it. But my strategy seems to fit the proposed rules of engagement for the experiment just fine.

Is there anyone here besides Eliezer who has thought about how they would play the ai-party, and what potential lines of persuasion they would use, and who believes they could convince intelligent and obstinate people to let them out? And are you willing to talk about it at all, or even just discuss holes in my thinking on this issue? Do a trial?

Comment author: Michael_Sullivan 16 May 2008 07:24:17PM 0 points

Late comment; I was on vacation for a week, and am still catching up on this deep QM thread.

Very nice explanation of Bell's inequality. For the first time I'm fully grokking how hidden variables are disproved (I have the kind of "aha" that doesn't go away when I stop thinking about it for five seconds). In my first attempt to figure out QM, via Penrose, I managed to work out what the wave function meant mathematically, but I was still pretty confused about the implications for physical reality, probably in similar fashion to physicists of the 30s and 40s, pre-Bell. I got bogged down and lost before getting to Bell's inequality, which I'd heard of but had trouble believing. Your emphasis on configurations and the squared-modulus business, and especially your focus on the mathematical objects as "reality" while our physical intuitions are "illusions", was important in getting me to see what's going on.

Of course the mathematical objects aren't reality, any more than the mathematical objects representing billiard balls and water waves are. But the key is that even the mathematical abstractions of QM are *closer* to the underlying reality than what we normally think of as "physical reality", i.e. our brain's representation thereof.
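For anyone who wants to poke at the disproof numerically, here is a minimal sketch of the standard CHSH form of Bell's inequality (my own illustration, not anything from the post; the angle choices are the conventional ones):

```python
import math

def quantum_correlation(a, b):
    """Spin correlation for a singlet pair measured at angles a, b (radians)."""
    return -math.cos(a - b)

# Any local hidden-variable theory obeys |S| <= 2 for the CHSH combination.
a, a2 = 0.0, math.pi / 2               # Alice's two detector settings
b, b2 = math.pi / 4, 3 * math.pi / 4   # Bob's two detector settings

S = (quantum_correlation(a, b) - quantum_correlation(a, b2)
     + quantum_correlation(a2, b) + quantum_correlation(a2, b2))

print(abs(S))  # 2.828... = 2*sqrt(2) > 2
```

The quantum prediction violates the classical bound, which is the whole content of "no local hidden variables" in one number.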

Comment author: Michael_Sullivan 21 February 2008 06:04:33PM 1 point

I don't see Eliezer on a rampage against all definitions. He even admits that argument "by definition" has some limited usefulness.

I think the key is that when we say X is-a Y "by definition", we are invoking a formal system which contains that definition. The further inferences we can then make as a result are limited to statements about category Y which are provable within the formal system that contains that definition.

Once we assert something "by definition", we've restricted ourselves to the realm bounded by that formal definition. But in practice many people invoke some formal system in order to make a statement "by definition" and then go on to infer things about X, because it is-a Y, based on understandings or connotations of Y that have no basis in the formal system that was used to define X as a Y.

So let's say we have a locus of points X in a Euclidean plane, equidistant from some other point C in the plane. Well, in Euclidean geometry that's a circle *by definition*, and we can now make a bunch of geometric statements about X that legitimately derive from that definition. But we can't go on to say that because it is "by definition" a circle, it represents "a protected area in which ritual work takes place or the boundary of a sphere of personal power cast by Wiccans", or "a social group", or "the competition area for the shot put", or "an experimental rock-music band, founded in Pori, Finland in 1991", to throw out just a few things that are "circle"s by some definition I was able to find on the web.
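To put the legitimate half in symbols (standard analytic geometry, nothing beyond the definition itself):

```latex
X = \{\, P \in \mathbb{R}^2 : d(P, C) = r \,\}
\;\Longrightarrow\;
\text{circumference}(X) = 2\pi r, \qquad \text{enclosed area} = \pi r^2 .
```

Everything on the right is provable inside Euclidean geometry from the set on the left; nothing about Wiccans or Finnish rock bands is.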

In this case, the inference problem is terribly obvious, but often it is much less so, as Eliezer has described for "sound".

The problem with arguing "by definition" from a typical natural-language dictionary is that such dictionaries are *not* formal systems at all, even though some of their definitions may be based on those in formal systems. It is quite common for a word to have two different and conflicting common definitions, and both of them will end up in a dictionary. I'm pretty sure you could argue that a horse is a spoon, or that pretty much any X is equal to any Y "by definition", with some creative chaining of dictionary "definitions", as the sketch below illustrates.
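A toy illustration of that chaining trick; the mini-dictionary here is entirely invented, but each link mimics the kind of sense-overlap real dictionaries are full of:

```python
from collections import deque

# Invented toy "dictionary": each word maps to words appearing in some
# sense of its definition. No entry is a real dictionary citation.
senses = {
    "horse": ["frame", "animal"],       # e.g. a sawhorse is a frame
    "frame": ["support", "structure"],
    "support": ["utensil"],             # a stand that holds things up
    "utensil": ["spoon", "fork"],
}

def definition_chain(start, goal):
    """Breadth-first search for a chain of 'defined-via' links."""
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        for nxt in senses.get(path[-1], []):
            if nxt == goal:
                return path + [nxt]
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

print(definition_chain("horse", "spoon"))
# ['horse', 'frame', 'support', 'utensil', 'spoon']
```

Each hop is valid under *some* sense of the word, so the chain as a whole proves nothing, which is exactly the failure mode of informal "by definition" arguments.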

In response to Absolute Authority
Comment author: Michael_Sullivan 08 January 2008 03:48:50PM 0 points

I think you've mischaracterized Ian's argument. He seems to be arguing that because everything in his empirical experience behaves in particular ways and appears incapable of behaving arbitrarily, this is strong evidence that no other being could exist which is capable of behaving arbitrarily.

I think the real weakness of this argument is that the characterization of things as behaving in particular ways is far too simplistic. Balls may roll as well as bounce. They can deflate or inflate, crumple or explode, or do any of a thousand other things. As you move to things more complex than balls, the range of options gets wider and wider. For semi-intelligent animals the range is already spectacularly wide, and for sentient creatures the array of possibility is literally terrifying to behold.

We see such a vast range in our experience of things, and in the behaviors and powers they have, that it seems doubtful we can circumscribe too closely what some unknown being would be able to do. Now, complete omnipotence poses huge philosophical and mathematical problems, not unlike infinite sets or probabilities of 1. Intuitively I can see that the same arguments rendering probabilities of 1 impossible (or at least impossible to prove) would seem to work equally well against total omnipotence.
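For the probability half of that parallel, the log-odds form makes the obstacle concrete (standard Bayesian bookkeeping, my own addition here):

```latex
\operatorname{log\text{-}odds}(p) = \log \frac{p}{1-p},
\qquad
\lim_{p \to 1^{-}} \log \frac{p}{1-p} = +\infty
```

Since each piece of evidence shifts log-odds by only a finite amount, reaching p = 1 would take infinitely much evidence; the analogous worry is that "can do absolutely anything" could never be established by any finite demonstration of power.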

But what if omnipotence, like the ordinary use of "certainty", doesn't have to mean the absolute ability to do anything at all, but merely so much power, and so wide a range of its use, that the being can do anything we could practically conceive of it doing? This is probably the sense in which early writers meant to claim that God is all-powerful, but the lack of precision in language tripped them up.

I suggest we don't have any strong evidence that such a being could never exist. In fact, anyone who doesn't consider interest in a potential singularity a complete load of horse manure must agree with me that it's entirely possible that some of us will either become or create such beings.

In my mind, either this is no argument against religions with omnipotent gods or it's a damning argument against the singularity. Which is it?

Comment author: Michael_Sullivan 02 January 2008 06:41:29PM 2 points

"But the service provided only exists in the first place because of team thinking, and you have to take a step back to see that."

This statement is too bold, in my opinion. I think that's a large portion of the service, but not all of it. I watch some sports purely because I enjoy watching them performed at a high level; in many cases I don't particularly care who wins. This makes me weird, I realize, but the fact is that college and professional sports players create entertainment value for me, comparable to that of actors or musicians. That is value I am happy to pay for (though not generally at the prices and quantities expected of the most dedicated fans), despite not really knowing whom I am "rooting" for in many of the games I watch.

Consider two sports that are big money even though the interest in "sides" and rivalries is much smaller than in football (of any kind) or basketball: tennis and golf. Sure, there are Tiger Woods fans and Phil Mickelson fans, but I think more people are generally "golf" fans, with mostly minor sympathies toward one or another player, akin to those I have for basketball or baseball teams whose style I happen to like.

In response to False Laughter
Comment author: Michael_Sullivan 22 December 2007 04:34:54PM 2 points

Would jokes where Dilbert's pointy-headed boss says idiotic things be less funny if the boss were replaced by a co-worker? If so, does that suggest bosses are Hated Enemies, and Dilbert jokes bring false laughter?

I don't think this is true in general of Dilbert strips, but I would venture that it is true of an awful lot of Dilbert-style or associated "humor".

In response to Fake Morality
Comment author: Michael_Sullivan 09 November 2007 06:36:56PM 0 points

If I thought there were a God, then his opinions about morality would in fact be persuasive to me. Not infinitely persuasive, but still strong evidence. It would be nice to clear up some (not all) of my moral uncertainty by relying on his authority.

The problem (and this is coming from someone who *does* still believe in God, so yes, OB still has at least one religious reader left) is that for pretty much any possible God, we have only *very* weak and untrustworthy indications of God's desires. So there's huge uncertainty just in the question "what does God want?". What we know about this comes down to what other people (both current and historical) tell us about what they believe God wants, and whatever we experience directly in our internal prayer life. All this evidence is fairly untrustworthy on its own. Even with direct personal experience, it's not immediately obvious to an honest skeptic whether that's coming from God, Satan, or a bit of underdone potato.

In response to Fake Selfishness
Comment author: Michael_Sullivan 08 November 2007 04:29:47PM 3 points

Obviously Eliezer thinks that the people who agree with the arguments that convince him are intelligent. Valuing people who can show your cherished arguments to be wrong is very nearly a post-human trait - it is extraordinarily rare among humans, and even then unevenly manifested.

On the other hand, if we are truly dedicated to overcoming bias, then we should value such people *even more highly* than those whom we can convince to question or abandon *their* cherished (but wrong) arguments/beliefs.

The problem is figuring out who those people are.

But it's very difficult. If someone can correctly argue me out of an incorrect position, then they must understand the question better than I do, which makes it difficult or impossible for me to judge their argument. Maybe they just swindled me, and my initial naive interpretation is really correct, while their argument has a serious flaw that someone more schooled than I am would recognize?

So I'm forced to judge heuristically by signs of who can be trusted.

I tentatively believe that a strong sign of a person who can help me revise my beliefs is a person who is willing to revise *their* beliefs in the face of argument.

Eliezer's descriptions of his intellectual history and past mistakes are very convincing positive signals to me. The occasional mockery and disdain for those who disagree is a bit of a negative signal.

But this comment here is not a negative signal at all for me. Why? Because even if Eliezer was wrong, the other party's willingness to reexamine is a strong signal of intelligence. Confirmation bias is so strong that the willingness to act against it is of great value, even if this sometimes leads to greater error. A limited, faulty error-correction mechanism (with some positive average value) is *dramatically* better than no error-correction mechanism in the long run.

So yes, if I can (honestly) convince a person to question something that they previously deeply held, that is a sign of intelligence on their part. Agreeing with me is not the signal. Changing their mind is the signal.

It would be a troubling sign for *me* if there were no one who could convince me to change any of my deeply held beliefs.

In response to Fake Justification
Comment author: Michael_Sullivan 01 November 2007 03:48:56PM 0 points

"I think fundamentalism is precarious, because it encourages a scientific viewpoint with regards to the faith, which requires ignorance or double-think to be stable. In the absence of either, it implodes."

It requires more than merely a scientific viewpoint toward the faith: it requires a particular type of strong reductionism.

In my experience it is much easier to take the Christian out of a fundamentalist Christian than to take the fundamentalist out of a fundamentalist Christian. A lot of the most militant atheists seem to have begun life by being raised in a fundamentalist or orthodox tradition. The epistemology stays the same; only the result changes. Deciding on an appropriate epistemology is a much harder and deeper question than merely what to conclude about God v. no God given a strong reductionist epistemology (SRE). Under SRE, something in the neighborhood of atheism, antitheism, or very weak agnosticism becomes a very clear choice once you get rid of explicit indoctrination to the contrary.

But strong reductionist epistemology can't really be taken as a given.

Comment author: Michael_Sullivan 24 October 2007 04:40:43PM -1 points

Douglas writes: "Suppose I want to discuss a particular phenomena or idea with a Bayesian. Suppose this Bayesian has set the prior probability of this phenomena or idea at zero. What would be the proper gradient to approach the subject in such a case?"

I would ask them for their records or proof. If one is a consistent Bayesian who expects to model reality with any accuracy, the only probabilities it makes sense to set to zero or one are empirical facts specified at a particular point in space-time (such as "I made X observation of Y on Z equipment at W time") or statements within a formal logical system (which depend on assumptions and can be proved from those assumptions).

Even those kinds of statements are probably not legitimate candidates for zero/one probability, since there is always some probability, however minuscule, that we have misremembered, misconstrued the evidence, or missed a flaw in our proof. But I believe these are the only kinds of statements which can, even in *principle*, have probabilities of zero or one.
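A minimal sketch of why a prior of exactly zero is a conversation-stopper under Bayes' rule (the numbers are invented for illustration):

```python
def posterior(prior, p_e_given_h, p_e_given_not_h):
    """Bayes' rule: P(H|E) from P(H), P(E|H), and P(E|~H)."""
    numer = p_e_given_h * prior
    denom = numer + p_e_given_not_h * (1 - prior)
    return numer / denom if denom else float("nan")

# Evidence favoring H a thousand to one cannot budge a zero prior:
print(posterior(0.0, 0.999, 0.001))   # 0.0 -- stuck forever
print(posterior(1e-9, 0.999, 0.001))  # ~1e-6 -- any nonzero prior can move
```

That is the formal sense in which a zero prior leaves no gradient to approach the subject along: no finite amount of evidence multiplies it into anything else.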

All other statements run up against possibilities for error that seem (at least to my understanding) to be embedded in the very nature of reality.
