Comment author: ikrase 30 January 2013 09:20:14AM 1 point [-]

I... don't believe that.

I think that making a FOOM-able AI is much easier than making an AI that can break out of a (considerably stronger) lead box in solar orbit.

Comment author: vi21maobk9vp 30 January 2013 08:12:50PM 0 points [-]

And you are completely right.

I meant that designing a working FOOM-able AI (or non-FOOMable AGI, for that matter) is vastly harder than finding a few hypothetical high-risk scenarios.

I.e. walking the walk is harder than talking the talk.

Comment author: Desrtopa 29 January 2013 06:46:56PM *  0 points [-]

All the suggestions so far that might allow an AI without conventional outputs to get out would be overcome by the lead box+ocean defenses. I don't think that containing a strong AI is likely to be that difficult a problem. The really difficult problem is containing a strong AI while getting anything useful out of it.

Comment author: vi21maobk9vp 30 January 2013 05:09:15AM -2 points [-]

If we are not inventive enough to find a menace not obviously shielded by lead+ocean, then more complex tasks, like actually designing a FOOM-able AI, are beyond us anyway…

Comment author: Desrtopa 29 January 2013 04:33:48AM -1 points [-]

I would think that the sorts of hypotheticals that would be most useful to entertain would be ones that explore the safety of the most secure systems anyone would have an actual incentive to implement.

Could you contain a Strong AI running on a computer with no output systems, sealed in a lead box at the bottom of the ocean? Presumably yes, but in that case, you might as well skip the step of actually making the AI.

Comment author: vi21maobk9vp 29 January 2013 06:38:27PM 3 points [-]

You say "presumably yes". The whole point of this discussion is to listen to everyone who will say "obviously no"; their arguments would automatically apply to all weaker boxing techniques.

Comment author: DataPacRat 20 January 2013 08:20:05AM 8 points [-]

If you don't mind the question: How confident are /you/ in the existence of cats?

The reason I ask is that I'm still trying to train myself to think of probability, evidence, confidence, and similar ideas logarithmically rather than linearly. 10 decibans of confidence means roughly 90% probability (odds of about 10:1), 20 decibans means 99%, 30 means 99.9%, and so on. However, 100% confidence in any proposition translates to an infinite number of decibans - requiring an infinite amount of evidence to achieve.

So far, the largest amount of confidence given in the posts here is about 100 decibans... and there is a very large difference between '100 decibans' and 'infinite decibans'. That difference has consequences in terms of edge cases of probability estimation with high-value stakes; which has practical implications in terms of game theory, and thus politics, and thus which actions to choose in certain situations. While the whole exercise may be a waste of time for you, I feel that it isn't for me.
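The deciban figures above can be checked directly. A deciban is ten times the base-10 logarithm of the odds ratio, so the conversion is a few lines (note the stated 90%/99%/99.9% values are rounded; 10 decibans is odds of exactly 10:1, i.e. about 90.9%):

```python
import math

def decibans_to_probability(db: float) -> float:
    """Convert evidence in decibans to a probability.

    Decibans measure log-odds: db = 10 * log10(p / (1 - p)).
    """
    odds = 10 ** (db / 10)  # odds ratio p : (1 - p)
    return odds / (1 + odds)

def probability_to_decibans(p: float) -> float:
    """Inverse conversion: probability back to decibans of evidence."""
    return 10 * math.log10(p / (1 - p))

for db in (10, 20, 30, 100):
    print(f"{db:>3} db -> {decibans_to_probability(db):.10f}")
```

As the loop shows, each extra 10 decibans multiplies the odds by 10, which is why finite evidence can never reach probability 1: that would require the odds, and hence the deciban count, to be infinite.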

Comment author: vi21maobk9vp 20 January 2013 12:42:13PM 2 points [-]

How much evidence do you have that you can count accurately (or make a correct request to a computer and interpret the results correctly)? How much evidence that probability theory is a good description of events that seem random?

Once you get as much evidence for atomic theory as you have for the weaker of the two claims above, describing your degree of confidence requires more effort than just naming a number.

Comment author: Qiaochu_Yuan 09 January 2013 12:40:43AM 7 points [-]

Suggestion: if your goal is AI, maybe abandon logic and set theory in favor of type theory. It seems better suited to computers anyway thanks to computational trinitarianism. The problem of pinning down models uniquely seems to be solved by something called the univalence axiom, which I currently don't understand at all.

Comment author: vi21maobk9vp 09 January 2013 08:54:08AM 1 point [-]

I guess that understanding the univalence axiom would be helped by understanding the implicit equality axioms.

The univalence axiom states that two isomorphic types are equal; this means that if type A is isomorphic to type B, and type C contains A, then C has to contain B (and a few similar requirements).

Requiring that two types be equal whenever they are isomorphic means prohibiting anything we could write that distinguishes them (i.e. anything that fails to handle them equivalently).
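The point about "anything we could write to distinguish them" can be illustrated in an ordinary programming language, where isomorphic types *can* be told apart. A minimal sketch (the `Bit` type and `distinguish` function are made up for illustration): `bool` and a two-element enum are isomorphic, yet a runtime type check separates them, which is exactly the kind of construction a univalent theory rules out.

```python
from enum import Enum

class Bit(Enum):
    ZERO = 0
    ONE = 1

# An isomorphism is a pair of maps that compose to the identity.
to_bit = {False: Bit.ZERO, True: Bit.ONE}
to_bool = {Bit.ZERO: False, Bit.ONE: True}

# Round-tripping is the identity, so bool and Bit are isomorphic:
assert all(to_bool[to_bit[b]] is b for b in (False, True))

# Yet ordinary code can still tell the two types apart:
def distinguish(x) -> str:
    return "bool" if isinstance(x, bool) else "Bit"

print(distinguish(True), distinguish(Bit.ONE))
```

In a theory with univalence, a function like `distinguish` is not definable: every construction must respect the equality between the two types that the isomorphism induces.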

Comment author: Eliezer_Yudkowsky 06 January 2013 02:07:15PM 1 point [-]

I'm asking about the process that causes other mathematical beliefs to generalize to your beliefs about physical time in such fashion that physical time always seems to have the smallest model allowed by any of your mathematical beliefs. When I learn that the ordinal epsilon-0 corresponds to an ordering on unordered finitely branching trees, I don't conclude that a basket of apples is made out of little tiny unordered trees. What do the physical apples have to do with ordinals, after all?

Why, as you come to believe that Zermelo-Fraenkel set theory has a model, do you come to believe that physical time will never show you a moment when a machine checking for ZF-inconsistency proofs halts? Why shouldn't physical time just be a random model of PA instead, allowing it to have a time where ZF is proven inconsistent? Why do you transfer beliefs from one domain to the other - or what law makes them the same domain?

This isn't meant to be an unanswerable question, I suspect it's answerable, I'm asking if you have any particular ideas about the mechanics.

Comment author: vi21maobk9vp 06 January 2013 02:56:23PM 1 point [-]

Could you please clarify your question here?

Why, as you come to believe that Zermelo-Fraenkel set theory has a model, do you come to believe that physical time will never show you a moment when a machine checking for ZF-inconsistency proofs halts?

I try to interpret it (granted, I interpret it in my worldview, which is different) and I cannot see the question here.

I am not 100% sure whether even PA has a model, but I find it likely that even ZFC has one. But if I say that ZFC has a model, it means that this is a model whose formula parts are numbered by the natural numbers derived from my notion of subsequent moments of time.

Comment author: [deleted] 05 January 2013 01:18:08AM 0 points [-]

"If there's a collection of third-order axioms that characterizes a model, there's a collection of second-order axioms that characterizes the same model. Once you make the jump to second-order logic, you're done - so far as anyone knows (so far as I know) there's nothing more powerful than second-order logic in terms of which models it can characterize."

You clearly can state the Continuum Hypothesis in higher-order logic, while a second-order formulation seems elusive. Are you sure about this?

In response to comment by [deleted] on Second-Order Logic: The Controversy
Comment author: vi21maobk9vp 06 January 2013 01:59:46PM 2 points [-]

The link is good, but I guess a direct explanation of this simple thing could be useful.

It is not hard to build an explicit map between R and R² (more or less by interleaving the binary expansions of the numbers).

So the claim of Continuum Hypothesis is:

For every property P of real numbers there exists a property Q of pairs of real numbers such that:

1) ∀x (P(x) -> ∃! y Q(x,y))

2) ∀x (¬P(x) -> ∀y¬Q(x,y))

(i.e. Q describes a mapping from the support of P to R)

3) ∀x1,x2,y: ((Q(x1,y)^Q(x2,y)) -> x1=x2)

(i.e. the map is an injection)

4) (∀y ∃x Q(x,y)) ∨ (∀x∀y (Q(x,y)-> y∈N))

(i.e. the map is either a surjection onto R or an injection into a subset of N)

These conditions say that every subset of R is either the size of R or no bigger than N.
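The digit-interleaving idea behind the R and R² map can be sketched on finite digit strings (the function names are mine; a true bijection on the reals needs extra care with dual expansions like 0.0999… = 0.1, which is why the comment says "more or less"):

```python
def interleave(x_digits: str, y_digits: str) -> str:
    """Merge two equal-length digit strings into one.

    Sketch of the pairing behind the map R^2 -> R: the fractional
    digits of x land at even positions, those of y at odd positions.
    """
    assert len(x_digits) == len(y_digits)
    return "".join(a + b for a, b in zip(x_digits, y_digits))

def deinterleave(z_digits: str) -> tuple:
    """Inverse map: split an even-length digit string back into (x, y)."""
    return z_digits[0::2], z_digits[1::2]

z = interleave("1415", "7182")  # leading fractional digits of pi and e
print(z)
print(deinterleave(z))
```

Because the two maps invert each other, a pair of reals carries no more information than a single real, which is what lets the Continuum Hypothesis be phrased entirely in terms of properties P of reals and properties Q of pairs, as above.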

Comment author: [deleted] 05 January 2013 07:22:09PM 4 points [-]

What I got out of this post is that second-order logic is a lie.

"The mathematicians who deny second-order logic would see that reasoning as invoking an implicit background universe of set theory. Everything you're saying makes sense relative to some particular model of set theory, which would contain possible models of Peano arithmetic as sets, and could look over those sets and pick out the smallest in that model. Similarly, that set theory could look over a proposed model for a many-sorted logic, and say whether there were any subsets within the larger universe which were missing from the many-sorted model. Basically, your brain is insisting that it lives inside some particular model of set theory. And then, from that standpoint, you could look over some other set theory and see how it was missing subsets that your theory had."

Second-order logic is how first-order set theory feels from the inside.

And now I understand (and agree with) Johnicholas' comment from the last post. ZFC is a hack, but it is a very good hack which fits with our intuitions. However, after reading this post, I think that to use second-order logic is to deceive yourself into thinking that ZFC is the universally, unequivocally best definition of a set. I'm not anywhere close to 100% confident (though I'm still well over 50%) that ZFC is even consistent!

Good question. One obvious answer is that the AI would be able to induct what you would call an infinite axiom schema, as a single axiom - a simple, finite hypothesis.

That can easily be worked around. One answer (the one I'm intimately familiar with, not necessarily the best one) is to use first-order metalogic, which proves theorem schema rather than theorems (note that most theorem schema end up isomorphic to theorems, as well). This is the approach Metamath takes, and they have created a metalogically complete axiomatization of first-order metalogic with equality, which ends up allowing ZFC to be finitely axiomatized.

In response to comment by [deleted] on Second-Order Logic: The Controversy
Comment author: vi21maobk9vp 06 January 2013 10:19:00AM 1 point [-]

ZFC is the universally, unequivocally best definition of a set

Worse: you are being tricked into believing that ZFC is a definition of a set at all, while it is just a set of restrictions on what we will tolerate.

In some sense, if you believe that there is only one second-order model of the natural numbers, you have to decide which properties of natural numbers you can range over; as Cohen has taught us, this involves making a lot of set-theoretic decisions, with the continuum hypothesis being only one of them.

Comment author: Eliezer_Yudkowsky 05 January 2013 12:16:57PM 3 points [-]

So after reading that, I don't see how it could be true even in the sense described in the article without violating Well Foundation somehow, but what it literally says at the link is that every model of ZFC has an element which encodes a model of ZFC, not is a model of ZFC, which I suppose must make a difference somehow - in particular it must mean that we don't get A has an element B has an element C has an element D ... although I don't see yet why you couldn't construct that set using the model's model's model and so on. I am confused about this although the poster of the link certainly seems like a legitimate authority.

But yes, it's possible that the original paragraph is just false, and every model of ZFC contains a quoted model of ZFC. Maybe the pair-encoding of quoted models enables there to be an infinite descending sequence of submodels without there being an infinite descending sequence of ranks, the way that the even numbers can encode the numbers which contain the even numbers and so on indefinitely, and the reason why ZFC doesn't prove ZFC has a model is that some models have nonstandard axioms which the set modeling standard-ZFC doesn't entail. Anyone else want to weigh in on this before I edit? (PS upvote parent and great-grandparent.)

Comment author: vi21maobk9vp 05 January 2013 06:43:47PM 0 points [-]

Well Foundation in the V_alpha case seems quite simple: you build an externally-countable chain of subsets which simply cannot be represented as a set inside the first model of ZFC. So the external WF is not broken because the element-relation inside the models is different, and the inner WF is fine because the chain of inner models of external ZFC is not an inner set.

In the standard case your even-numbers explanation nicely shows what goes on — quoting is involved.

I need to think a bit to say what would halt our attempts to build a chain of transitive countable models...

Comment author: Eliezer_Yudkowsky 05 January 2013 04:49:53PM 3 points [-]

Linked impressive authority says the model has a ZFC-model-encoding element, plus enough nonstandard quoted ZFC axioms in-model that in-model ZFC doesn't think it's a ZFC-model-encoding element. I.e., the formula for "semantically entails ZFC" is false within the model, but from outside, using our own standard list of axioms, we think the element is a model of ZFC.

Comment author: vi21maobk9vp 05 January 2013 05:26:42PM 0 points [-]

Ah, sorry, I didn't notice that the question is about a model of ZFC inside a "universe" modelling ZFC+Con^∞(ZFC).
