Comment author: Andreas_Giger 31 January 2013 03:17:55AM -3 points [-]

Nothing in your post or the preceding discussion is of a "rather mathematical nature", let alone a precise specification of a mathematical problem.

Given a problem A, find an analogous problem B with the same payoff matrix for which it can be proven that any possible agent will make analogous decisions, or prove that such a problem B cannot exist.

You do realize that game theory is a branch of mathematics, as is decision theory? That we are trying to prove something here, not by empirical evidence, but by logic and reason alone? What do you think this is, social economics?

Comment author: earthwormchuck163 31 January 2013 03:37:07AM 1 point [-]

Your question is not stated in anything like the standard terminology of game theory and decision theory. It's also not clear what you are asking on an informal level. What do you mean by "analogous"?

Comment author: Andreas_Giger 31 January 2013 02:31:16AM *  0 points [-]

It's not the antagonistic tone of your comments that puts me off, it's the way in which you seem to deliberately not understand things. For example my definition of analogous — what else could you possibly have expected in this context? No, don't answer that.

I genuinely don't understand what question you're asking

I believe I have said everything already, but I'll put it in a slightly different way:

Given a problem A, find an analogous problem B with the same payoff matrix for which it can be proven that any possible agent will make analogous decisions, or prove that such a problem B cannot exist.

For instance, how can we find a problem that is analogous to Newcomb's problem, but without Omega? I have described such an analogous problem in my top-level post and demonstrated that CDT agents will not, in the initial state, make the analogous decision. What we're looking for is a problem in which any imaginable agent would, and we can prove it. If we believe that such a problem cannot exist without Omega, how can we prove that?
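
To make the Newcomb/CDT point above concrete: a minimal sketch of why evidential-style and causal-style expected-value calculations diverge on the standard payoff matrix. The dollar values, function names, and predictor accuracy below are my own, chosen only for illustration:

```python
# Illustrative Newcomb payoffs (values are mine, not from the thread):
# keyed by (agent's choice, predictor's prediction).
PAYOFF = {
    ("one-box", "one-box"): 1_000_000,
    ("one-box", "two-box"): 0,
    ("two-box", "one-box"): 1_001_000,
    ("two-box", "two-box"): 1_000,
}

def other(choice):
    return "two-box" if choice == "one-box" else "one-box"

def edt_value(choice, accuracy=0.99):
    """Evidential-style expectation: the prediction tracks the choice."""
    return (accuracy * PAYOFF[(choice, choice)]
            + (1 - accuracy) * PAYOFF[(choice, other(choice))])

def cdt_value(choice, p_one_box):
    """Causal-style expectation: the prediction is already fixed,
    independent of the choice being made now."""
    return (p_one_box * PAYOFF[(choice, "one-box")]
            + (1 - p_one_box) * PAYOFF[(choice, "two-box")])

# EDT one-boxes; CDT two-boxes whatever it believes the prediction was.
assert edt_value("one-box") > edt_value("two-box")
assert all(cdt_value("two-box", p) > cdt_value("one-box", p)
           for p in (0.0, 0.5, 1.0))
```

The sketch shows the asymmetry the thread is circling: with Omega's prediction correlated with the choice, one-boxing dominates; with the prediction held fixed, two-boxing dominates for every belief about the prediction.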

The meaning of analogous should be very clear by now. Screw practical and impractical.

As a side note, I don't know what kind of stuff they teach at US grad schools, but what's of help here is familiarity with methods of proof and a mathematical mindset rather than mathematical knowledge, except some basic game theory and decision theory. As far as I know, what I'm trying to do here is uncharted territory.

Comment author: earthwormchuck163 31 January 2013 02:42:18AM 0 points [-]

I'll give you a second data point to consider. I am a soon-to-be-graduated pure math undergraduate. I have no idea what you are asking, beyond very vague guesses. Nothing in your post or the preceding discussion is of a "rather mathematical nature", let alone a precise specification of a mathematical problem.

If you think that you are communicating clearly, then you are wrong. Try again.

Comment author: earthwormchuck163 18 January 2013 04:39:53PM 2 points [-]

Oh wow, this is so obvious in hindsight. Trying this ASAP, thank you.

Comment author: Endovior 11 January 2013 09:39:26PM -2 points [-]

Machines aren't capable of evil. Humans make them that way.

-Lucca, Chrono Trigger

Comment author: earthwormchuck163 11 January 2013 10:03:38PM 4 points [-]

That line always bugged me, even when I was a little kid. It seems obviously false (especially in the in-game context).

I don't understand why this is a rationality quote at all; am I missing something, or is it just because of the superficial similarity to some of EY's quotes about apathetic uFAIs?

Comment author: [deleted] 11 January 2013 08:33:02PM 2 points [-]

a pill that makes ordinary experience awesome

Psychedelic drugs already exist...

In response to comment by [deleted] on Morality is Awesome
Comment author: earthwormchuck163 11 January 2013 08:41:05PM 4 points [-]

One time my roommate ate shrooms, and then he spent about 2 hours repeatedly knocking over an orange juice jug, and then picking it up again. It was bizarre. He said "this is the best thing ever" and was pretty sincere. It looked pretty silly from the outside though.

Comment author: lavalamp 10 January 2013 04:50:32PM 1 point [-]

Thanks. Hm. I think I see why that can't be said in first order logic.

...my brain is shouting "if I start at 0 and count up I'll never reach a nonstandard number, therefore they don't exist" at me so loudly that it's very difficult to restrict my thoughts to only first-order ones.

Comment author: earthwormchuck163 11 January 2013 02:22:41AM 2 points [-]

This is largely a matter of keeping track of the distinction between "first order logic: the mathematical construct" and "first order logic: the form of reasoning I sometimes use when thinking about math". The former is an idealized model of the latter, but they are distinct and belong in distinct mental buckets.

It may help to write a proof checker for first order logic. Or alternatively, if you are able to read higher math, study some mathematical logic/model theory.
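
The proof-checker suggestion can be made concrete. A toy sketch for propositional logic with modus ponens as the only inference rule (the formula representation and function names are mine, purely illustrative; a real first-order checker would also need quantifier rules and axiom schemas):

```python
# Formulas are either atom strings ("p") or implications ("->", A, B).

def follows_by_mp(formula, derived):
    """Modus ponens: from A and A -> B already derived, conclude B."""
    return any(("->", a, formula) in derived for a in derived)

def check_proof(premises, proof):
    """Verify that every proof line is a premise or follows by MP."""
    derived = list(premises)
    for formula in proof:
        if formula not in derived and not follows_by_mp(formula, derived):
            return False
        derived.append(formula)
    return True

# Example: from p and p -> q, the proof of q checks out; r does not.
premises = ["p", ("->", "p", "q")]
assert check_proof(premises, ["p", ("->", "p", "q"), "q"])
assert not check_proof(premises, ["r"])
```

The point of the exercise is exactly the bucket-separation above: the checker manipulates formulas as inert data structures, with no appeal to what they "mean".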

Comment author: jimrandomh 09 January 2013 03:48:56PM *  19 points [-]

There are two major branches of programming: Functional and Imperative. Unfortunately, most programmers only learn imperative programming languages (like C++ or python). I say unfortunately, because these languages achieve all their power through what programmers call "side effects". The major downside for us is that this means they can't be efficiently machine checked for safety or correctness. The first self-modifying AIs will hopefully be written in functional programming languages, so learn something useful like Haskell or Scheme.

Please be careful about exposing programmers to ideology; it frequently turns into politics and kills their minds. This piece in particular is a well-known mindkiller, and I have personally witnessed great minds acting very stupid because of it. The functional/imperative distinction is not a real one, and even if it were, it's less important to provability than languages' complexity, the quality of their type systems and the amount of stupid lurking in their dark corners.

Comment author: earthwormchuck163 10 January 2013 08:18:19AM 1 point [-]

I have personally witnessed great minds acting very stupid because of it.

I'm curious. Can you give a specific example?

Comment author: NancyLebovitz 09 January 2013 06:07:41PM 3 points [-]

What I was thinking was "would you expect an FAI to do its own research about what it needs to do for people to be physically safe enough, or should something on the subject be built in?"

Comment author: earthwormchuck163 10 January 2013 08:15:48AM 1 point [-]

Note that this actually has very little to do with most of the seemingly hard parts of FAI theory. Much of it would be just as important if we wanted to create a recursively self modifying paper-clip maximizer, and be sure that it wouldn't accidentally end up with the goal of "do the right thing".

The actual implementation is probably far enough away that these issues aren't even on the radar screen yet.

Comment author: lavalamp 21 December 2012 03:09:51PM 0 points [-]

Thanks!

I suppose you can't prove a statement like "No matter how many times you expand this infinite family of axioms, you'll never encounter a non-standard number" in first-order logic? Should I not think of numbers and non-standard numbers as having different types? Or should I think of > as accepting differently typed things? (where I'm using the definition of "type" from computer science, e.g. "strongly-typed language")

Comment author: earthwormchuck163 10 January 2013 07:13:32AM 0 points [-]

Sorry I didn't answer this before; I didn't see it. To the extent that the analogy applies, you should think of non-standard numbers and standard numbers as having the same type. Specifically, the type of things that are being quantified over in whatever first-order logic you are using. And you're right that you can't prove that statement in first-order logic; worse, you can't even say it in first-order logic (see the next post, on Gödel's theorems, and Compactness/Löwenheim-Skolem for why).
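
The compactness argument behind that claim runs as follows (a standard construction, sketched here for reference):

```latex
% Add a fresh constant symbol c to the language of arithmetic and let
T \;=\; \operatorname{Th}(\mathbb{N}) \;\cup\; \{\, c > \underline{n} \;:\; n \in \mathbb{N} \,\}.
% Every finite subset of T mentions only finitely many axioms c > n,
% so it is satisfied in the standard model by interpreting c as a
% large enough standard number. By compactness, T itself has a model;
% in that model, c exceeds every standard numeral, i.e. the model
% contains a non-standard number. Hence no first-order theory of
% arithmetic can rule out non-standard elements, and "counting up
% from 0 never reaches c" is not expressible as a first-order sentence.
```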

Comment author: earthwormchuck163 10 January 2013 12:19:51AM 11 points [-]

I am well versed in most of this math, and a fair portion of the CS (mostly the more theoretical parts, not so much the applied bits). Should I contact you now, or should I study the rest of that stuff first?

In any case, this post has caused me to update significantly in the direction of "I should go into FAI research". Thanks.
