Comment author: earthwormchuck163 18 January 2013 04:39:53PM 2 points [-]

Oh wow this is so obvious in hindsight. Trying this asap thank you.

Comment author: Endovior 11 January 2013 09:39:26PM -2 points [-]

Machines aren't capable of evil. Humans make them that way.

-Lucca, Chrono Trigger

Comment author: earthwormchuck163 11 January 2013 10:03:38PM 4 points [-]

That line always bugged me, even when I was a little kid. It seems obviously false (especially in the in-game context).

I don't understand why this is a rationality quote at all; am I missing something, or is it just because of the superficial similarity to some of EY's quotes about apathetic uFAIs?

Comment author: [deleted] 11 January 2013 08:33:02PM 2 points [-]

a pill that makes ordinary experience awesome

Psychedelic drugs already exist...

In response to comment by [deleted] on Morality is Awesome
Comment author: earthwormchuck163 11 January 2013 08:41:05PM 4 points [-]

One time my roommate ate shrooms, and then he spent about 2 hours repeatedly knocking over an orange juice jug, and then picking it up again. It was bizarre. He said "this is the best thing ever" and was pretty sincere. It looked pretty silly from the outside though.

Comment author: lavalamp 10 January 2013 04:50:32PM 1 point [-]

Thanks. Hm. I think I see why that can't be said in first order logic.

...my brain is shouting "if I start at 0 and count up I'll never reach a nonstandard number, therefore they don't exist" at me so loudly that it's very difficult to restrict my thoughts to only first-order ones.

Comment author: earthwormchuck163 11 January 2013 02:22:41AM 2 points [-]

This is largely a matter of keeping track of the distinction between "first order logic: the mathematical construct" and "first order logic: the form of reasoning I sometimes use when thinking about math". The former is an idealized model of the latter, but they are distinct and belong in distinct mental buckets.

It may help to write a proof checker for first order logic. Or alternatively, if you are able to read higher math, study some mathematical logic/model theory.
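A minimal sketch of that exercise (hypothetical, and propositional rather than full first-order: formulas are atoms or nested `("->", A, B)` tuples, and the only inference rule is modus ponens):

```python
# Toy Hilbert-style proof checker. A proof is a list of formulas;
# each line must be an axiom or follow from two earlier lines by
# modus ponens (from A and A -> B, conclude B). Extending this to
# first-order logic would add quantifier rules and axiom schemas.

def check_proof(axioms, proof):
    """Return True iff every line is an axiom or a modus ponens step."""
    derived = []
    for line in proof:
        ok = line in axioms or any(
            earlier == ("->", other, line)
            for earlier in derived
            for other in derived
        )
        if not ok:
            return False
        derived.append(line)
    return True
```

For example, with axioms `{"p", ("->", "p", "q")}`, the proof `["p", ("->", "p", "q"), "q"]` checks out, while `["q"]` on its own does not.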

Comment author: jimrandomh 09 January 2013 03:48:56PM *  19 points [-]

There are two major branches of programming: Functional and Imperative. Unfortunately, most programmers only learn imperative programming languages (like C++ or Python). I say unfortunately, because these languages achieve all their power through what programmers call "side effects". The major downside for us is that this means they can't be efficiently machine checked for safety or correctness. The first self-modifying AIs will hopefully be written in functional programming languages, so learn something useful like Haskell or Scheme.

Please be careful about exposing programmers to ideology; it frequently turns into politics, which kills their minds. This piece in particular is a well-known mindkiller, and I have personally witnessed great minds acting very stupid because of it. The functional/imperative distinction is not a real one, and even if it were, it's less important to provability than languages' complexity, the quality of their type systems, and the amount of stupid lurking in their dark corners.

Comment author: earthwormchuck163 10 January 2013 08:18:19AM 1 point [-]

I have personally witnessed great minds acting very stupid because of it.

I'm curious. Can you give a specific example?

Comment author: NancyLebovitz 09 January 2013 06:07:41PM 3 points [-]

What I was thinking was "would you expect a FAI to do its own research about what it needs to do for people to be physically safe enough, or should something on the subject be built in?"

Comment author: earthwormchuck163 10 January 2013 08:15:48AM 1 point [-]

Note that this actually has very little to do with most of the seemingly hard parts of FAI theory. Much of it would be just as important if we wanted to create a recursively self modifying paper-clip maximizer, and be sure that it wouldn't accidentally end up with the goal of "do the right thing".

The actual implementation is probably far enough away that these issues aren't even on the radar screen yet.

Comment author: lavalamp 21 December 2012 03:09:51PM 0 points [-]

Thanks!

I suppose you can't prove a statement like "No matter how many times you expand this infinite family of axioms, you'll never encounter a non-standard number" in first-order logic? Should I not think of numbers and non-standard numbers as having different types? Or should I think of > as accepting differently typed things? (where I'm using the definition of "type" from computer science, e.g. "strongly-typed language")

Comment author: earthwormchuck163 10 January 2013 07:13:32AM 0 points [-]

Sorry I didn't answer this before; I didn't see it. To the extent that the analogy applies, you should think of non-standard numbers and standard numbers as having the same type. Specifically, the type of things that are being quantified over in whatever first order logic you are using. And you're right that you can't prove that statement in first order logic; worse, you can't even say it in first order logic (see the next post, on Gödel's theorems and Compactness/Löwenheim-Skolem, for why).
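The obstruction here is the compactness theorem; a standard sketch of the argument:

```latex
% Why no first-order theory extending PA can exclude non-standard numbers.
% Add a fresh constant $c$ to the language and consider the theory
\[
  T \;=\; \mathrm{PA} \;\cup\; \{\, c > 0,\ c > 1,\ c > 2,\ \dots \,\}.
\]
% Every finite subset of $T$ mentions only finitely many of the axioms
% $c > n$, so it is satisfied in the standard model by interpreting $c$
% as a large enough ordinary number. By compactness, $T$ itself has a
% model, and in that model $c$ is greater than every standard numeral:
% a non-standard number. So "every number is reached by counting up
% from 0" is not expressible as a first-order consequence of PA.
```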

Comment author: earthwormchuck163 10 January 2013 12:19:51AM 11 points [-]

I am well versed in most of this math, and a fair portion of the CS (mostly the more theoretical parts, not so much the applied bits). Should I contact you now, or should I study the rest of that stuff first?

In any case, this post has caused me to update significantly in the direction of "I should go into FAI research". Thanks.

Comment author: MarkusRamikin 08 January 2013 10:18:11PM *  0 points [-]

I see, thanks.

Of course Kyubey never reveals how much saving-of-the-universe Madoka's life would pay for exactly. It's not just her life (and suffering) they want, but all the MGs in history, past and future, for an unspecified extension of the Universe's lifespan...

Comment author: earthwormchuck163 08 January 2013 10:52:12PM 0 points [-]

Also, Kyubey clearly has pretty drastically different values from people, and thus his notion of saving the universe is probably not quite right for us.

Comment author: Nick_Tarleton 05 January 2013 01:19:13AM 19 points [-]

I kept expecting someone to object that "this Turing machine never halts" doesn't count as a prediction, since you can never have observed it to run forever.

Comment author: earthwormchuck163 05 January 2013 09:50:42PM 3 points [-]

If you take this objection seriously, then you should also take issue with predictions like "nobody will ever transmit information faster than the speed of light", or things like it. After all, you can never actually observe the laws of physics to have been stable and universal for all time.

If nothing else, you can consider each as being a compact specification of an infinite sequence of testable predictions: "doesn't halt after one step", "doesn't halt after two steps",... "doesn't halt after n steps".
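Each prediction in that sequence is individually testable: just run the machine for n steps and see. A toy illustration (hypothetical, with a "machine" modeled as a Python generator that yields once per step rather than a real Turing-machine encoding):

```python
# halts_within runs a machine for at most n steps. It can confirm
# "halts within n steps", or confirm the single testable prediction
# "doesn't halt within n steps"; it can never confirm "never halts".

def loop_forever():
    while True:
        yield  # one "step" per yield; never returns


def halts_within(machine, n):
    """Run the machine for at most n steps; True if it halts by then."""
    it = machine()
    for _ in range(n):
        try:
            next(it)
        except StopIteration:
            return True
    return False
```

For `loop_forever`, every finite check comes back "didn't halt yet", which is exactly the infinite family of predictions the comment describes.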
