g_pepper comments on Second-Order Logic: The Controversy - Less Wrong

24 Post author: Eliezer_Yudkowsky 04 January 2013 07:51PM

Comment author: g_pepper 19 March 2015 11:24:16PM 0 points [-]

I agree with this:

There's no real semantic distinction between the original contents of the One Tape, or the finite contents of the Input Tape, or an arbitrarily complicated state-machine program, actually. You can build tape data for a Universal Turing Machine to simulate any other Turing machine.

and with this:

Real, physical computing devices, both biological and silicon, run coinductive (infinite-loop-until-it-isn't) programs all the time. Every operating system kernel, or message loop, or game loop is a coinductive program: its chief job is to iterate forever, taking a finite time to process I/O in each step. Each step performs some definite amount of semantic work, but there is an indefinite number of steps (generally: until a special "STOP NOW" input is given).
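The quoted loop structure can be sketched in a few lines of Python (a toy illustration of the pattern; `message_loop` and the "STOP NOW" sentinel are illustrative names, not anything from the comment): each iteration performs a finite, definite amount of work, but the number of iterations is indefinite until the terminating input arrives.

```python
def message_loop(inputs):
    """Coinductive loop: finite work per step, indefinitely many steps."""
    processed = 0
    for msg in inputs:            # `inputs` may be an unbounded stream
        if msg == "STOP NOW":     # the special terminating input
            break
        processed += 1            # a definite amount of semantic work
    return processed

# With a finite stream containing the sentinel, the loop terminates;
# fed an endless generator without "STOP NOW", it would productively
# run forever, like an OS kernel or game loop.
print(message_loop(["a", "b", "STOP NOW", "c"]))  # prints 2
```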

However, I don't see how you are going to be able to use these facts to solve the halting problem. I'm guessing that, rather than statically examining the Turing machine, you will execute it for some period of time and study the execution, and after some finite amount of time, you expect to be able to correctly state whether or not the machine will halt. Is that the basic idea?

Comment author: [deleted] 20 March 2015 11:04:15AM *  0 points [-]

I'm guessing that, rather than statically examining the Turing machine, you will execute it for some period of time and study the execution, and after some finite amount of time, you expect to be able to correctly state whether or not the machine will halt. Is that the basic idea?

Basically, yes. "Halting Empiricism", you could call it. The issue is precisely that you can't do empirical reasoning via deduction from a fixed axiom set (i.e., from a fixed, finite program halts : program -> boolean). You need to do it by inductive reasoning instead.
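For concreteness, here is what "execute and observe" buys you on its own: running a machine for a finite step budget can confirm halting, but can never confirm non-halting (a minimal sketch under my own assumptions; `observe_halting` and the step-function encoding are illustrative, not the commenter's proposal).

```python
def observe_halting(step, state, budget):
    """Empirically run a machine for up to `budget` steps.

    Returns "halts" if the machine stops within the budget, else
    "unknown" -- finite observation alone is a semidecision procedure.
    """
    for _ in range(budget):
        state = step(state)
        if state is None:        # the machine signals a halt
            return "halts"
    return "unknown"

# A machine that counts down and halts at zero:
countdown = lambda n: None if n == 0 else n - 1
print(observe_halting(countdown, 3, 100))          # "halts"
# A machine that runs forever: observation stays inconclusive.
print(observe_halting(lambda n: n + 1, 0, 100))    # "unknown"
```

The gap between "halts" and "unknown" is exactly where something beyond fixed deduction would have to do its work.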

Comment author: RichardKennaway 20 March 2015 01:23:16PM 1 point [-]

You need to do it by inductive reasoning, instead.

What is inductive reasoning? Bayesian updating? Or something else?

Comment author: [deleted] 20 March 2015 05:41:24PM 0 points [-]

My reply to you is the same as my reply to g_pepper: it's easier for me to just do my background research, double-check everything, and eventually publish the full formalism than it is to explain all the details in a blog comment.

You are also correct to note that whatever combination of machine, person, input tape, and empirical data I provide, the X Machine can never solve the Halting Problem for the X Machine. The real mathematical work in my thinking here is demonstrating the existence of an ordering: there should exist a sense in which some machines are "smaller" than others, and A can solve B's halting problem when A is "strictly larger" than B, possibly by some necessary amount.

(Of course, this already holds if A has an nth-level Turing Oracle, B has an mth-level Turing Oracle, and n > m, but that's trivial from the definition of an oracle. I'm thinking of something that actually concerns physically realizable machines.)
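One rigorous instance of such an ordering already exists for physically realizable, space-bounded machines (my illustration, not the commenter's construction): a machine confined to S tape cells, with Q states and G symbols, has at most Q * S * G**S distinct configurations, so any run longer than that must repeat a configuration and therefore never halts. A decider with enough memory to count those configurations is "strictly larger" in a concrete sense.

```python
def halts_space_bounded(delta, q0, accepting, cells, alphabet):
    """Decide halting for a TM confined to `cells` tape cells.

    By pigeonhole, any run longer than Q * S * G**S steps repeats a
    configuration and thus loops forever, so simulation up to that
    bound is a genuine decision procedure -- but only because the
    decider has strictly more resources than the machine it judges.
    """
    tape = [alphabet[0]] * cells
    state, head = q0, 0
    states = {s for (s, _) in delta} | {q0} | set(accepting)
    bound = len(states) * cells * len(alphabet) ** cells
    for _ in range(bound + 1):
        if state in accepting:
            return True                           # halted within bound
        state, write, move = delta[(state, tape[head])]
        tape[head] = write
        head = max(0, min(cells - 1, head + move))
    return False                                  # exceeded bound: loops

# Toy machine: walk right writing 1s, halt upon rereading a 1.
delta = {("go", 0): ("go", 1, +1), ("go", 1): ("done", 1, 0)}
print(halts_space_bounded(delta, "go", {"done"}, 4, [0, 1]))       # True
# A machine that spins in place never reaches an accepting state.
loop = {("spin", 0): ("spin", 0, 0)}
print(halts_space_bounded(loop, "spin", set(), 2, [0]))            # False
```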

Like I said: trying to go into extensive detail via blog comment will do nothing but unpack my intuitions about the problem piecemeal, increasing net confusion. The proper thing to do is formalize, and that's going to take a bunch of time.

Comment author: g_pepper 20 March 2015 12:54:06PM 0 points [-]

This is interesting. Do you have an outline of how a person would go about determining whether a Turing machine will halt? If so, I would be interested in seeing it. Alternatively, if you have a more detailed argument as to why a person will be able to determine whether an arbitrary Turing machine will halt, even if that argument does not contain the details of how the person would proceed, I would be interested in seeing that.

Or, are you just making the argument that an intelligent person ought to be able in every case to use some combination of inductive reasoning, creativity, mathematical intuition, etc., to correctly determine whether or not a Turing machine will halt?

Comment author: [deleted] 20 March 2015 05:41:45PM 1 point [-]

My reply to you is the same as my reply to Richard Kennaway: it's easier for me to just do my background research, double-check everything, and eventually publish the full formalism than it is to explain all the details in a blog comment.

Comment author: g_pepper 20 March 2015 06:06:13PM 0 points [-]

Fair enough. I am looking forward to seeing the formalism!