
MrMind comments on Open Thread, May 25 - May 31, 2015 - Less Wrong Discussion

3 Post author: Gondolinian 25 May 2015 12:00AM


Comment author: adamzerner 27 May 2015 05:06:31PM *  1 point [-]

I'm learning about Turing Machines for the first time. I think I get the gist of it, and I'm asking myself the question, "What's the big deal?". Here's my attempt at an answer:

  1. Consider the idea of Thingspace. A thing is described by its components/properties, so you could plot a point in Thingspace that describes everything about, say, John Smith.

  2. You could encode that point in Thingspace. I.e., you could create a code that says "001010111010101001...1010101010101" represents point (42343, 12312, 11, 343223423432423, ..., 123123123123) in Thingspace.

  3. A Turing Machine seems like it basically says, "If the state is 0001010101011...10101011, change it to this." It's looking at things at a really, really low level - the level of individual bits. These bits form a map that, in theory, could represent the territory with perfect accuracy (or really, is capable of doing so).

So a Turing Machine could:

a) Look at a model of reality at the lowest possible level.

b) Manipulate that model at the lowest possible level, using a).
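The bit-level picture above can be made concrete with a toy simulator (my own construction for illustration; the rule format below is one common convention, not the only way to specify a Turing machine):

```python
# Minimal Turing machine sketch. The "state" the comment describes is the
# machine's internal state plus the bits on the tape. Each rule says:
# "if in this state, reading this symbol: write a symbol, move, switch state."

def run(tape, rules, state="start", blank="_"):
    tape = dict(enumerate(tape))  # sparse tape: position -> symbol
    pos = 0
    while state != "halt":
        symbol = tape.get(pos, blank)
        write, move, state = rules[(state, symbol)]
        tape[pos] = write
        pos += 1 if move == "R" else -1
    return "".join(tape[i] for i in sorted(tape)).rstrip(blank)

# Example machine: flip every bit on the tape, then halt at the first blank.
flip = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", "_"): ("_", "R", "halt"),
}

print(run("001011", flip))  # -> 110100
```

Everything the machine "does" is just these local rewrites of individual symbols, which is exactly the low-level manipulation described in a) and b).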

So back to the original question - what's the big deal? The big deal seems to be that "When you operate on such a low level, you could 'do anything'. The model could be perfectly accurate, and you aren't limited to making coarse adjustments to the model."

To what extent is my understanding accurate? Can anyone elaborate?

EDIT: It seems somewhat obvious that "if you had such precise control, you'd be able to perfectly model reality and all of that". But to me, the hard parts seem to be:

1) Creating a physical machine that does this - one that has enough memory, that computes quickly enough, and that is wired to act based on its state. (hardware)

2) Giving the machine the right instructions. (software)

I sense that these initial impressions are ignorant of something though - I just don't know what.

Comment author: MrMind 28 May 2015 07:40:14AM 3 points [-]

You first need to realize that Turing machines were invented before the first computer was ever built; they were born as a mathematical model, an ideal construction.
The problem at the time was that there was a natural class of functions on the natural numbers, the recursive functions, and the Turing machine model helped establish that partial recursive functions = computable functions.
Computable, at the time, meant that a human being could calculate them with the aid of pen and paper.

Nowadays, depending on the branch of computer science you want to study, either recursive functions or Turing machines are used as the default general model of 'computability', either to study specializations of that concept (complexity theory) or to show that some classes of functions cannot be computed (Turing jumps, oracles, etc.)
You can think of them as idealized computers, and they are a big deal in the sense that they are the cornerstone upon which all of computer science is built, the same way 'the continuum' is a big deal for calculus.
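To make "computable" concrete: here is a toy sketch (my own construction, not from the comment above) of a Turing machine computing the successor function n -> n+1 over unary numerals - exactly the kind of pen-and-paper calculation the formal model was built to capture:

```python
# The successor function, a basic recursive function, computed by a Turing
# machine on unary numerals (n written as n ones). The machine scans right
# past the input and appends one more "1".

def successor_machine(unary):
    tape, pos, state, blank = list(unary), 0, "scan", "_"
    rules = {
        ("scan", "1"): ("1", "R", "scan"),  # skip over the input
        ("scan", "_"): ("1", "R", "halt"),  # append a 1, then stop
    }
    while state != "halt":
        symbol = tape[pos] if pos < len(tape) else blank
        write, move, state = rules[(state, symbol)]
        if pos < len(tape):
            tape[pos] = write
        else:
            tape.append(write)
        pos += 1  # this machine only ever moves right
    return "".join(tape)

print(successor_machine("111"))  # "1111": the successor of 3 is 4
```

Showing that each basic recursive function, and each way of combining them, has a machine like this is the heart of the "partial recursive functions = computable functions" result.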