paulfchristiano comments on Superintelligence 5: Forms of Superintelligence - Less Wrong

Post author: KatjaGrace 14 October 2014 01:00AM


Comment author: KatjaGrace 14 October 2014 01:07:03AM 4 points

As pointed out in note 14, humans can in principle solve any computable problem, because they can carry out the steps of running a Turing machine (very slowly), and we know/suspect Turing machines can do everything computable. It would seem, then, that a quality superintelligence is just radically faster than a human at these problems. Is it different from a speed superintelligence?
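For concreteness, the purely mechanical procedure a (very patient) person would carry out can be sketched as a short simulator (an illustrative sketch; the interface and the unary-increment example machine are my own inventions, not anything from the post):

```python
# Minimal Turing machine simulator: the mechanical steps a human
# would execute by hand, one lookup-write-move cycle at a time.

def run_tm(transitions, tape, state="start", blank="_", max_steps=10_000):
    """transitions: (state, symbol) -> (new_state, write_symbol, move)."""
    tape = dict(enumerate(tape))  # sparse tape, indexed by cell position
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = tape.get(head, blank)
        state, write, move = transitions[(state, symbol)]
        tape[head] = write
        head += 1 if move == "R" else -1
    return "".join(tape[i] for i in sorted(tape)).strip(blank)

# Invented example machine: append a 1 to a unary string (increment).
inc = {
    ("start", "1"): ("start", "1", "R"),  # scan right over the 1s
    ("start", "_"): ("halt", "1", "R"),   # write one more 1, then halt
}
print(run_tm(inc, "111"))  # -> 1111
```

Every step is a table lookup plus a write and a move, which is why the claim is only about patience and accuracy, not ingenuity.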

Comment author: SteveG 14 October 2014 01:37:30AM 1 point

Humans cannot simulate a Turing machine because they are too inaccurate.

Comment author: paulfchristiano 14 October 2014 01:48:55AM 2 points

If humans merely fail at any particular mechanical operation with 5% probability, then of course you could implement your computation in some form that is resistant to such errors. Even with a more complicated error pattern, where you might, e.g., fail in a byzantine way during intervals of power-law distributed length, or fail at each type of task with 5% probability (but fail at that task every time you repeated it), it seems not-so-hard to implement Turing machines in a robust way.
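For the independent-failure case, the standard fix is redundancy: repeat each mechanical step several times and take a majority vote (a minimal sketch; the 5% error model, the corruption behavior, and the repeat count are illustrative assumptions, and note this does not help with the correlated "fails every time you repeat it" pattern, which needs a different encoding):

```python
import random
from collections import Counter

def unreliable_step(f, x, p_fail=0.05):
    """Apply f, but with 5% probability silently botch the step."""
    if random.random() < p_fail:
        return x  # toy corruption model: the step is skipped
    return f(x)

def robust_step(f, x, repeats=5, p_fail=0.05):
    """Run the step independently several times and majority-vote."""
    results = [unreliable_step(f, x, p_fail) for _ in range(repeats)]
    return Counter(results).most_common(1)[0][0]

# With 5 repeats, a step fails only when 3+ of 5 attempts fail:
# about C(5,3) * 0.05^3 ~ 0.1%, down from 5% per raw attempt.
random.seed(0)
ok = sum(robust_step(lambda v: v + 1, 0) == 1 for _ in range(10_000))
print(ok / 10_000)  # close to 1.0
```

Stacking more repeats drives the per-step error rate down exponentially, which is the usual argument that bounded independent noise is not a fundamental obstacle.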

Comment author: SteveG 14 October 2014 02:03:23AM 1 point

In one of my classes I emulated a Turing machine.

Based on that experience, I am going to say that a massive team of people would have a hard time with the task.

If you want to understand the limits of human accuracy in this kind of task, you can look at how well people do double-entry bookkeeping. It's a painfully slow and error-prone process.
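Double-entry bookkeeping is itself a crude error-detection code: each transaction is posted twice with opposite signs, so a nonzero total flags many (though not all) mistakes. A toy sketch, with made-up entries:

```python
def ledger_balanced(entries):
    """entries: list of (account, amount) pairs. Under double entry,
    every transaction is posted once as a debit (+) and once as a
    credit (-), so a correct ledger sums to zero. A nonzero sum
    catches many slips, though not compensating errors."""
    return sum(amount for _, amount in entries) == 0

# Made-up entries: a 100-unit sale, posted to two accounts.
good = [("cash", 100), ("revenue", -100)]
bad = [("cash", 100), ("revenue", -10)]  # transcription slip
print(ledger_balanced(good), ledger_balanced(bad))  # -> True False
```

The check is cheap for a machine and painfully slow for people, which is the point being made about human error rates.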

Error rates are a fundamental element of intelligence, whether we are taking a standardized test or trying to succeed in a practical environment like administering health care or driving.

The theoretical point is interesting, but I still want to argue that error rates are fundamental in practice. I would like some help with the nuances.

Comment author: KatjaGrace 14 October 2014 02:10:12AM 1 point

There may be a distinction to be made between an agent who could do any intellectual task if they carried out the right procedure, and an agent who can figure out for themselves which procedure to perform. While most humans could implement a Turing machine of some kind if they were told how, and wanted to, it's not obvious they could arrange this from their current state.

Comment author: SteveG 14 October 2014 02:29:02AM 1 point

That's a separate topic from error rate, which I still want help with, but also interesting.

Figuring out what procedure to perform is a kind of design task.

Designing includes:

- Defining goals and needs
- Defining a space of alternatives
- Searching a space of alternatives, hopefully with some big shortcuts
- Possibly optimizing
- Testing and iteration
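Those steps can be compressed into a generic search loop (a toy sketch; the cable-rating problem, the alternatives, and the scoring function are all invented for illustration):

```python
def design(goals, alternatives, score):
    """Design as search: score each alternative against the goals and
    keep the best. A brute-force sketch with no shortcuts; real design
    prunes the space, and testing/iteration happen on the result."""
    return max(alternatives, key=lambda a: score(a, goals))

# Invented toy problem: choose a cable rating that meets a current need.
goals = {"min_amps": 30}
alternatives = [10, 15, 20, 30, 40]  # the space of alternatives (amps)
def score(rating, g):
    if rating < g["min_amps"]:
        return float("-inf")  # fails the need outright
    return -rating            # among adequate options, smaller is better
print(design(goals, alternatives, score))  # -> 30
```

The hard parts of real design, defining the space and finding the shortcuts, are exactly what this brute-force version leaves out.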

Design is something that people fail at, over and over. They succeed often enough to build civilizations.

I feel that design is a fundamental element of quality and collective intelligence. I would love to sort through it in more detail.