hairyfigment comments on Siren worlds and the perils of over-optimised search - Less Wrong

Post author: Stuart_Armstrong 07 April 2014 11:00AM




Comment author: PhilosophyTutor 30 April 2014 12:10:26PM

I'll try to lay out my reasoning in clear steps, and perhaps you will be able to tell me where we differ exactly.

  1. Hairyfigment is capable of reading Orwell's 1984, and Banks' Culture novels, and identifying that the people in the hypothetical 1984 world have less liberty than the people in the hypothetical Culture world.
  2. This task does not require the full capabilities of hairyfigment's brain; in fact, it requires substantially less.
  3. A program that does A+B has to be more complicated than a program that does A alone, where A and B are two different, significant sets of problems to solve. (EDIT: If these programs are efficiently written)
  4. Given 1-3, a program that can emulate hairyfigment's liberty-distinguishing faculty can be much, much less complicated than a program that can do that plus everything else hairyfigment's brain can do.
  5. If we can simulate a complete human brain, that is the same as having solved the strong AI problem.
  6. A program that can do everything hairyfigment's brain can do is a program that simulates a complete human brain.
  7. Given 4-6 it is much less complicated to emulate hairyfigment's liberty-distinguishing faculty than to solve the strong AI problem.
  8. Given 7, it is unreasonable to postulate a world where we have solved the strong AI problem in spades, so much so that we have a vastly superhuman AI, yet we still haven't solved the much simpler problem of emulating hairyfigment's liberty-distinguishing faculty.
Comment author: hairyfigment 30 April 2014 03:06:45PM

It's the hidden step where you move from examining two fictions, worlds created to be transparent to human examination, to assuming I have some general "liberty-distinguishing faculty".

Comment author: PhilosophyTutor 01 May 2014 12:02:48AM

We have identified the point on which we differ, which is excellent progress. I used fictional worlds as examples, but would it solve the problem if I used North Korea and New Zealand as examples instead, or the world in 1814 and the world in 2014? Those worlds or nations were not created to be transparent to human examination but I believe you do have the faculty to distinguish between them.

I don't see how this is harder than getting an AI to handle any other context-dependent, natural language descriptor, like "cold" or "heavy". "Cold" does not have a single, unitary definition in physics, but it is not that hard a problem to figure out when you should say "that drink is cold" or "that pool is cold" or "that liquid hydrogen is cold". Children manage it, and they are not vastly superhuman artificial intelligences.

Comment author: TheAncientGeek 30 April 2014 03:49:05PM

Hairyfigment, do you mean that detecting liberty in reality is different from, or harder than, detecting liberty in fiction?