Comment author: CCC 30 April 2014 01:37:43PM *  0 points

A program that does A+B has to be more complicated than a program that does A alone, where A and B are two different, significant sets of problems to solve.

Incorrect. I can write a horrendously complicated program to solve 1+1; and a far simpler program to add any two integers.

Admittedly, neither of those is a particularly significant problem; nonetheless, unnecessary complexity can be added to any program intended to do A alone.
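CCC's quibble can be made concrete. Here is a hedged sketch (both functions are my own invention, not anything from the thread): a needlessly convoluted program that can only ever compute 1 + 1, next to a far simpler one that adds any two integers.

```python
# A horrendously complicated program that only solves 1 + 1: it builds
# the Peano numeral for 1 twice (zero = (), successor = wrapping in a
# tuple), adds them by walking one successor chain, then counts.
def one_plus_one():
    zero = ()
    succ = lambda n: (n,)   # successor: wrap the numeral in a tuple
    one = succ(zero)
    total, other = one, one
    while other != zero:    # take one successor for each layer of `other`
        total = succ(total)
        other = other[0]
    count = 0
    while total != zero:    # convert the Peano numeral back to an int
        count += 1
        total = total[0]
    return count

# A far simpler program that adds *any* two integers:
def add(a, b):
    return a + b

assert one_plus_one() == 2
assert add(1, 1) == 2
```

The complicated program solves a strict subset of what the simple one solves, yet is much longer, which is exactly the point: program length tracks problem difficulty only for efficiently written programs.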

It would be true to say that the shortest possible program capable of solving A+B must be more complex than the shortest possible program to solve A alone, though, so this minor quibble does not affect your conclusion.

Given 4-6 it is much less complicated to emulate hairyfigment's liberty-distinguishing faculty than to solve the strong AI problem.

Granted.

Given 7, it is unreasonable to postulate a world where we have solved the strong AI problem, in spades, so much so that we have a vastly superhuman AI, but we still haven't solved hairyfigment's liberty-distinguishing-faculty problem.

Why? Just because the problem is less complicated, does not mean it will be solved first. A more complicated problem can be solved before a less complicated problem, especially if there is more known about it.

Comment author: PhilosophyTutor 01 May 2014 12:07:14AM 0 points

Why? Just because the problem is less complicated, does not mean it will be solved first. A more complicated problem can be solved before a less complicated problem, especially if there is more known about it.

To clarify, it seems to me that modelling hairyfigment's ability to decide whether people have liberty is not only simpler than modelling hairyfigment's whole brain, but that it is also a subset of that problem. It does seem to me that you have to solve all subsets of Problem B before you can be said to have solved Problem B, hence you have to have solved the liberty-assessing problem if you have solved the strong AI problem, hence it makes no sense to postulate a world where you have a strong AI but can't explain liberty to it.

Comment author: hairyfigment 30 April 2014 03:06:45PM -1 points

…It's the hidden step where you move from examining two fictions, worlds created to be transparent to human examination, to assuming I have some general "liberty-distinguishing faculty".

Comment author: PhilosophyTutor 01 May 2014 12:02:48AM 1 point

We have identified the point on which we differ, which is excellent progress. I used fictional worlds as examples, but would it solve the problem if I used North Korea and New Zealand as examples instead, or the world in 1814 and the world in 2014? Those worlds or nations were not created to be transparent to human examination but I believe you do have the faculty to distinguish between them.

I don't see how this is harder than getting an AI to handle any other context-dependent, natural language descriptor, like "cold" or "heavy". "Cold" does not have a single, unitary definition in physics but it is not that hard a problem to figure out when you should say "that drink is cold" or "that pool is cold" or "that liquid hydrogen is cold". Children manage it and they are not vastly superhuman artificial intelligences.
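The point that "cold" is context-dependent rather than undefinable can be sketched in a few lines. This is a toy illustration only; the contexts and threshold temperatures below are invented for the example, not drawn from the thread or from any physical standard.

```python
# "Cold" has no single physical definition, but context plus a threshold
# often suffices. Thresholds here are illustrative guesses in Celsius.
COLD_BELOW_C = {
    "drink": 10.0,              # a drink below ~10 C reads as cold
    "swimming pool": 20.0,      # a pool below ~20 C reads as cold
    "liquid hydrogen": -240.0,  # hydrogen is only liquid near -253 C
}

def is_cold(thing: str, temp_c: float) -> bool:
    """Return True if `temp_c` counts as cold for this kind of thing."""
    return temp_c < COLD_BELOW_C[thing]

assert is_cold("drink", 4.0)
assert not is_cold("swimming pool", 26.0)
assert is_cold("liquid hydrogen", -253.0)
```

A real system would need far richer context than a lookup table, but the structure of the problem, one word, many context-indexed criteria, is the same.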

Comment author: hairyfigment 30 April 2014 08:28:26AM -2 points

No, this seems trivially false. No subset of my brain can reliably tell when an arbitrary Turing machine halts and when it doesn't, no matter how meaningful I consider the distinction to be. I don't know why you would say this.

Comment author: PhilosophyTutor 30 April 2014 12:10:26PM *  2 points

I'll try to lay out my reasoning in clear steps, and perhaps you will be able to tell me where we differ exactly.

  1. Hairyfigment is capable of reading Orwell's 1984, and Banks' Culture novels, and identifying that the people in the hypothetical 1984 world have less liberty than the people in the hypothetical Culture world.
  2. This task does not require the full capabilities of hairyfigment's brain, in fact it requires substantially less.
  3. A program that does A+B has to be more complicated than a program that does A alone, where A and B are two different, significant sets of problems to solve. (EDIT: If these programs are efficiently written)
  4. Given 1-3, a program that can emulate hairyfigment's liberty-distinguishing faculty can be much, much less complicated than a program that can do that plus everything else hairyfigment's brain can do.
  5. If we can simulate a complete human brain, that is the same as having solved the strong AI problem.
  6. A program that can do everything hairyfigment's brain can do is a program that simulates a complete human brain.
  7. Given 4-6 it is much less complicated to emulate hairyfigment's liberty-distinguishing faculty than to solve the strong AI problem.
  8. Given 7, it is unreasonable to postulate a world where we have solved the strong AI problem, in spades, so much so that we have a vastly superhuman AI, but we still haven't solved hairyfigment's liberty-distinguishing-faculty problem.

Comment author: hairyfigment 30 April 2014 07:34:04AM -1 points

While I don't know how much I believe the OP, remember that "liberty" is a hotly contested term. And that's without a superintelligence trying to create confusing cases. Are you really arguing that "a relatively small part of the processing power of one human brain" suffices to answer all questions that might arise in the future, well enough to rule out any superficially attractive dystopia?

Comment author: PhilosophyTutor 30 April 2014 08:07:48AM 3 points

I really am. I think a human brain could rule out superficially attractive dystopias and also do many, many other things as well. If you think you personally could distinguish between a utopia and a superficially attractive dystopia given enough relevant information (and logically you must think so, because you are using them as different terms) then it must be the case that a subset of your brain can perform that task, because it doesn't take the full capabilities of your brain to carry out that operation.

I think this subtopic is unproductive, however, for reasons already stated. I don't think there is any possible world where we cannot achieve a tiny, partial solution to the strong AI problem (codifying "liberty" and similar terms) but can achieve a full-blown, transcendentally superhuman AI. The first problem is trivial compared to the second. It's not a trivial problem by any means; it's a very hard problem that I don't see being overcome in the next few decades, but it's trivial compared to the problem of strong AI, which is in turn trivial compared to the problem of vastly superhuman AI. I think Stuart_Armstrong is swallowing a whale and then straining at a gnat.

Comment author: Stuart_Armstrong 30 April 2014 04:55:07AM 0 points

tell the AI not to take actions which the simulated brain thinks offend against liberty.

How? "tell", "the simulated brain thinks" "offend": defining those incredibly complicated concepts contains nearly the entirety of the problem.

Comment author: PhilosophyTutor 30 April 2014 06:28:16AM 1 point

I could be wrong, but I believe this argument relies on an inconsistent assumption: we assume we have solved the problem of creating an infinitely powerful AI, but not the problem of operationally defining commonplace English words, which hundreds of millions of people successfully understand, in such a way that a computer can perform operations using them.

It seems to me that the strong AI problem is many orders of magnitude more difficult than the problem of rigorously defining terms like "liberty". I imagine that a relatively small part of the processing power of one human brain is all that is needed to perform operations on terms like "liberty" or "paternalism" and engage in meaningful use of them so it is a much, much smaller problem than the problem of creating even a single human-level AI, let alone a vastly superhuman AI.

If in our imaginary scenario we can't even define "liberty" in such a way that a computer can use the term, it doesn't seem very likely that we can build any kind of AI at all.

Comment author: Stuart_Armstrong 29 April 2014 12:07:41PM 0 points

I think if we can assume we have solved the strong AI problem, we can assume we have solved the much lesser problem of explaining liberty to an AI.

The strong AI problem is much easier to solve than the problem of motivating an AI to respect liberty. For instance, the first one can be brute-forced (e.g. AIXItl with vast resources); the second one can't. Having the AI understand human concepts of liberty is pointless unless it's motivated to act on that understanding.

An excess of anthropomorphisation is bad, but an analogy could be drawn with creating new life (which humans can do) versus motivating that new life to follow specific rules should it become powerful (which humans are pretty bad at).

Comment author: PhilosophyTutor 29 April 2014 09:40:30PM *  4 points

The strong AI problem is much easier to solve than the problem of motivating an AI to respect liberty. For instance, the first one can be brute-forced (e.g. AIXItl with vast resources); the second one can't.

I don't believe that strong AI is going to be as simple to brute force as a lot of LessWrongers believe, personally, but if you can brute force strong AI then you can just get it to run a neuron-by-neuron simulation of the brain of a reasonably intelligent first year philosophy student who understands the concept of liberty and tell the AI not to take actions which the simulated brain thinks offend against liberty.

That is assuming that in this hypothetical future scenario where we have a strong AI we are capable of programming that strong AI to do any one thing instead of another, but if we cannot do that then the entire discussion seems to me to be moot.

Comment author: drnickbone 29 April 2014 10:18:15AM *  0 points

This also creates some interesting problems... Suppose a very powerful AI is given human liberty as a goal (or discovers that this is a goal using coherent extrapolated volition). Then it could quickly notice that its own existence is a serious threat to that goal, and promptly destroy itself!

Comment author: PhilosophyTutor 29 April 2014 11:34:15AM 0 points

I think Asimov did this first with his Multivac stories, although rather than promptly destroy itself Multivac executed a long-term plan to phase itself out.

Comment author: Stuart_Armstrong 29 April 2014 09:28:38AM *  0 points

I don't think discreet but total control over a world is compatible with things like liberty

Precisely and exactly! That's the whole of the problem - optimising for one thing (appearance) results in the loss of other things we value.

which seem like obvious qualities to specify in an optimal world we are building an AI to search for.

Next challenge: define liberty in code. This seems extraordinarily difficult.

model of AI as an all-powerful genie capable of absolutely anything with no constraints whatsoever.

So we do agree that there are problems with an all-powerful genie? Once we've agreed on that, we can scale back to lower AI power, and see how the problems change.

(The risk is not so much that the AI would be an all-powerful genie, but that it could be an all-powerful genie compared with humans.)

Comment author: PhilosophyTutor 29 April 2014 11:29:33AM 3 points

Precisely and exactly! That's the whole of the problem - optimising for one thing (appearance) results in the loss of other things we value.

This just isn't always so. If you instruct an AI to optimise a car for speed, efficiency and durability but forget to specify that it has to be aerodynamic, you aren't going to get a car shaped like a brick. You can't optimise for speed and efficiency without optimising for aerodynamics too. In the same way it seems highly unlikely to me that you could optimise a society for freedom, education, just distribution of wealth, sexual equality and so on without creating something pretty close to optimal in terms of unwanted pregnancies, crime and other important axes.

Even if it's possible to do this, it seems like something which would require extra work and resources to achieve. A magical genie AI might be able to make you a super-efficient brick-shaped car by using Sufficiently Advanced Technology indistinguishable from magic, but even for that genie it would have to be more work than making an equally optimal, sensibly shaped car by the defined parameters. In the same way, an effectively God-like hypothetical AI might be able to make a siren world that optimised for everything except crime, a world perfect in every way except that it was rife with crime, but it seems like that would be more work, not less.

Next challenge: define liberty in code. This seems extraordinarily difficult.

I think if we can assume we have solved the strong AI problem, we can assume we have solved the much lesser problem of explaining liberty to an AI.

So we do agree that there are problems with an all-powerful genie?

We've got a problem with your assumptions about all-powerful genies, I think, because I think your argument relies on the genie being so ultimately all-powerful that it is exactly as easy for the genie to make an optimal brick-shaped car or an optimal car made out of tissue paper and post-it notes as it is for the genie to make an optimal proper car. I don't think that genie can exist in any remotely plausible universe.

If it's not all-powerful to that extreme then it's still going to be easier for the genie to make a society optimised (or close to it) across all the important axes at once than one optimised across all the ones we think to specify while tanking all the rest. So for any reasonable genie I still think market worlds don't make sense as a concept. Siren worlds, sure. Market worlds, not so much, because the things we value are deeply interconnected and you can't just arbitrarily dump-stat some while efficiently optimising all the rest.

Comment author: Stuart_Armstrong 28 April 2014 11:42:33AM 0 points

The "no conception" example is just to illustrate that bad things happen when you ask an AI to optimise along a certain axis without fully specifying what we want (which is hard/impossible).

A marketing world is fully optimised along the "convince us to choose this world" axis. If, at any point, the AI is confronted with a choice along the lines of "remove genuine liberty to best give the appearance of liberty/happiness", it will choose to do so.

That's actually the most likely way a marketing world could go wrong - the more control the AI has over people's appearance and behaviour, the more capable it is of making the world look good. So I feel we should presume that discrete-but-total AI control over the world's "inhabitants" would be the default in a marketing world.

Comment author: PhilosophyTutor 28 April 2014 09:03:29PM 3 points

I think this and the "finite resources therefore tradeoffs" argument both fail to take seriously the interconnectedness of the optimisation axes which we as humans care about.

They assume that every possible aspect of society is an independent slider which a sufficiently advanced AI can position at will, even though this society is still going to be made up of humans, will have to be brought about by or with the cooperation of humans and will take time to bring about. These all place constraints on what is possible because the laws of physics and human nature aren't infinitely malleable.

I don't think discreet but total control over a world is compatible with things like liberty, which seem like obvious qualities to specify in an optimal world we are building an AI to search for.

I think what we might be running in to here is less of an AI problem and more of a problem with the model of AI as an all-powerful genie capable of absolutely anything with no constraints whatsoever.

Comment author: Stuart_Armstrong 28 April 2014 09:32:58AM 0 points

If I only specify that I want low rates of abortion, for example,

You would get a world with no conception, or possibly with no humans at all.

Comment author: PhilosophyTutor 28 April 2014 11:21:16AM *  1 point

I don't think you have highlighted a fundamental problem since we can just specify that we mean a low percentage of conceptions being deliberately aborted in liberal societies where birth control and abortion are freely available to all at will.

My point, though, is that I don't think it is very plausible that "marketing worlds" will organically arise where there are no humans, or no conception, but which tick all the other boxes we might think to specify in our attempts to describe an ideal world. I don't see how there being no conception or no humans could possibly be a necessary trade-off with things like wealth, liberty, rationality, sustainability, education, happiness, the satisfaction of rational and well-informed preferences and so forth.

Of course a sufficiently God-like malevolent AI could presumably find some way of gaming any finite list we give it, since there are probably an unbounded number of ways of bringing about horrible worlds, so this isn't a problem with the idea of siren worlds. I just don't find the idea of market worlds very plausible because so many of the things we value are fundamentally interconnected.
