
skeptical_lurker comments on The Unique Games Conjecture and FAI: A Troubling Obstacle - Less Wrong Discussion

Post author: 27chaos 20 January 2015 09:46PM


Comments (25)


Comment author: skeptical_lurker 20 January 2015 10:02:18PM 7 points [-]

Thus, in trying to maximize human happiness, we are dealing with a problem that's essentially isomorphic to the UGC's coloring problem.

This strikes me as a very bold claim. Also, while I understand that some problems may have no solution, even approximately, except by brute force, such problems seem rare, given that humans do actually manage to optimise things in real life.
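For concreteness, the kind of instance the UGC concerns can be sketched as follows — a minimal, entirely hypothetical "unique games" instance, where each constraint fixes the label of one variable as a shift of another's, and brute force enumerates every labelling:

```python
from itertools import product

# Toy "unique games" instance (hypothetical example data): variables take
# labels in {0..k-1}; a constraint (u, v, shift) is satisfied when
# label[v] == (label[u] + shift) % k.
k = 3
num_vars = 4
constraints = [(0, 1, 1), (1, 2, 2), (2, 3, 1), (3, 0, 1), (0, 2, 0)]

def satisfied_fraction(labels):
    """Fraction of constraints satisfied by a given labelling."""
    hits = sum(labels[v] == (labels[u] + shift) % k
               for u, v, shift in constraints)
    return hits / len(constraints)

# Brute force over all k**num_vars labellings -- fine for 4 variables,
# hopeless at scale, which is exactly the regime the UGC is about.
best = max(satisfied_fraction(l)
           for l in product(range(k), repeat=num_vars))
# This instance is not fully satisfiable (the cycle's shifts sum to
# 2 mod 3), so the best labelling satisfies 4 of the 5 constraints.
```

The UGC's claim is about how hard it is to distinguish instances where almost all constraints can be satisfied from instances where almost none can, even approximately.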

Comment author: 27chaos 20 January 2015 11:30:13PM *  2 points [-]

Humans rarely optimize, or even approximate optimization. And I think we only get as close as we do to optimal because we have had many years of evolution honing our motivations. An AI would be created much more quickly and in a larger mindspace. Well-intentioned criteria can go awry, and often do. A moral example of this would be the Repugnant Conclusion, and a logical example would be Newcomb's Paradox.

In those cases, we look at the anticipated outcomes of certain actions and judge those outcomes to be flawed, and so we can revisit our assumptions and tweak our criteria. But if the UGC is true, this tactic will not work in complex cases where there are many constraints. We have nothing to rely on to avoid subtler but equally suboptimal outcomes except our own intuitions about satisficing criteria, even though these intuitions are very weak. Simply hoping that no such subtle difficulties exist seems like a bad plan to me.

Comment author: skeptical_lurker 21 January 2015 10:16:24PM *  1 point [-]

I don't think the difficulties in adapting our moral intuitions into a self-consistent formal system (e.g. the Repugnant Conclusion) are a problem of insufficient optimisation power per se; it's more the case that there are multiple systems within the brain (morality, intuitions, logic) and these are not operating in perfect harmony. This doesn't mean that each individual system isn't working OK in its own way.

Surely designing an engine, or writing a novel, or (... you get the gist) are complex optimisation problems with many constraints, and yet it seems that humans can at least approximately solve these problems far faster than brute-force trial and error.

AFAICT the UGC was saying that there exist insoluble problems, not that these problems are actually common. It seems to me like Gödel's incompleteness theorem: there are statements which are true but which cannot be proved, but this doesn't mean that no statement can be proved, or that mathematics is pointless. At the end of the day, regardless of whether or not the fundamental underpinnings of mathematics are on shaky ground, or whether there are unprovable theorems and unsolvable problems, the actual mathematics that allows us to build aeroplanes works, and the planes fly.