Comment author: andreas 02 February 2013 05:42:44AM 38 points

"I design a cell to not fail and then assume it will and then ask the next 'what-if' questions," Sinnett said. "And then I design the batteries that if there is a failure of one cell it won't propagate to another. And then I assume that I am wrong and that it will propagate to another and then I design the enclosure and the redundancy of the equipment to assume that all the cells are involved and the airplane needs to be able to play through that."

Mike Sinnett, Boeing's 787 chief project engineer

Comment author: andreas 17 January 2012 03:51:18AM 3 points

The game theory textbook "A Course in Microeconomic Theory" (Kreps) addresses this situation. Quoting from page 516:

We will give an exact analysis of this problem momentarily (in smaller type), but you should have no difficulty seeing the basic trade-off; too little punishment, triggered only rarely, will give your opponent the incentive to try to get away with the noncooperative strategy. You have to punish often enough and harshly enough so that your opponent is motivated to play [cooperate] instead of [defect]. But the more often/more harsh is the punishment, the less are the gains from cooperation. And even if you punish in a fashion that leads you to know that your opponent is (in her own interests) choosing [cooperate] every time (except when she is punishing), you will have to "punish" in some instances to keep your opponent honest.
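The trade-off Kreps describes can be sketched numerically. The following is a minimal simulation with hypothetical numbers, not taken from the textbook: stage payoffs of 3 for mutual cooperation and 1 for mutual defection, and observation noise that occasionally misreads a cooperative move as a defection, so punishment fires even against an honest opponent.

```python
import random

def average_payoff(punish_len, noise, rounds=20000, seed=0):
    """Both players intend to cooperate, but each round a cooperative move
    is misread as a defection with probability `noise`; any perceived
    defection triggers `punish_len` rounds of mutual defection
    (payoffs: C/C -> 3, D/D -> 1)."""
    rng = random.Random(seed)
    total, punishing = 0, 0
    for _ in range(rounds):
        if punishing:
            total += 1            # mutual defection during the punishment phase
            punishing -= 1
        else:
            total += 3            # mutual cooperation
            if rng.random() < noise:
                punishing = punish_len
    return total / rounds

# Harsher punishment deters defection, but wastes more of the gains from
# cooperation every time noise triggers it.
assert average_payoff(10, 0.05) < average_payoff(2, 0.05)
```

With 5% noise, short punishment averages about 2.8 per round and long punishment about 2.3, which is the "more harsh punishment, less gains from cooperation" side of the trade-off; the deterrence side would require modeling the opponent's incentive to defect, which this sketch omits.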

Comment author: lukeprog 24 April 2011 01:22:39AM 9 points

Hmmmm. What do other people think of this idea?

I suspect one reason Eliezer did not do this is that when you make a long list of claims without any justification for them, it sounds silly and people don't pay attention to the rest of the sequence. But if you had first stepped them through the entire argument, they would have found no place at which they could really disagree. That's a concern, anyway.

Comment author: andreas 24 April 2011 01:31:29AM 5 points

I am more motivated to read the rest of your sequence if the summary sounds silly than if I can easily see the arguments myself.

Comment author: andreas 24 April 2011 01:13:32AM 9 points

Back when Eliezer was writing his metaethics sequence, it would have been great to know where he was going, i.e., if he had posted ahead of time a one-paragraph technical summary of the position he set out to explain. Can you post such a summary of your position now?

Comment author: andreas 12 March 2011 09:29:18PM 3 points

Now, citing axioms and theorems to justify a step in a proof is not a mere social convention to make mathematicians happy. It is a useful constraint on your cognition, allowing you to make only inferences that are actually valid.

When you are trying to build up a new argument, temporarily accepting steps of uncertain correctness can be helpful (if mentally tagged as such). This strategy can move you out of local optima by prompting you to think about what further assumptions would be required to make the steps correct.

Techniques based on this kind of reasoning are used in the simulation of physical systems and in machine inference more generally (tempering). Instead of exploring the state space of a system using the temperature you are actually interested in, which permits only very particular moves between states ("provably correct reasoning steps"), you explore using a higher temperature that makes it easier to move between different states ("arguments"). Afterwards, you check how probable the state is that you moved to when evaluated using the original temperature.
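The annealing-style move described here can be sketched as a toy Metropolis sampler. This is a hypothetical illustration (the double-well energy function and all numbers are made up for the example): the chain explores at a high temperature, where "uncertain" uphill moves are accepted readily, and the state it reaches is then scored at the original, colder temperature.

```python
import math
import random

def energy(x):
    # double-well target: minima at x = -1 and x = +1, barrier at x = 0
    return 10 * (x * x - 1) ** 2

def metropolis_step(x, temp, rng):
    """One Metropolis move: at higher temperatures, uphill ('uncertain')
    moves are accepted more readily, making it easier to cross barriers."""
    proposal = x + rng.gauss(0, 0.5)
    accept_prob = math.exp(min(0.0, -(energy(proposal) - energy(x)) / temp))
    return proposal if rng.random() < accept_prob else x

rng = random.Random(1)
x, visited = -1.0, []
for _ in range(5000):                      # explore at a high temperature
    x = metropolis_step(x, temp=5.0, rng=rng)
    visited.append(x)
crossed = any(v > 0.5 for v in visited)    # the hot chain reaches the other mode
# afterwards, score the final state at the cold (original) temperature
cold_weight = math.exp(-energy(visited[-1]) / 0.1)
```

At temperature 0.1 the chain would almost never cross the barrier at x = 0; at temperature 5.0 it hops between the two modes, and the cold-temperature weight tells you how good the state it landed on actually is.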

Comment author: lukeprog 23 January 2011 05:05:37AM 8 points

The web usability guru is Jakob Nielsen.

One possible solution would be a button present on every page that would toggle hyperlinks. If pressed, all hyperlinks would disappear. If pressed again, hyperlinks would come back. A 'reading mode' toggle.

Comment author: andreas 24 January 2011 03:14:01AM 5 points

As you wish: Drag the link on this page to your browser's bookmark bar. Clicking it on any page will turn all links black and remove the underlines, making links distinguishable from black plain text only through changes in mouse pointer style. Click again to get the original style back.

Comment author: cousin_it 30 November 2010 09:38:33AM 16 points

The formalist school of math philosophy thinks that meaningful questions have to be phrased in terms of finite computational processes. But if you try to write a program for determining the truth value of "this statement is false", you'll see it recurses and never terminates:

def f():
    return not f()

See also Kleene-Rosser paradox. This may or may not dissolve the original question for you, but it works for me.
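Running the definition confirms this; in CPython the doomed recursion surfaces as a RecursionError rather than literally spinning forever:

```python
def f():
    return not f()

try:
    f()
    outcome = "terminated"
except RecursionError:
    # the recursion limit stands in for "never terminates":
    # no truth value is ever computed
    outcome = "recursion limit hit"
```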

There's more to be said about the paradox because it keeps turning up in many contexts. For example, see Terry Tao's posts about "no self-defeating object". Also note that if we replace "truth" with "provability", the liar's paradox turns into Gödel's first incompleteness theorem, and Curry's paradox turns into Löb's theorem.

ETA: see also Abram Demski's explanation of Kripke's fixed point theory here on LW, if that's your cup of tea.

Comment author: andreas 30 November 2010 09:47:26AM 1 point

See also: A Universal Approach to Self-Referential Paradoxes, Incompleteness and Fixed Points, which treats the Liar's paradox as an instance of a generalization of Cantor's theorem (no onto mapping from N->2^N).

The best part of this unified scheme is that it shows that there are really no paradoxes. There are limitations. Paradoxes are ways of showing that if you permit one to violate a limitation, then you will get an inconsistent system. The Liar paradox shows that if you permit natural language to talk about its own truthfulness (as it - of course - does) then we will have inconsistencies in natural languages.
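The diagonal argument behind that generalization of Cantor's theorem is short enough to run. Here is a toy finite version (a hypothetical enumeration standing in for the N -> 2^N case):

```python
def diagonal(f, n):
    """Cantor's diagonal set for a map f from {0,...,n-1} to subsets:
    k is in D exactly when k is not in f(k), so D differs from f(k) at k."""
    return {k for k in range(n) if k not in f(k)}

# any purported enumeration of subsets of {0, 1, 2}
f = {0: {0}, 1: {0, 1}, 2: set()}.get
D = diagonal(f, 3)
# D is missed by the enumeration: it disagrees with every f(k) at position k
assert all(D != f(k) for k in range(3))
```

The Liar is the same construction with "k is in f(k)" replaced by "this sentence is true": the diagonal flip guarantees there is no consistent value to assign.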

In response to comment by [deleted] on Rationality is Not an Attractive Tribe
Comment author: XiXiDu 26 November 2010 08:38:20PM 0 points

I like that attitude. It is also not irrational, because you are aware of it and deliberately choose to be that way. I believe that Less Wrong features way too much ought. I don't disagree with the consensus on cryonics at all, yet I'm not getting a contract because I'm too lazy and I like being lazy. My usual credo is that I can't lose as long as I don't leave my way. That doesn't mean I am stubborn; I allow myself to alter my way situationally.

Rationality is about winning, and what constitutes winning is purely subjective. If you don't care whether the universe is tiled with paperclips rather than filled with apes having sex under the stars, that is completely rational as long as you are aware of what exactly you care or don't care about.

Comment author: andreas 27 November 2010 06:33:59AM 1 point

Do you think that your beliefs regarding what you care about could be mistaken? That you might tell yourself that you care more about being lazy than about getting cryonics done, but that in fact, under reflection, you would prefer to get the contract?

Comment author: fiddlemath 06 November 2010 10:27:43PM 2 points

I'm working on my Ph.D. in program verification. Every problem we're trying to solve is as hard as the halting problem, and so we make the assumption, essentially, that we're operating over real programs: programs that humans are likely to write, and actually want to run. It's the only way we can get any purchase on the problem.

Trouble is, the field doesn't have any recognizable standard for what makes a program "human-writable", so we don't talk much about that assumption. We should really get a formal model, so we have some basis for expecting that a particular formal method will work well before we implement it... but that would be harder to publish, so no one in academia is likely to do it.

Comment author: andreas 07 November 2010 12:07:22AM 2 points

Similarly, inference (conditioning) is incomputable in general, even if your prior is computable. However, if you assume that observations are corrupted by independent, absolutely continuous noise, conditioning becomes computable.
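A sketch of why the noise helps, with hypothetical numbers and likelihood weighting standing in for exact conditioning: because the noise has a density, every hypothesis gets a computable, strictly positive weight, and conditioning reduces to weighted averaging.

```python
import math
import random

def gauss_pdf(y, mu, sigma):
    # density of N(mu, sigma^2) at y -- the absolutely continuous noise model
    return math.exp(-0.5 * ((y - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

rng = random.Random(0)
# prior: x is 0 or 1 with equal probability; we observe y = x + N(0, 0.5) noise
y_obs = 0.9
samples = [rng.choice([0, 1]) for _ in range(10000)]
weights = [gauss_pdf(y_obs, x, 0.5) for x in samples]
posterior_x1 = sum(w for x, w in zip(samples, weights) if x == 1) / sum(weights)
# observing y near 1 should favor the hypothesis x = 1
assert posterior_x1 > 0.5
```

Without the noise model, conditioning on the exact event y = 0.9 would require deciding equality of real-valued quantities, which is where computability breaks down in general.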

Comment author: [deleted] 30 October 2010 01:36:09PM 4 points

Fair enough -- this is my caution against the logic "I can think of a risk, therefore we need to worry about it!" It seems that SIAI is making the stronger claim that unfriendliness is very likely.

My personal view is that AI is very hard itself, and that working on, say, a computer that can do what a mouse can do is likely to take a long time, and is harmless but very interesting research. I don't think we're anywhere near a point when we need to shut down anybody's current research.

Comment author: andreas 30 October 2010 07:53:08PM 4 points

Consider marginal utility. Many people are working on AI, machine learning, computational psychology, and related fields. Nobody is working on preference theory, formal understanding of our goals under reflection. If you want to do interesting research and if you have the background to advance either of those fields, do you think the world will be better off with you on the one side or on the other?
