Comment author: Vladimir_M 02 December 2011 01:58:59AM *  26 points [-]

"Genius is 1 percent inspiration, 99 percent perspiration," said Thomas Edison, and he should've known: It took him hundreds of tweaks to get his incandescent light bulb to work well, and he was already building on the work of 22 earlier inventors of incandescent lights.

On the other hand, Nikola Tesla had this to say about Edison's methodology:

If Edison had a needle to find in a haystack, he would proceed at once with the diligence of the bee to examine straw after straw until he found the object of his search. [...] His method was inefficient in the extreme, for an immense ground had to be covered to get anything at all unless blind chance intervened... [...] I was almost a sorry witness of such doings, knowing that a little theory and calculation would have saved him ninety per cent of his labor.

Even allowing for a significant bias against Edison on Tesla's part, it does seem that, among high achievers, Edison relied on perspiration to an extraordinary degree. Of course, even that diligence wouldn't have been of much use if it hadn't come together with a very considerable talent.

More generally, there are two problems with the general message of this article:

  1. It is delusional for most people to believe that they can contribute usefully to really hard problems. (Except in trivial ways, like helping those who are capable of it with mundane tasks in order to free up more of their time and energy.) There is such a thing as innate talent, and doing useful work on some things requires an extraordinary degree of it.

  2. There is also a nasty failure mode for organized scientific effort when manpower and money are thrown at problems that seem impossibly hard, hoping that "hacking away at the edges" will eventually lead to major breakthroughs. Instead of progress, or even an honest pessimistic assessment of the situation, this may easily create perverse incentives for cargo-cult work that will turn the entire field into a vast heap of nonsense.

Comment author: alexflint 17 January 2012 06:58:22PM 8 points [-]

It is delusional for most people to believe that they can contribute usefully to really hard problems.

This seems more and more like the most damaging meme ever created on LessWrong. It persistently leads to people who could have made useful contributions (to AI safety) making no such contribution. Would it be a better world in which lots more people tried to contribute usefully to FAI and a small percentage succeeded? Yes, it would, even taking into account whatever cost the unsuccessful people pay.

Comment author: alexflint 04 January 2012 07:52:41AM 1 point [-]

Q: What is Baconmas?

Baconmas is a relatively new holiday, celebrated on January 22nd (the birthday of Sir Francis Bacon) to celebrate the sciences, with a side order of bacon. You should try it!

That is excellent! Simple, light-hearted, and to the point.

Comment author: soreff 22 December 2011 10:59:46PM 15 points [-]

Ow Ow Ow Ow

Weirdly enough, there is one prediction that looks like it panned out:

Repairing dental defects will also be revolutionized by the introduction of good, tough, and reliable polymers which will replace metallic amalgams. By the late 1990’s to early 2000’s biocompatible ceramics and coated polymers will be available that will allow for workable single tooth and multitooth gum-implanted prostheses.

It would have to be in the single least life-critical area.

A lot of those areas turned out to be intrinsically harder than anyone expected. Oncology, Alzheimer's...

One thing that I just cannot understand: We had semi-workable artificial hearts 30 years ago. Now, yes, it is hard to make surfaces biocompatible. Still, that has been accomplished in many cases. As a society, we are reasonably good at mechanical engineering. How come a quarter of us still lose our lives to the failure of a pump? We hear all the time about global warming, and sustainable this and recyclable that, and sometimes about what NASA might do. Prioritizing any of those things ahead of a decent permanent artificial heart is crazy.

Comment author: alexflint 24 December 2011 11:35:50PM *  3 points [-]

I have three such ceramic implants. I remember having them put in over a simple half-hour operation, being awed by the amazing advances that medicine had made to allow me to carry on my life as if I hadn't knocked my teeth out at all. Little did I know that this was one of the only success stories of the last decade of medicine!

Comment author: Eugine_Nier 24 December 2011 06:24:30PM *  9 points [-]

Counterexample: the computer industry.

Comment author: alexflint 24 December 2011 11:31:34PM 4 points [-]

That's pretty much the only counterexample, though.

Comment author: alexflint 21 December 2011 11:07:57PM 0 points [-]

Fascinating case Yvain, thank you for writing this up.

So, what does Simon Browne add to the p-zombie debate?

Perhaps this case provides additional evidence against the existence of (true) p-zombies. If a physical alteration to our brain can remove our experience of qualia, then this again suggests that qualia are just a by-product of a particular mental circuit.

Comment author: cousin_it 08 December 2011 03:48:13PM *  12 points [-]

Your post doesn't seem to have anything to do with P vs NP, it's just a statement of indignation at Gödel's incompleteness theorems :-)

Here's a simplified example. Imagine you have an axiomatic "model of reality" that describes computers. Then I can write a computer program that successively generates all valid proofs in that axiom system. If the program stumbles upon a valid proof that 1=0, it halts, otherwise it goes on searching. It seems to be a "prima facie substantive question" whether the program will halt or not, but our axiom system cannot settle it because it cannot prove its own consistency by Gödel's second incompleteness theorem, unless it is inconsistent.
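The program cousin_it describes can be sketched concretely. This is a minimal, hypothetical sketch: `proves_contradiction` is a stand-in for a real proof checker for the axiom system (writing one is the hard part), and the search is capped so the sketch terminates, whereas the real program would run forever unless it found a proof of 1=0.

```python
from itertools import count, product

ALPHABET = "01"  # stand-in for the symbols of the proof language

def candidate_proofs():
    """Yield every finite string over ALPHABET, shortest first, so
    every possible proof is eventually generated."""
    for n in count(1):
        for chars in product(ALPHABET, repeat=n):
            yield "".join(chars)

def proves_contradiction(proof):
    """Placeholder for a real proof checker: would return True iff
    `proof` is a valid derivation of 1=0 in the axiom system."""
    return False

def search_for_inconsistency(max_candidates):
    """The real program loops forever; here we cap the search so the
    sketch terminates."""
    for i, proof in enumerate(candidate_proofs()):
        if i >= max_candidates:
            return None  # gave up; the real program would keep searching
        if proves_contradiction(proof):
            return proof  # halt: the system proved its own inconsistency
    return None
```

Whether the uncapped version of this loop halts is exactly the "prima facie substantive question" that the axiom system itself cannot settle.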

The root of the problem, IMO, is our inability to describe integers. By the Löwenheim–Skolem theorem, you cannot have an axiomatization of the integers that rules out nonstandard integers. I wrote several posts about this.

Comment author: alexflint 09 December 2011 04:23:43PM 0 points [-]

A different perspective: Gödel doesn't say that there is any particular question about reality that we cannot answer, only that however far into the model-building enterprise we get, there will always be some undecidable propositions, which can be translated into questions about reality with the TM-enumerating-sentences experiment. So if we have a model of reality M and it fails to answer a question about reality Q, then there's always hope that we could discover further regularities in reality to amend M so that it answers Q, but there is no hope that we would ever be free of any open questions. Am I correct in thinking that this rules out the possibility of a GUT, at least if a GUT is defined as a model that answers all questions?

Comment author: JoshuaZ 09 December 2011 12:43:19AM *  8 points [-]

Now suppose that R was shown to be formally independent of ZFC in the sense that for some axiom A0, ZFC+A0 implies P=NP and ZFC+~A0 implies P!=NP. This would resolve the mathematical question of P versus NP

This is only true if A0 is independent of ZFC. This framing makes things unnecessarily complicated and obscures how one would usually prove that something is independent. There are a variety of methods of showing that something is independent, but the most common is to construct two models of the theory, one satisfying the statement and the other its negation. If both models are contained in your original system, then you know that, as long as your original system is consistent, your desired statement is independent. A more concrete example that avoids a lot of the subtleties and abstractions is what happened with the parallel postulate in the 19th century: by constructing other geometries (such as geometry on the surface of a sphere), one could carry out exactly this process.

All this adds up to: The P versus NP problem (and questions like it that can be phrased as definitive questions about reality) must have an answer unless our model of reality is incomplete

I think you may be confusing reality with our models here. Consider for example the possibility that our universe is actually discrete and finite. If that's the case, then a decent model won't answer whether P != NP or not in the abstract sense.

In general, when a specific question is being asked it helps to try a less abstract version and see if anything changes. In this context, what do you think happens if we replace P != NP with some more concrete question? Say, for example, I want to know whether 3^^^^^^3 + 1 has an even or odd number of prime factors. This is at least more concrete in that you can specify a specific computation which, if you could carry it out, would answer the question. I don't know of any easy way to answer this sort of question, and it looks really difficult. It may well be that this question is simply unresolvable in our universe because the computational resources to answer it don't exist. But from the perspective of something like ZFC this question is trivial. This suggests to me that there are subtle issues going on here that you aren't quite addressing.

P != NP is a particularly tricky question because there are so many options for what could happen that are logically consistent but seem weird (e.g. there's an algorithm that solves 3-SAT in polynomial time but this can't be proven in ZFC. Or the algorithm's correctness can be proven but not a polynomial bound on its run time. Or the run time can be proven but not the correctness of the algorithm. Etc.)
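The "specific computation" in question is entirely concrete for small inputs. A minimal sketch of it, counting prime factors with multiplicity (the number-theoretic function usually written Ω(n)) by trial division; 3^^^^^^3 + 1 itself is, of course, astronomically beyond any such computation:

```python
def num_prime_factors(n):
    """Count prime factors with multiplicity (Omega(n)) by trial
    division: repeatedly divide out each factor d, then account for
    any prime remainder larger than sqrt(original n)."""
    total = 0
    d = 2
    while d * d <= n:
        while n % d == 0:
            n //= d
            total += 1
        d += 1
    if n > 1:
        total += 1  # remaining n is itself prime
    return total
```

For example, 1025 = 5 * 5 * 41 has three prime factors, so its parity is odd. The point stands: the question is decided by a finite computation in principle, even if the resources to run it for 3^^^^^^3 + 1 don't exist in our universe.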

How issues like undecidability and our modeling of reality interact is a really tough question. It isn't helpful to jump in using an example that is itself really abstract.

All of that said, there's an overview by Scott Aaronson on whether P != NP is undecidable that is worth reading (pdf). He also discusses towards the end some of the issues you are touching on.

Comment author: alexflint 09 December 2011 02:48:11PM *  0 points [-]

I think you may be confusing reality with our models here.

Yeah, my claim was a little ambiguous. I meant to claim that either (1) our current model of reality fails to describe some truths about the universe or (2) P=NP is decidable in our model. [I'm only clarifying the claim; I'm now dubious about whether it is true.] You're right: I should add (3) P=NP cannot be cast as a question about reality.

Comment author: paulfchristiano 09 December 2011 02:33:32AM *  0 points [-]

Then the first step is that A asks what happens if its next output is (say) 0. To do that it needs to run H to produce the next bit of output. But running H involves running a simulation of A, and inside that simulation the exact same situation arises, namely that sim(A) considers various outputs that it might make and runs simulations of the world, resulting in another level of recursion to sim(sim(A)), and so on in an infinite loop.

This seems to be the observation that you can't have a Turing machine that implements AIXI. An approximate AIXI is not going to be able to simulate itself.

Is it possible to make Model 2 just slightly simpler by somehow leveraging the "free" information on the output tape?

I don't think this is possible, although it is an interesting thought. The main issue is that before you get to leverage the first N bits of AIXI's output you have to also explain the first N bits of AIXI's input, which seems basically guaranteed to wash out the complexity gains (because all of the info in the first N bits of AIXI's output was coming from the first N bits of AIXI's input).

Comment author: alexflint 09 December 2011 09:36:10AM 1 point [-]

This seems to be the observation that you can't have a Turing machine that implements AIXI. An approximate AIXI is not going to be able to simulate itself.

Yes, I guess you're right. But doesn't this also mean that no computable approximation of AIXI will ever hypothesize a world that contains a model of itself? For if it did, it would go into the infinite loop I described. So it seems the problem of Model 2 will never come up?

The main issue is that before you get to leverage the first N bits of AIXI's output you have to also explain the first N bits of AIXI's input

Not sure I'm understanding you correctly but this seems wrong. AIXI conditions on all its outputs so far, right? So if the world is a bit-repeater then one valid model of the world is literally a bit repeater, which explains the inputs but not the outputs.
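The bit-repeater case can be made explicit. A toy sketch (the name `bit_repeater_model` is illustrative, not anything from the AIXI literature): a world model here is just a program mapping the agent's past outputs to the inputs the environment sends back, so for a bit-repeating world the model echoes the outputs and explains the input tape without containing any model of the agent.

```python
def bit_repeater_model(output_history):
    """A candidate world model in AIXI's sense: a program mapping the
    agent's past output bits to the environment's input bits. For a
    bit-repeating world it simply echoes each output back, explaining
    the inputs while treating the outputs as given (conditioned on),
    with no model of the agent inside."""
    return list(output_history)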

Comment author: alexflint 09 December 2011 02:19:51AM *  0 points [-]

Voted up for being an insightful observation.

I think the core issue arises when A locates a model of the world that includes a model of A itself, thus explaining away the apparent correlation between the input and output tapes. I don't have a watertight objection to your argument, but I'm also not convinced that it goes through so easily.

Let's stick to the case where A is just a perfectly ordinary Turing-machine approximation of AIXI. It seems to me that it's still going to have quite some difficulty reasoning about its own behaviour. In particular, suppose A locates a hypothesis H="the world consists of a <model of A> connected to a <model of world> and my outputs are irrelevant". Then the first step is that A asks what happens if its next output is (say) 0. To do that it needs to run H to produce the next bit that it expects to receive from the world. But running H involves running a simulation of A, and inside that simulation the exact same situation arises, namely that sim(A) considers various outputs that it might make and then runs simulations of its inferred model of the world, which themselves contain models of A, resulting in another level of recursion to sim(sim(A)), and so on in an infinite loop. Actually, I don't know what AIXI does about Turing machines that fail to produce output...
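The regress described above can be caricatured in a few lines. This is purely illustrative (the function names are invented for the sketch): evaluating an action runs the hypothesis H, running H runs a copy of the agent, and so on; a depth limit stands in for the fact that the real recursion never bottoms out.

```python
def agent_decide(depth, depth_limit):
    """To evaluate an action, the agent runs its hypothesis H of the
    world; but H contains a model of the agent, so the simulation
    recurses: A -> sim(A) -> sim(sim(A)) -> ..."""
    if depth > depth_limit:
        raise RecursionError("simulation never bottoms out")
    return world_hypothesis(depth, depth_limit)

def world_hypothesis(depth, depth_limit):
    # H = "the world contains a copy of A"; running H runs the agent.
    return agent_decide(depth + 1, depth_limit)
```

Calling `agent_decide(0, 50)` hits the depth limit rather than ever returning a predicted bit, which is the point: H never yields the next input bit A was asking for.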

A different, perhaps weaker objection is that AIXI conditions on its outputs when performing inference, so they don't count towards the "burden of explanation". That doesn't resolve the issue you raise but perhaps this does: Is it possible to make Model 2 just slightly simpler by somehow leveraging the "free" information on the output tape? Perhaps by removing some description of some initial conditions from Model 2 and replacing that with a function of the information on the output tape. It's not clear that this is always possible but it seems plausible to me.

Comment author: cousin_it 08 December 2011 03:48:13PM *  12 points [-]

Your post doesn't seem to have anything to do with P vs NP, it's just a statement of indignation at Gödel's incompleteness theorems :-)

Here's a simplified example. Imagine you have an axiomatic "model of reality" that describes computers. Then I can write a computer program that successively generates all valid proofs in that axiom system. If the program stumbles upon a valid proof that 1=0, it halts, otherwise it goes on searching. It seems to be a "prima facie substantive question" whether the program will halt or not, but our axiom system cannot settle it because it cannot prove its own consistency by Gödel's second incompleteness theorem, unless it is inconsistent.

The root of the problem, IMO, is our inability to describe integers. By the Löwenheim–Skolem theorem, you cannot have an axiomatization of the integers that rules out nonstandard integers. I wrote several posts about this.

Comment author: alexflint 09 December 2011 12:36:35AM 2 points [-]

Good point; evidently I failed to really internalize Gödel. I had dismissed Gödel sentences as not being questions about reality, but your example is compelling.

Interestingly, your post on integers seemed to suggest you were also thinking that since our models of integers fail to live up to expectations we've somehow failed to describe them, but that it might yet be possible to do so.
