All of alanf's Comments + Replies

alanf00

If you read the quote carefully you will find that it is incompatible with the position you are attributing to Deutsch. For example, he writes about 

levels of universality between AI and ‘universal explainer/constructor’,

which would hardly be necessary if computational universality were equivalent to being a universal explainer.

2Charlie Steiner
That's a good point. It's still not clear to me that he's talking about precisely the same thing in both quotes. The point also remains that if you're not associating "understanding" with a class as broad as Turing-completeness, then you can construct things that humans can't understand, e.g. by hiding them in complex patterns, or by exploiting human blind spots.
1TAG
But that creates its own problem: there's no longer a strong reason to believe in Universal Explanation. We don't know that humans are universal explainers, because if there is something a human can't think of ... well, a human can't think of it! All we can do is notice confusion.
alanf10

The quotes aren't about Turing completeness. What you wrote is irrelevant to the quoted material.

2Charlie Steiner
I'm disagreeing with the notion, equivalent to taking Turing completeness as understanding-universality, that the human capacity for understanding is the capacity for universal computation.
alanf10

We can't simulate things of which we currently have no understanding. But if at some point in the future we know how to write AGIs, then we would be able to simulate them. And if we don't know how to write AGIs then they won't exist. So if we can write AGIs in the future then memory capacity and processor speed won't impose a limit on our understanding. Any such limit would have to come from some other factor. So is there such a limit and where would it come from?

2jimrandomh
I think you're missing what the goal of all this is. LessWrong contains a lot of reasoning and prediction about AIs that don't exist, with details not filled in, because we want to decide which AI research paths we should and shouldn't pursue, which AIs we should and shouldn't create, etc. This kind of strategic thinking must necessarily be forward-looking, and based on incomplete information, because if it weren't, it would be too late to be useful. So yes, after AGIs are already coded up and ready to run, we can learn things about their behavior by running them. This isn't in dispute; it's just not a solution to the questions we want to answer (on the timescales we need the answers).
alanf20

You said earlier:

But not all inductivists believe in a version of inductivism that supposedly generates theories or scientific knowledge.

What is the version of inductivism that generates no theories or scientific knowledge and what does it accomplish?

1TAG
There are many kinds of knowledge and learning that are useful but fall short of scientific knowledge. It is useful to any organism to learn from experience, and many can, even simple ones. There are many useful things learning algorithms can do. My cellphone has predictive text, which is based on learning; yours probably does too.
alanf20
But not all inductivists believe in a version of inductivism that supposedly generates theories or scientific knowledge.

That version of inductivism isn't in Li and Vitanyi, who haven't even stated the problem described by critics of inductivism. Where is it?

1TAG
"Bacon's method is an example of the application of inductive reasoning. However, Bacon's method of induction is much more complex than the essential inductive process of making generalizations from observations. Bacon's method begins with description of the requirements for making the careful, systematic observations necessary to produce quality facts. He then proceeds to use induction, the ability to generalize from a set of facts to one or more axioms. " WP. But what is the point? Not many people are Baconians nowadays.
alanf10

I have corrected the spelling of Vitanyi.

alanf00

Science is not based on faith, nor on anything else. Scientific knowledge is created by conjecture and criticism. See Chapter I of "Realism and the Aim of Science" by Karl Popper.