What do you think? There might be a theoretical limitation to how much data an AI could collect without influencing the data itself, making its predictions redundant. Would this negate the idea of a 'God' AI and cause it to make suboptimal choices even with near-limitless processing power?
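A toy sketch of the worry, for concreteness: a forecaster whose published prediction feeds back into the quantity it is trying to predict. The feedback rules and numbers below are invented purely for illustration; in the second case no self-consistent forecast exists at all, no matter how much processing power the forecaster has.

```python
# Toy illustration (invented for this post): a forecast that influences
# the thing being forecast.

def reactive_outcome(baseline, forecast, feedback=0.5):
    """People partially react against whatever forecast is published."""
    return baseline - feedback * forecast

def fixed_point_forecast(baseline, feedback=0.5):
    """Solve f = baseline - feedback * f, i.e. a self-consistent forecast."""
    return baseline / (1.0 + feedback)

baseline = 100.0
naive = baseline                              # ignores its own influence
consistent = fixed_point_forecast(baseline)   # accounts for it

print("naive:       forecast", naive,
      "-> outcome", reactive_outcome(baseline, naive))
print("fixed point: forecast", round(consistent, 2),
      "-> outcome", round(reactive_outcome(baseline, consistent), 2))

# A self-defeating case with no consistent forecast at all:
# predict a crash and everyone sells early, so no crash happens;
# predict no crash and the crash happens.
def crash_happens(forecast_crash):
    return not forecast_crash

for forecast in (True, False):
    print("forecast crash =", forecast,
          "-> crash happens =", crash_happens(forecast),
          "-> forecast correct =", forecast == crash_happens(forecast))
```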


Please don't just have an idea that would be cool and interesting if true, and post it to discussion. The ideas worth knowing about have at least some sort of argument that lends them plausibility.

+1

I have a theory that humans have a "good idea" detector - a black box they feed ideas into and get back "excellent!" or not. I surmise that this operates largely on its own, hence the effect where you have a brilliant idea in the middle of the night, write it down and wake up in the morning to realise it's rubbish.

Though I also think you can tune your brilliant idea detector, which may count against this theory.

But anyway, this theory suggests that the way to use its output is not merely to have the idea and find it appealing, but to then think about what it would imply and not imply. This will make for better discussion posts, particularly with a tough audience like this.

So, Demented - what would your idea imply and what would it not imply?

Thank you - but that's the wrong question to ask first! The first question is "why on Earth would you think that? What brings this hypothesis to your attention?"

Ah, I disagree - that's only the first question to ask if the person is used to having ideas. My human-simulator tells me that posts like this come from people who aren't used to having ideas - certainly not enough to throw them away - so they are enormously taken with any they do come up with (cf. Draco in HPMOR ch. 78), and they aren't used to the idea of robustness-testing their ideas at all: they're still in the mode of thinking of them as aesthetic creations, and will not let go of them easily. Once they've got used to coming up with a few, they will then benefit from seeing what presumptions the ideas are coming from.

My aim here is to get them used to thinking of themselves as someone who can come up with ideas, rather than thinking that's a job for other people. (Noting there are cases where one really wishes they would leave coming up with ideas to other people, and that encouraging the young always has its moments of gritting one's teeth at the same beginner's stupidity yet again.) Once they're spewing out terrible beginner's ideas in disposable quantities, that's the time to start some stringent culling mechanisms.

Again, the above is out of my human simulator, but I've spent some time on the difficult task of encouraging people to think of themselves as the sort of people who can do something rather than just the consumers of others' output, so I have slight experience of practical encouragement.

In general, distrust analogies to mathematical and physical principles. Slightly more specifically, distrust analogies to Heisenberg's Uncertainty Principle and Gödel's Incompleteness Theorem.

If you're really interested, I have a vague recollection of a paper by David Wolpert that feels similar. See his vita, linked below; the physics and computation section should be of particular interest.

http://www.santafe.edu/media/staff_cvs/3cv.complete.fall.2010.pdf

Interesting fact in his vita:

Top two winners of 2009 Netflix competition made extensive use of my patented Stacked Generalization technique.
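For anyone unfamiliar with the term: stacked generalization ("stacking") trains a second-level model on the out-of-fold predictions of several base models. The following is only a minimal sketch of the general idea, not Wolpert's own code; the dataset and model choices are arbitrary.

```python
# Sketch of stacked generalization: base models' out-of-fold predictions
# become the inputs to a second-level (combiner) model.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import Ridge
from sklearn.tree import DecisionTreeRegressor
from sklearn.model_selection import cross_val_predict, train_test_split

X, y = make_regression(n_samples=500, n_features=10, noise=10.0, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

base_models = [Ridge(alpha=1.0), DecisionTreeRegressor(max_depth=4, random_state=0)]

# Level 0: out-of-fold predictions on the training set (avoids leakage).
meta_features = np.column_stack([
    cross_val_predict(m, X_train, y_train, cv=5) for m in base_models
])

# Level 1: a simple combiner trained on those predictions.
combiner = Ridge(alpha=1.0).fit(meta_features, y_train)

# At test time, base models are refit on all training data, then combined.
test_features = np.column_stack([
    m.fit(X_train, y_train).predict(X_test) for m in base_models
])
print("stacked R^2:", round(combiner.score(test_features, y_test), 3))
```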


Such a limit might exist. Are you going to hit it at any non-god level of optimization power? No.