by curi
5th Nov 2017


> Knowledge is information error-corrected (adapted) to a purpose (problem).

No. Knowledge is just information. If you have some information about how to solve a particular problem, it's still "just information".

> There are no hard and fast rules about how error-corrected or to what

Those rules are just some information, some data. How "hard and fast" are they? Even when there is perfect data about the fastest checking algorithm, it's still "just data".

> The field started coding too early and is largely wasting its time.

Perhaps. But how do you know what people know, and who is already coding prematurely?

> If you joined the field, I would recommend you do not code stuff.

I wouldn't give such advice to everybody. I don't know what some people might know. Let them code, if they wish to.

> Certain philosophy progress is needed before coding.

I agree that you need some philosophy progress, but you don't know whether all the others need it too. At least some of them may know things completely unknown to you or to me.

> good non-AGI work (e.g. alpha go zero, watson)

Isn't their coding premature as well?

> which they hope will somehow generalize to AGI (it won't, though some techniques may turn out to be useful due to being good work and having reach)

I am not as sure as you are. They hope they will achieve something; you hope they will not. That's all.

> wasting their time

Maybe you are a time waster yourself, Mr. Temple. Your claim that "coding AGI" is premature is just a guess. It's always possible that one is wrong, but saying "you people don't have the right theory, stop coding" ... is super-wrong. You don't know that. Nobody knows what somebody else might already know.

> people are super focused on predictions but not explanations.

A good prediction can only be made if you have a good theory/model of the mechanisms involved, so every decent predictor models anyway. The best possible predictor has a correct model, which doesn't always imply that its predictions are right; sometimes there isn't enough data for that, even in principle. But to predict is to model!
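
To make the "to predict is to model" point concrete, here is a minimal sketch in Python. The fair-coin example, and the `CoinModel` and `accuracy` names, are my own illustration, not anything from the post under discussion. It shows that even the correct model of a fair coin, which is the best possible predictor of it, is still wrong about individual flips about half the time, because the data needed to do better doesn't exist even in principle:

```python
import random

class CoinModel:
    """The correct model of a fair coin: P(heads) = 0.5."""

    def prob_heads(self) -> float:
        return 0.5

    def predict(self) -> str:
        # The predictor just queries the model; its best single-flip
        # guess is an arbitrary 50/50 call. No better predictor of
        # this process exists.
        return "heads" if self.prob_heads() >= 0.5 else "tails"

def accuracy(model: CoinModel, flips: int = 10_000) -> float:
    """Fraction of individual flips the model predicts correctly."""
    hits = sum(
        model.predict() == random.choice(["heads", "tails"])
        for _ in range(flips)
    )
    return hits / flips

if __name__ == "__main__":
    # Hovers around 0.5: a correct model, hence the best predictor,
    # yet its single-flip predictions are often wrong.
    print(f"single-flip accuracy: {accuracy(CoinModel()):.3f}")
```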

> some even deny there are non-empirical fields like philosophy

Some are dirty bastards too, and some have friends in low places and aunts in Australia. But you seem to imply that all should share your view about "non-empirical fields like philosophy". Yeah, right.


That has been enough. At least that last remark of mine was already unnecessary.

Your post reads to me as unfriendly, uncurious, and not really trying to make progress in resolving our disagreements. If I've misinterpreted and you'd like to do Paths Forward, let me know.

http://fallibleideas.com/paths-forward

Please focus only on what has been said, not on how it has been said.

Now, there is a possibility that everything on my side is wrong. Of course I think I am right, but everybody thinks that anyway, including this Temple guy with his "don't code yet"! I wonder what people here think about that.

One more disagreement, perhaps. I do think this AlphaGo Zero piece of code is an astonishing example of AI programming, but I have deep doubts about Watson. It was great back in 2011, but now they seem stuck to me.