Wes_W comments on prediction and capacity to represent - Less Wrong

-5 Post author: sbenthall 04 November 2014 06:09AM




Comment author: RichardKennaway 04 November 2014 03:16:35PM 1 point

I'm not clear on the distinction you're drawing. Can you give a concrete example? Of course, you could have a causal model of the internals which was wrong but gave the same answers as the right one, for the observations you are able to make. But it is not clear how a causal model of what you will see when you interact with the agent could fail to be a causal model, accurate or otherwise, of the agent's internals.

Comment author: Wes_W 04 November 2014 04:18:36PM 2 points

I'm not clear on the distinction you're drawing. Can you give a concrete example?

I don't know how cars work, but almost nothing my car does can surprise me. Only unusual one-off problems require help from somebody who knows the internal structure.

But cars are designed to be usable by laypeople, so this may be an unfair example.

Comment author: sbenthall 08 November 2014 06:13:24AM 0 points

You don't know anything about how cars work?

Comment author: Wes_W 10 November 2014 04:40:09AM 1 point

I have a model of what inputs produce what outputs ("pressing on the gas pedal makes the engine go; not changing the oil every few months makes things break"). I do not have a causal model of the internals of the system.

At best I can make understandish-sounding noises about engines, but I could not build or repair one, nor even identify all but the most obvious parts.
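The distinction being drawn here could be sketched in code. Below is a minimal, purely illustrative example (the mapping and function names are hypothetical, not from the discussion): a black-box behavioral model that predicts outputs from inputs, while representing nothing at all about the mechanism that connects them.

```python
# Hypothetical sketch: a purely behavioral (input -> output) model
# of a car. It predicts what the driver will observe, but contains
# no causal model of the engine's internals.
behavioral_model = {
    "press_gas_pedal": "engine_goes",
    "skip_oil_changes": "things_break",
}

def predict(action):
    # Look up the expected outcome; novel one-off problems fall
    # outside the model, matching the commenter's experience.
    return behavioral_model.get(action, "unknown")

print(predict("press_gas_pedal"))   # engine_goes
print(predict("alternator_whine"))  # unknown
```

The point of the sketch is that such a model can be predictively adequate for everyday use while asserting nothing about internal structure, which is why unusual failures require someone who does have a model of the internals.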