ShardPhoenix comments on A Pragmatic Epistemology - Less Wrong

Post author: StephenR 05 August 2014 05:43AM


Comment author: ShardPhoenix 05 August 2014 08:45:56AM · 1 point

If the relevant behavior of the brain is computable (which seems likely to me), isn't there then a computable algorithm that does everything that you can do, if not better? I understand if you're objecting to overly simplistic models, but the idea that there is no one single (meta-)model that is most correct seems wrong in principle if not in present-day practice.

Comment author: Fhyve 05 August 2014 06:02:25PM · 0 points

The computable algorithm isn't a meta-model, though. It's just you in a different substrate. It's not something the agent can run to figure out what to do, because running it would necessarily take more computing power than the agent has. And nothing prevents such a pragmatic agent from having a computable universe-model, from trying to find a computable algorithm that approximates itself, or from copying that algorithm over and over.

Comment author: StephenR 05 August 2014 11:39:20PM · 1 point

I'm fine with agents being better at achieving their goals than I am, whether or not computational models of the brain succeed. We can model this phenomenon in several ways: algorithms, intelligence, resource availability, conditioning pressures, and so on.

But "most correct" isn't something I feel comfortable applying as a blanket term across all models. If we're going to talk about the correctness (or maybe "accuracy," "efficiency," "utility," or whatever) of a model, I think we should use goals as a modulus. So we'd be talking about optimal models relative to this or that goal, and a most correct model would be a model that performs best relative to all goals. There isn't currently such a model, and even if we thought we had one it would only be best in the goals we applied it to. Under those circumstances there wouldn't be much reason to think that it would perform well under drastically different demands (i.e. that's something we should be very uncertain about).