Great post! It clears up many of the misunderstandings and misconceptions I had after reading your last one.

I certainly have to agree: my concept of an implicit model is very problematic. And control systems are creepy!

Isn’t a model of the outside world built into the robot’s design – implicitly? Surely it has no explicit knowledge of the outside world, yet it was built in a certain way so that it can counteract outside forces. Randomly throwing together a robot will most certainly not get you such behaviour – but design (or evolution!) will give you a robot with an implicit model of the outside world (and maybe, at some point, one that can formulate explicit models). I wouldn’t be so quick to throw away the notion of a model.

I find the perspective very intriguing, but I think of it more as nature’s (or a human designer’s) way of building quick-and-dirty, simple and efficient machines. To achieve that goal, implicit models are very important. There is no magic – you need a model, albeit an implicit one.
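To make the “implicit model” idea concrete, here is a minimal toy sketch of my own (not from the original post): a bare proportional controller in Python. The code contains no representation of the room it regulates, yet the gain and setpoint only do their job because the designer picked them with that room in mind – the model is implicit in the design.

```python
# Minimal sketch of a proportional controller (my own toy example).
# It holds no explicit representation of the outside world: it just
# compares a sensed value to a reference and pushes back. The "model"
# lives implicitly in the designer's choice of GAIN and SETPOINT.

GAIN = 0.5          # chosen by the designer with this environment in mind
SETPOINT = 20.0     # desired temperature

def controller(sensed_value):
    """Output an action proportional to the error. No world model inside."""
    error = SETPOINT - sensed_value
    return GAIN * error

# Toy environment: a room that leaks heat and receives the heater's output.
temperature = 5.0
for step in range(50):
    action = controller(temperature)
    # environment dynamics the controller never "knows" about
    temperature += action - 0.1 * (temperature - 5.0)

# Settles a bit below the setpoint (the classic proportional-control
# offset), counteracting the heat leak it has no explicit knowledge of.
print(round(temperature, 1))
```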

Why ask for political parties? Political views are complicated; if all you can do is pick a party, that complexity is lost. Those of us not from the US (like myself) might additionally have a hard time picking a party.

Those are not easy problems to solve, and it is certainly questionable whether coming up with some more specific questions about political views and throwing them all together would get you meaningful, reliable and valid results. As long as you cannot do better than that, asking just for the preferred political party is certainly good enough.

Again, that is certainly true.

But the Calvinist who decides to live a sinful life visibly violates the rules. Even a dumb God who only sets up a simple list of rules and parses your behaviour according to that list would notice that. I certainly have no doubts about the possibility of finding holes in God’s rules so that you would qualify as virtuous even if other humans would most likely not see you that way.

But as far as prediction is concerned, there is no outwitting. Finding holes in the rules is not the same as finding out where prediction models break down (and if God can predict perfectly, there is no point at which his model will break down). I think that’s an important distinction to make. You can certainly outwit the rules (if they have holes); you cannot outwit prediction (if it is perfect).

Yes, certainly, but that is beside the point. The problem here is about actually violating the rules and whether you can get away with it.

Can one say that the God of the Calvinists and the alien of Newcomb’s Problem both have the ability to perfectly predict (at least specific things about) the future?

And isn’t having that ability exactly the same as having a crystal ball that can actually look into the future? Isn’t being able to predict the future with 100 percent certainty then the same as having the ability to actually look into the future? Then, I think, it might be possible to say that the God or the alien cannot be outwitted. Anything you do – no matter what – has been correctly predicted or is actually seen in God’s or the alien’s crystal ball. If you two-box, the alien has predicted just that; if you are not virtuous, God has predicted just that. Your brain cannot change that. Your brain cannot escape perfect prediction. All escape attempts will trigger God to throw you in hell and the alien to leave you with just $1000.

I think this is to an extent even true if God or the alien is wrong some of the time – if they are only able to predict the future accurately 99 percent of the time. One would only be able to outwit the alien if one were to know under which circumstances the alien’s predictions break down. And as long as we are talking about random failures to predict correctly, a God or alien that is 99 percent accurate still has an almost perfect ability to look into the future. Does this have implications for our decisions if we know that someone else can predict them with some accuracy? I think we should strive to know others’ models of our decision-making and to know when those models break down. That could be useful.
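To put rough numbers on that (my own back-of-the-envelope sketch, assuming the standard $1000 / $1,000,000 Newcomb payoffs and a predictor whose failures are random), here is what different accuracy levels do to the expected payoffs:

```python
# Expected payoffs in Newcomb's Problem, assuming the usual $1,000
# (transparent box) and $1,000,000 (opaque box) amounts and a predictor
# that is right with probability p, failing at random.

def expected_payoffs(p):
    one_box = p * 1_000_000                  # big box filled iff one-boxing was foreseen
    two_box = 1_000 + (1 - p) * 1_000_000    # you only get the million if the predictor erred
    return one_box, two_box

for p in (1.0, 0.99, 0.9):
    one, two = expected_payoffs(p)
    print(f"accuracy {p:.2f}: one-box ${one:,.0f} vs two-box ${two:,.0f}")

# prints roughly:
# accuracy 1.00: one-box $1,000,000 vs two-box $1,000
# accuracy 0.99: one-box $990,000 vs two-box $11,000
# accuracy 0.90: one-box $900,000 vs two-box $101,000
```

So as long as the failures are random, dropping from perfect to 99 percent accuracy barely changes the picture; only knowing *when* the predictor fails would let you do better.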