What I'm really asking is: if some statement turns out to be undecidable under every one of our Tarskian truth-translation maps to models, does that make the conjecture meaningless, or is "undecidable" somehow distinct from "unverifiable"? What is the difference between believing "that conjecture is unverifiable" and believing "that conjecture is undecidable"? Are the expectations/restrictions on experience that those two beliefs offer identical? If so, does that mean the difference between those two beliefs is a syntactic issue?
See Making Beliefs Pay Rent:
http://lesswrong.com/lw/i3/making_beliefs_pay_rent_in_anticipated_experiences/
So, I should acquire additional terminal values so I can have higher absolute utility?
That's either wisdom or absurdity. It goes against my current model of rationality. But it seems to lead to winning, at least from the starting condition of having no values at all and thus not even being able to win or lose.
I guess it shouldn't be surprising that asking a question whose answer mystifies me leads to other questions that also mystify me. Maybe identifying a set of equivalent mysterious problems would be an advance.
As long as my average expected utility over all available choices goes up, I'm down to get more goals, and even lose old ones. But if my average expected utility goes down, then screw getting a new value. Though in general, adding a new value does not imply getting rid of an old one; as long as you keep all your old values, there is no danger in adding a new one (see the sketch below).
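To make the "no danger" part concrete, here is a minimal sketch under two assumptions the comment doesn't actually state: that values combine additively into one utility function, and that the new value's contribution is never negative.

$$U_{\text{old}}(x) = \sum_{i=1}^{n} v_i(x), \qquad U_{\text{new}}(x) = \sum_{i=1}^{n} v_i(x) + v_{n+1}(x).$$

If $v_{n+1}(x) \ge 0$ for every outcome $x$, then $U_{\text{new}}(x) \ge U_{\text{old}}(x)$ everywhere, so the expected utility of each choice, and hence the average over all available choices, can only stay the same or go up. If the new value can conflict with the old ones (negative $v_{n+1}$ on some outcomes, or non-additive interactions between values), that conclusion no longer follows automatically.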