
LessWrong comments on Consequences of the Non-Existence of Perfect Theoretical Rationality - Less Wrong Discussion

-1 Post author: casebash 09 January 2016 01:22AM


Comment author: LessWrong 09 January 2016 10:54:41AM 4 points

I might become a little bit more unpopular because of this, but I really want to say I find this rather abstract, and in fact something I can't really apply to real life. That's the main problem with your ideas: you changed the universe and the laws by which things operate, and you removed the element of time. That's so fundamentally different from where I am that I might agree with everything you say, but coming back to my own mortal self, the only thing I can think is "Nope".

But I'm not going to be a complete and utter fucktard, because you're really putting some effort into your posts and they're more interesting than Gleb's markov-chain-esque links, so I'll be a little more constructive.

What does this actually mean for the real world?

I'm quite confused by your fixation on "complete rational agent". The highest "value" is abstract. Let's give grades to decisions.

BAD DECISION (1) ----------------------------------- (100) GOOD DECISION

We can say that 100 is the "complete rational agent". But that doesn't mean the agent at 95 didn't make a spectacular decision. How much of a difference is there between 100 and 95? I can't tell, because we're at too high a level of abstraction.

That's where you go down a level, put on your giant glasses, and stare at 1s and 0s for a good fifteen minutes. OK, that was an exaggeration, but still: we must expand on what makes a decision good, and examine the building blocks that made it good.

There's also some sort of paradox here, and I'm probably missing a crucial part of it, but if "perfect theoretical rationality" cannot be achieved, does that mean the closest value to it becomes the new perfect theoretical rationality? But that can't be achieved either, so it must be a smaller value, which also cannot be achieved, and so on. Then again, after a few of those steps we're clearly quite distant from "perfect theoretical rationality."
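The regress above can be made concrete with a toy sketch (my framing, not from the post): normalize "perfect theoretical rationality" to 1.0 and suppose only scores strictly below 1.0 are achievable. Then there is no "closest achievable value" — any candidate is beaten by the midpoint between it and 1.0. The function name `strictly_better` is mine.

```python
def strictly_better(r: float) -> float:
    """Return an achievable score closer to 1.0 than r (still below 1.0)."""
    return (r + 1.0) / 2.0

# Start at the "95" from the grading scale above, rescaled to 0.95.
r = 0.95
for _ in range(5):
    nxt = strictly_better(r)
    assert r < nxt < 1.0  # always improvable, yet never reaching 1.0
    r = nxt
```

So "the best achievable level of rationality" is not well-defined here, which is exactly the regress the comment points at.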

Is missing out on utility bad?

Wouldn't the rational agent postpone stating his desired utility rather than hope for a good enough number? If it's finite, as you say, it can be measured, and we can say (suffering + 1). If it's infinite, well... we've been there before, and you've stated it again in your example.

But this is an unwinnable scenario, so a perfectly rational agent would just pick a number arbitrarily? Sure, you don't get the most utility, but why does that matter?

Again: infinity, different laws of the universe, sudden teleportation to an instance of failure... Neal Stephenson would love this stuff.

What other consequences are there?

f: maximum recursion depth reached

Comment author: casebash 09 January 2016 12:09:50PM 0 points

"But if "perfect theoretical rationality" cannot be achieved, does that mean that the closest value to it is the new perfect theoretical rationality?"

Good question. No matter what number you pick, someone else could have done a million times better, or a billion times better, or a million billion times better, so you are infinitely far from being perfectly rational.

"Wouldn't the rational agent postpone saying his desired utility rather than hope for a good enough number?"

The idea is that you don't know how much suffering there is in the universe, so no matter how large a number you pick, there could be more, in which case you've lost.
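A hedged toy model of the game as described in this reply (the names `play` and `unknown_suffering` are mine, and the win condition is my reading, not a quote from the original post): you commit to a number n, and you "win" only if n is at least the universe's unknown finite amount of suffering.

```python
def play(n: int, unknown_suffering: int) -> bool:
    """You win iff your named number covers the universe's suffering."""
    return n >= unknown_suffering

# Whatever n you commit to, there is a possible universe with
# suffering n + 1, in which you lose.
n = 10 ** 100
assert not play(n, unknown_suffering=n + 1)
```

This is why no finite pick can guarantee a win: the adversarial quantifier order (you pick first, the universe's value is unknown) means every strategy of the form "name N" is dominated by "name N + 1", with no top element.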