
ChristianKl comments on Consequences of the Non-Existence of Perfect Theoretical Rationality - Less Wrong Discussion

-1 Post author: casebash 09 January 2016 01:22AM




Comment author: ChristianKl 10 January 2016 12:43:58PM 0 points

Basically, TDT is about the fact that we don't evaluate individual decisions for rationality, but agents' strategies. For every strategy an agent has, other agents can punish the agent for having that strategy if they specifically choose to do so.

If, for example, there's a God/Matrix overlord who rewards people for believing in him even when they have no rational reason to do so, that would invalidate a lot of decision theories.

Basically, decisions are entangled with the agent that makes them; they are not independent, as you assume in your model. That entanglement rules out perfect decision-making strategies whose performance is independent of the world and of whether other agents decide to specifically discriminate against certain strategies.
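The claim above can be made concrete with a toy sketch (the strategy names and payoff function here are hypothetical illustrations, not part of any decision-theory library): for every strategy an agent might run, there exists an environment that specifically punishes that strategy, so no strategy can be optimal across all environments.

```python
# Toy illustration: for any strategy, an adversary that specifically
# targets that strategy makes it score worse than every alternative.

STRATEGIES = ["CDT", "EDT", "TDT"]

def payoff(strategy, punished):
    """Environment that pays 1 to every agent except those
    running the specifically punished strategy."""
    return 0 if strategy == punished else 1

# For each strategy there is an environment in which it does
# strictly worse than all alternatives.
for s in STRATEGIES:
    scores = {t: payoff(t, punished=s) for t in STRATEGIES}
    assert scores[s] < min(v for t, v in scores.items() if t != s)
```

The point of the sketch is only that "best strategy" is not well-defined independently of which environments (and which punishing agents) the strategy will face.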

Comment author: casebash 10 January 2016 01:06:30PM 0 points

Do you have a quote where Eliezer says rationality doesn't exist? I don't believe that he'd say that. I think he'd argue that it is okay for rationality to fail when rationality is specifically being punished.

Regardless, I disagree with TDT, but that's a future article.

Comment author: ChristianKl 10 January 2016 01:20:50PM 0 points

Do you have a quote where Eliezer says rationality doesn't exist?

No, but I don't think that he shares your idea of perfect theoretical rationality anyway. From that standpoint there's no need to say that it doesn't exist.

I think it's Eliezer's position that it's good enough that TDT provides the right answer when the other agent doesn't specifically choose to punish TDT.

But I don't want to dig up a specific quote at the moment.

Regardless, I disagree with TDT, but that's a future article.

You are still putting assumptions about non-entanglement into your models. Questions about the actions of non-entangled actors are like asking how many angels can dance on the head of a pin.

They are far removed from the kind of rationality found in the Sequences. You reason far away from reality. That's a typical problem with thinking too much in terms of hypotheticals instead of grounding your reasoning in real-world references.

Comment author: casebash 10 January 2016 01:25:19PM 0 points

In order to show that it is a problem, you would have to find a real-world situation that I've called incorrectly because of my decision to start reasoning from a theoretical model.

Comment author: ChristianKl 10 January 2016 01:35:24PM 0 points

In general that's hard without knowing many real-world situations that you have judged, but there is a recent example: the fact that different people will come up with different definitions of knowledge if you ask them. That's a case of concentrating too much on a definition and too little on actually engaging with what different people think.