timtyler comments on Open Thread September, Part 3 - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Does anyone else agree that, as a piece of expository writing, that document sucks bigtime?
111 pages! I got through about 25 and found myself wondering why Eliezer thought I needed to hear what each of his four friends had decided when presented with the Newcomb's Soda problem, and that some people refer to this problem as Solomon's problem. So, I decided to skim ahead until he started talking about TDT. And I skimmed and skimmed.
Finally, I got to section 14, entitled "The timeless decision procedure". "Aha!", I thought. "Finally." The first paragraph consists of one very long and confusing sentence which at least seems to deal with the timeless decision procedure.
It might have been easier to understand if expressed as an equation or formula containing, you know, variables and things. So I read on, hoping to find something I could sink my teeth into. But then the second paragraph begins:
and closes with
As far as I can tell, the remainder of this section entitled "The timeless decision procedure" consists of this justification - not from first principles, but by way of an example. And Eliezer never appears to get back to the task of providing a "formal presentation of a timeless decision algorithm".
So, I skipped forward to the end, hoping to read the conclusions. Instead I found:
Followed by a bibliography containing a single entry: a chapter from a 1978 collection of articles on applications of decision theory.
"...was cut off here ..."? Give me a break!
Let me know when you get it down to a dozen pages or so.
ETA: A cleaned-up copy of the paper exists, with a more complete bibliography and without the "manuscript was cut off here" closing.
I think this needs rewriting so it doesn't sound so circular - and only mentions the word "conditional" once.
It seems to me that we can just say that it maximises utility - while maintaining an awareness that there may be other agents out there running its decision algorithm, in addition to all the other things it knows.
I think the stuff about "conditional upon the abstract computation returning that output" is pretty much implied by the notion of utility maximisation.
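For what it's worth, the rule I have in mind can be sketched in a few lines of toy Python applied to Newcomb's problem. The payoffs are the standard ones; the function names are my own, not from the paper - this is just an illustration, not the paper's formalism:

```python
# Toy sketch of the decision rule under discussion: pick the output that
# maximises utility, computed under the supposition that every instance of
# this algorithm (including the predictor's model of it) returns that output.

def newcomb_payoff(action):
    # The predictor is assumed to run the same algorithm, so its
    # prediction necessarily matches the agent's actual output.
    prediction = action
    big_box = 1_000_000 if prediction == "one-box" else 0
    small_box = 1_000
    if action == "one-box":
        return big_box
    else:  # "two-box"
        return big_box + small_box

def choose(actions, payoff):
    # "Conditional upon the abstract computation returning that output":
    # evaluate each candidate output as if it were the algorithm's output
    # everywhere the algorithm is instantiated, then maximise utility.
    return max(actions, key=payoff)

print(choose(["one-box", "two-box"], newcomb_payoff))  # → one-box
```

The "conditional" business is all in `prediction = action` - once you grant that the predictor's copy of the computation returns whatever the agent's copy returns, plain utility maximisation does the rest.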