Comments
jslocum140

I really like the idea overall.

Serious ideas:

  • games that help explain ideas like 'screening off' variables, rules for propagating information up and down different branches of the network, etc. (see the sketch after this list)

  • more advanced topics like estimating the normalization constant for a very large hypothesis space?

  • more advanced gameplay mode where you have a scenario and a list of hidden and observable variables, and have to figure out what shape the network should take - you then play out the scenario with the network you made - success requires having constructed the network well!
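In case a concrete illustration helps for the 'screening off' item: a minimal sketch, assuming a toy three-node chain A -> B -> C with made-up probabilities, that brute-forces the joint distribution to show that conditioning on B screens A off from C. A real game would presumably use proper message passing rather than enumeration.

```python
# Minimal sketch of "screening off" in a three-node chain A -> B -> C,
# using brute-force enumeration over a hypothetical joint distribution.
# All probabilities here are made-up illustration values.
from itertools import product

P_A = {True: 0.3, False: 0.7}           # P(A)
P_B_given_A = {True: 0.9, False: 0.2}   # P(B=True | A)
P_C_given_B = {True: 0.8, False: 0.1}   # P(C=True | B)

def joint(a, b, c):
    pa = P_A[a]
    pb = P_B_given_A[a] if b else 1 - P_B_given_A[a]
    pc = P_C_given_B[b] if c else 1 - P_C_given_B[b]
    return pa * pb * pc

def prob(pred):
    return sum(joint(a, b, c)
               for a, b, c in product([True, False], repeat=3)
               if pred(a, b, c))

# P(C | B) vs P(C | B, A): once B is known, also learning A changes nothing.
p_c_given_b = prob(lambda a, b, c: b and c) / prob(lambda a, b, c: b)
p_c_given_b_a = prob(lambda a, b, c: a and b and c) / prob(lambda a, b, c: a and b)
print(p_c_given_b, p_c_given_b_a)   # equal: B screens A off from C
```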

Bad jokes:

  • A character named Gibbs who runs an energy drink stand and gives out free samples.

  • The Count of Monte Carlo should make an appearance.

  • A face-off against agents of the evil Frequentist, hell bent on destroying all that is (probably) good and (likely to be) held dear.

jslocum-30

Mathematics is a mental construct created to reliably manipulate abstract concepts. You can describe mathematical statements as elements of the mental models of intelligent beings. A mathematical statement can be considered "true" if, when intelligent beings use the statement in their reasoning, their predictive power increases. Thus, " '4+4=8' is true" implies statements like "jslocum's model of arithmetic predicts that '4+4=8', which causes him to correctly predict that if he adds four carrots to his basket of four potatoes, he'll have eight vegetables in his basket."

I'm not sure that "use the statement in their reasoning" and "their predictive power increases" are well-formed concepts, though, so this might need some refining.

jslocum10

Anecdotes are poisonous data, and it is best to exclude them from your reasoning when possible. They are subject to a massive selection bias. At best they are useful for inferring the existence of something, e.g. "I once saw a plesiosaur in Loch Ness." Even then the inference is tenuous, because all you know is that there is at least one individual who says they saw a plesiosaur. Inferring the existence of a plesiosaur requires that you have additional supporting evidence that assigns a high probability that they are telling the truth, that their memory has not changed significantly since the original event, and that the original experience was genuine.
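To make that last point concrete, here is a minimal worked Bayes update, with entirely hypothetical numbers, showing how little a single report moves a very low prior unless the chance of a false report is itself very low.

```python
# A rough sketch of why one anecdote barely moves a low prior.
# All numbers are hypothetical illustration values, not estimates.
prior = 1e-6                # P(plesiosaur exists in Loch Ness)
p_report_if_true = 0.5      # P(someone reports a sighting | it exists)
p_report_if_false = 0.01    # P(someone reports a sighting | it doesn't:
                            #   hoax, misidentification, false memory)

posterior = (p_report_if_true * prior) / (
    p_report_if_true * prior + p_report_if_false * (1 - prior)
)
print(posterior)   # ~5e-5: higher than the prior, but still overwhelmingly unlikely
```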

jslocum20

Here is a spreadsheet with the numbers for the Exercise example crunched and the graph reasoning explained in a slightly different way:

https://docs.google.com/spreadsheet/ccc?key=0ArkrB_7bUPTNdGhXbFd3SkxWUV9ONWdmVk9DcVRFMGc&usp=sharing
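For anyone who prefers code to a spreadsheet, this is a rough sketch of the same kind of number-crunching: estimate conditional probabilities from counts and check whether two variables look independent once a third is conditioned on. The variable names and counts below are placeholders, not the post's actual data.

```python
# Hypothetical counts over three binary variables (exercise, overweight, internet).
counts = {
    # (exercise, overweight, internet): count
    (True,  True,  True): 10,  (True,  True,  False): 10,
    (True,  False, True): 40,  (True,  False, False): 40,
    (False, True,  True): 60,  (False, True,  False): 60,
    (False, False, True): 40,  (False, False, False): 40,
}

def p(pred, given=lambda e, o, i: True):
    # P(pred | given), estimated from the counts above.
    num = sum(c for (e, o, i), c in counts.items() if pred(e, o, i) and given(e, o, i))
    den = sum(c for (e, o, i), c in counts.items() if given(e, o, i))
    return num / den

# Does Internet use tell you anything about Overweight once Exercise is known?
print(p(lambda e, o, i: o, given=lambda e, o, i: e))          # P(O | E)
print(p(lambda e, o, i: o, given=lambda e, o, i: e and i))    # P(O | E, I) -- same here
```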

jslocum00

I find myself particularly susceptible to the pitfalls avoided by skill 4. I'll have to remember to explicitly invoke the Tarski method next time I catch myself in the act of attempting to fool myself.

One scenario not listed here in which I find it particularly useful to think explicitly about my own map is when the map is blurry (e.g. low-precision knowledge: "the sun will set some time between 5pm and 7pm") or splotchy (e.g. explicit gaps in my knowledge: "I know where the red and blue cups are, but not the green cup"). When I bring my map's flaws explicitly into my awareness, it allows me to make plans which account for the uncertainty of my knowledge, and to come up with countermeasures.
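A throwaway sketch of what planning around a blurry map entry can look like, using the sunset interval above; the planning rule and the hike duration are just illustrative assumptions.

```python
# Make the "blurry" map entry explicit as an interval, then plan against
# the worst case inside it.
from datetime import time

sunset_earliest, sunset_latest = time(17, 0), time(19, 0)   # "between 5pm and 7pm"

def latest_safe_departure(hike_hours: float) -> time:
    # Be back before the *earliest* possible sunset, so the plan survives
    # the full range of uncertainty.
    total_minutes = sunset_earliest.hour * 60 + sunset_earliest.minute - int(hike_hours * 60)
    return time(total_minutes // 60, total_minutes % 60)

print(latest_safe_departure(2.5))   # 14:30 -- leaves margin for the uncertainty
```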

jslocum180

(Conversely, many fictions are instantiated somewhere, in some infinitesimal measure. However, I deliberately included logical impossibilities into HPMOR, such as tiling a corridor in pentagrams and having the objects in Dumbledore's room change number without any being added or subtracted, to avoid the story being real anywhere.)

In the library of books of every possible string, close to "Harry Potter and the Methods of Rationality" and "Harry Potter and the Methods of Rationalitz" is "Harry Potter and the Methods of Rationality: Logically Consistent Edition." Why is the reality of that book's contents affected by your reluctance to manifest that book in our universe?

jslocum30

I received an email on the 19th asking for additional information about myself. So I'm guessing that as of the 19th they were still not done selecting.

jslocum30

I've devised some additional scenarios that I have found to be helpful in contemplating this problem.

Scenario 1: Omega proposes Newcomb's problem to you. However, there is a twist: before he scans you, you may choose one of two robots to perform the box opening for you. Robot A will only open the $1M box; Robot B will open both.

Scenario 2: You wake up and suddenly find yourself in a locked room with two boxes, and a note from Omega: "I've scanned a hapless citizen (not you), predicted their course of action, and placed the appropriate amount of money in the two boxes present. Choose one or two, and then you may go."

In scenario 1, both evidential and causal decision theories agree that you should one-box. In scenario 2, they both agree that you should two-box. Now, if we replace the robots with your future self and the hapless citizen with your past self, S1 becomes "what should you do prior to being scanned by Omega?" and S2 reverts to the original problem. So now, dismissing the possibility of fooling Omega as negligible, it can be seen that maximizing the payout from Newcomb's problem is really about finding a way to cause your future self to one-box.
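For reference, here is a quick expected-value sketch of why getting your future self to one-box is worth the trouble. It assumes the standard $1M/$1K payouts and an assumed prediction accuracy p; nothing here is specific to the two scenarios above.

```python
# Expected payouts in the standard Newcomb setup, as a function of
# Omega's (assumed) prediction accuracy.
def expected_payout(one_box: bool, p_accurate: float = 0.99) -> float:
    if one_box:
        # The opaque box contains $1M iff Omega correctly predicted one-boxing.
        return p_accurate * 1_000_000
    else:
        # You always get the $1K box; the $1M box is full only if Omega erred.
        return 1_000 + (1 - p_accurate) * 1_000_000

print(expected_payout(True))    # ~$990,000
print(expected_payout(False))   # ~$11,000
```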

What options are available, to either rational agents or humans, for exerting causal power on their future selves? A human might make a promise to themselves (interesting question: is a promise a precommitment or a self-modification?), ask another person (or other agent) to provide disincentives for two-boxing (e.g. "Hey, Bob, I bet you I'll one-box. If I win, I get $1; if you win, you get $1M"), or find some way of modifying the environment to prevent their future self from two-boxing (e.g. drop the second box down a well). A general rational agent has similar options: modify itself into something that will one-box, and/or modify the environment so that one-boxing is the best course of action for its future self.

So now we have two solutions, but can we do better? If rational agent 'Alpha' doesn't want to rely on external mechanisms to coerce its future self's behavior, and also does not want to introduce a hack into its source code, what general solution can it adopt that solves this general class of problem? I have not yet read the Timeless Decision Theory paper; I think I'll ponder this question before doing so, and see if I encounter any interesting thoughts.

jslocum10

It would be better to flip a coin at the beginning of a document to determine which pronoun to use when the gender is unspecified. That way there is no potential for the reader to be confused by two different pronouns referring to the same abstract entity.
