Comment author: Wei_Dai 01 July 2009 06:58:54PM 0 points [-]

Two models can agree on everything you've observed so far, yet diverge in their future predictions. Which model should you give greater weight to? That's the question I'm asking.

Comment author: robzahra 02 July 2009 12:23:58AM 2 points [-]

The best answer we currently know of seems to be: write each hypothesis consistent with your observations in a formal language, weight each hypothesis inverse-exponentially in its length, and renormalize so that your total probability sums to 1. Look up AIXI and the universal prior.
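As a minimal sketch of that weighting (the program-string representation of hypotheses and the predicts() consistency check are assumptions made for illustration, not the actual AIXI formalism):

```python
# Toy sketch of universal-prior-style weighting: each hypothesis is a program
# string; hypotheses consistent with the data get prior weight 2^-length,
# renormalized so the weights sum to 1.

def universal_style_weights(hypotheses, observed, predicts):
    """hypotheses: list of program strings (hypothetical representation).
    observed: the data seen so far.
    predicts(h, observed) -> True if hypothesis h reproduces the observations.
    Returns {h: weight} over the consistent hypotheses."""
    consistent = [h for h in hypotheses if predicts(h, observed)]
    if not consistent:
        return {}
    raw = {h: 2.0 ** -len(h) for h in consistent}   # shorter -> exponentially heavier
    total = sum(raw.values())
    return {h: w / total for h, w in raw.items()}   # renormalize to sum to 1
```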

In response to Readiness Heuristics
Comment author: hrishimittal 15 June 2009 11:35:18AM *  0 points [-]

The True Trolley Dilemma would be where the child is Eliezer Yudkowsky.

Then what would you do?

EDIT: Sorry if that sounds trollish, but I meant it as a serious question.

Comment author: robzahra 16 June 2009 04:42:15PM 1 point [-]

Shutting up and multiplying, the answer is clearly to save Eliezer, and to do so against far more people than just three. The question is more interesting if you ask people what n (presumably greater than 3) is their cutoff point.
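Spelled out, with u(.) standing for whatever utilities one assigns (which is of course the contested part), the cutoff being asked about is just:

```latex
\text{Save Eliezer iff } u(\text{Eliezer}) > n \cdot u(\text{stranger}),
\qquad \text{so the cutoff point is } n^{*} = \frac{u(\text{Eliezer})}{u(\text{stranger})}.
```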

Comment author: MorganHouse 08 May 2009 10:29:38PM 0 points [-]

I don't see any obvious reason why the answer to this question shouldn't be greater than the number of subatomic particles in your body.

Clarification: I am only talking about direct inputs to the decision making process, not what they're aggregated from (which would be the observable universe).

Comment author: robzahra 09 May 2009 01:40:48PM *  1 point [-]

Due to chaotic/non-linear effects, you're not going to get anywhere near the compression you'd need for 33 bits to be enough. I'm very confident the answer is much, much higher.

Comment author: Vladimir_Nesov 27 April 2009 12:40:51PM 0 points [-]

No, you can't ask yourself what you'll do. It's like a calculator that seeks the answer to the question "what is 2+2?" in the form "what will I answer to the question 'what is 2+2?'?", in which case the answer 57 would be perfectly reasonable.

If you are cooperating with your copy, you only know that the copy will perform the same action, which is a restriction on your joint state space. Given this restriction, the expected utility calculation for your actions will return a result different from what other restrictions might force. In this case, you are left with only two options, (C,C) and (D,D), of which (C,C) is better.
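A minimal sketch of that restriction, assuming the standard PD payoffs T=5, R=3, P=1, S=0 (the specific numbers are illustrative only):

```python
# Payoffs to "me" for each joint outcome (my move, copy's move), assumed values.
PAYOFF = {('C', 'C'): 3, ('C', 'D'): 0, ('D', 'C'): 5, ('D', 'D'): 1}

# Unrestricted, all four joint outcomes are on the table; against an exact copy
# the state space collapses to the diagonal, since the copy must match my move.
restricted = [('C', 'C'), ('D', 'D')]
best = max(restricted, key=lambda joint: PAYOFF[joint])
print(best)  # ('C', 'C') -- mutual cooperation wins within the restriction
```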

Comment author: robzahra 27 April 2009 01:04:00PM *  0 points [-]

You're right. Speaking more precisely, by "ask yourself what you would do" I mean "engage in the act of reflecting, wherein you realize the symmetry between you and your opponent, which reduces the decision problem to (C,C) and (D,D), so that you choose (C,C)", as you've outlined above. Note, though, that even when the reduction is not complete (for example, because you're facing a similar but inexact clone), there can still be an added incentive to cooperate.

Comment author: cousin_it 27 April 2009 10:30:53AM *  -1 points [-]

Going slightly off-topic: Eliezer's answer has irked me for a long time, and only now have I got a handle on why. To reliably win by determining whether the opponent one-boxes, we need to be Omega-superior relative to them, almost by the definition of Newcomb's problem. But such powers would allow us to just use the trivial solution: "cooperate if I think my opponent will cooperate".

Comment author: robzahra 27 April 2009 12:26:55PM 0 points [-]

Agreed that in general one will have some uncertainty over whether one's opponent is the type of algorithm that one-boxes, cooperates, is one you want to cooperate with, etc. It does look like you need to plug these uncertainties into your expected utility calculation, so that you decide to cooperate or defect based on your degree of uncertainty about your opponent.

However, in some cases at least, you don't need to be Omega-superior to predict whether another agent one-boxes. For example, if you're facing a clone of yourself, you can just ask yourself what you would do, and you know the answer. There may be some class of algorithms non-identical to you, but still close enough that this self-reflection is increased evidence that your opponent will cooperate if you do.
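One way to cash out plugging that uncertainty in, as a sketch: assume a single parameter p for your credence that the opponent's action mirrors yours (1 for an exact clone, lower for merely similar algorithms), and the usual illustrative PD payoffs.

```python
# Expected utility of cooperating vs. defecting when you assign probability p
# to the opponent's action matching your own. The mirror-with-probability-p
# model and the payoff values T, R, P, S are assumptions for illustration.
def cooperate_is_better(p, T=5.0, R=3.0, P=1.0, S=0.0):
    eu_cooperate = p * R + (1 - p) * S   # they mirror you, or they defect instead
    eu_defect    = p * P + (1 - p) * T   # they mirror you, or they cooperate instead
    return eu_cooperate > eu_defect

# With these payoffs, cooperating wins once p exceeds (T - S) / (T - S + R - P) = 5/7.
print(cooperate_is_better(0.9))  # True: a close-enough clone makes cooperation pay
```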

Comment author: Nick_Tarleton 26 April 2009 11:36:54PM *  6 points [-]

Some little things:

  • "Professional field" should be multiple-choice.
  • What do you mean by "spiritual" under "religious views" – believe in the supernatural? take mysticism seriously in a way compatible with naturalism?
  • On p(Aliens), does "the Universe" mean past light cone, present surface of observable universe, or entire (potentially infinite) continuum? How about other Everett branches?
  • A definition of "supernatural" before the p(God) question would be nice.
  • "Three Worlds Ending" might benefit from a "clear preference for specific other outcome" option.
  • Similarly, at least some of the PD and other game theory/superrationality-related questions could have something like "different clear preferences depending on unspecified details of the situation".

Comment author: robzahra 27 April 2009 03:34:14AM *  1 point [-]

Agreed with Tarleton, the prisoner's dilemma questions do look under-specified. E.g., Eliezer has said something like he'd cooperate if he thinks his opponent one-boxes on Newcomb-like problems. Maybe you could have a write-in box here and figure out how to map the answers to simple categories later, depending on the variety of survey responses you get.

Comment author: robzahra 27 April 2009 03:27:38AM *  3 points [-]

On the belief-in-God question, rule out simulation scenarios explicitly. I assume you intend "supernatural" to rule out a simulation creator counting as a "god"?

Comment author: robzahra 27 April 2009 03:26:09AM *  3 points [-]

On marital status, distinguish "single and looking for a relationship" from "single and looking for people to casually romantically interact with".

Comment author: Cameron_Taylor 24 April 2009 10:14:28PM 2 points [-]

Okay? Do whatever you want to do. If you know the expected value you assign to your cryopreservation and the expected value you assign to the life-saving you could be doing with your organs, then it's simple.

Eliezer's say-so matters only inasmuch as he may be able to help with the math of translating your preferences into a coherent utility function.

Comment author: robzahra 24 April 2009 11:30:02PM *  2 points [-]

Seems worth mentioning: I think a thorough treatment of what "you" want needs to address extrapolated volition and all the associated issues that raises.
To my knowledge, some of those issues remain unsolved, such as whether different simulations of oneself in different environments necessarily converge (seems to me very unlikely, and this looks provable in a simplified model of the situation), and if not, how to "best" harmonize their differing opinions. Similarly, whether a single simulated instance of oneself might fail to converge, or fail to provably converge, on one utility function as simulated time goes to infinity (seems quite likely, and moreover provable, in a simplified model), etc.
If conclusive work has been done of which I'm unaware, it would be great if someone could link to it.
It seems unlikely to me that we can satisfactorily answer these questions without at least a detailed model of our own brains, linked to reductionist explanations of what it means to "want" something.

In response to comment by robzahra on Winning is Hard
Comment author: whpearson 05 April 2009 08:45:16PM 1 point [-]

My point is slightly different from the NFL theorems. They say that for any given way of exhaustively searching, there are problems on which that search will find the optimum last.

I'm trying to say there are problems where exhaustive search is something you don't want to do, e.g. seeing what happens when you stick a knife into your heart, or jumping into a bonfire. These problems also exist in real life, whereas it's harder to make the case that the NFL problems exist in real life for any specific agent.

In response to comment by whpearson on Winning is Hard
Comment author: robzahra 17 April 2009 09:52:31AM 0 points [-]

Wh- I definitely agree with the point you're making about knives etc., though I think one interpretation of the NFL theorems, as applying not just to search but also to optimization, makes your observation an instance of one type of NFL. Admittedly, there are some fine-print assumptions, which I think go under the term "almost no free lunch" when discussed.
