
My Fundamental Question About Omega

6 MrHen 10 February 2010 05:26PM

Omega has appeared to us in puzzles, games, and questions. The basic concept behind Omega is that it is (a) a perfect predictor and (b) not malevolent. The practical implications of these points are that (a) it doesn't make mistakes and (b) you can trust its motives in the sense that it really, honestly doesn't care about you. This bugger is True Neutral and is good at it. And it doesn't lie.

A quick peek at Omega's presence on LessWrong reveals Newcomb's problem and Counterfactual Mugging as the most prominent examples. For those who missed them, other articles include Bead Jars and The Lifespan Dilemma.

Counterfactual Mugging was the most annoying for me, however, because I thought the answer was completely obvious, and apparently it isn't. Instead of going around in circles with a complicated scenario, I decided to find a simpler version that reveals what I consider to be the fundamental confusion about Omega.

Suppose that Omega, as defined above, appears before you and says that it predicted you will give it $5. What do you do? If Omega is a perfect predictor, and it predicted you will give it $5... will you give it $5?
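
To make the structure explicit, here is a minimal sketch (my own illustration, not from the post) that enumerates the possible prediction/action pairs. With a perfect predictor, any pair where prediction and action disagree simply cannot occur:

    # Enumerate outcomes of the $5 scenario. A perfect predictor rules
    # out any world where its prediction and your action disagree; the
    # question is only which consistent world you are standing in.
    for predicted_pay in (True, False):
        for you_pay in (True, False):
            consistent = predicted_pay == you_pay
            payoff = -5 if you_pay else 0
            status = "possible" if consistent else "ruled out by perfect prediction"
            print(f"predicted={predicted_pay}, pay={you_pay}, payoff={payoff}: {status}")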

The answer to this question is probably obvious but I am curious if we all end up with the same obvious answer.


Too much feedback can be a bad thing

6 Kaj_Sotala 11 April 2009 02:05PM

Didn't have the time to read the article itself, but based on the abstract, this certainly sounds relevant for LW:

Recent advances in information technology make it possible for decision makers to track information in real-time and obtain frequent feedback on their decisions. From a normative sense, an increase in the frequency of feedback and the ability to make changes should lead to enhanced performance as decision makers are able to respond more quickly to changes in the environment and see the consequences of their actions. At the same time, there is reason to believe that more frequent feedback can sometimes lead to declines in performance. Across four inventory management experiments, we find that in environments characterized by random noise more frequent feedback on previous decisions leads to declines in performance. Receiving more frequent feedback leads to excessive focus on and more systematic processing of more recent data as well as a failure to adequately compare information across multiple time periods.
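
As a toy illustration of the noise-chasing mechanism the abstract describes (a hypothetical sketch of mine, not the paper's actual experiments, with made-up parameters), compare an agent that re-anchors on every new observation with one that averages across periods:

    import random

    random.seed(1)
    TRUE_DEMAND, NOISE, PERIODS = 100, 15, 1000
    demand = [TRUE_DEMAND + random.gauss(0, NOISE) for _ in range(PERIODS)]

    WINDOW = 10

    def mean_abs_error(orders):
        return sum(abs(o - TRUE_DEMAND) for o in orders) / len(orders)

    # Frequent feedback: next order copies the latest noisy observation,
    # i.e. the agent chases the most recent data point.
    frequent = demand[:-1]

    # Less frequent feedback: order the average of up to WINDOW past periods.
    batched = [sum(demand[max(0, t - WINDOW):t]) / min(t, WINDOW)
               for t in range(1, PERIODS)]

    print(f"chase every observation: {mean_abs_error(frequent):.1f}")
    print(f"average {WINDOW} periods: {mean_abs_error(batched):.1f}")

Under these assumptions the period-by-period reactor carries the full noise of each observation, while the averaging agent's error shrinks with the window size.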

Hat tip to the BPS Research Digest.

ETA: Some other relevant studies from the same site; I don't remember which ones have been covered here already:

Threat of terrorism boosts people's self-esteem

The "too much choice" problem isn't as straightforward as you'd think

Forget everything you thought you knew about Phineas Gage, Kitty Genovese, Little Albert, and other classic psychological tales


Harmful Options

23 Eliezer_Yudkowsky 25 December 2008 02:26AM

Previously in series: Living By Your Own Strength

Barry Schwartz's The Paradox of Choice—which I haven't read, though I've read some of the research behind it—talks about how offering people more choices can make them less happy.

A simple intuition says this shouldn't happen to rational agents:  If your current choice is X, and you're offered an alternative Y that's worse than X, and you know it, you can always just go on doing X.  So a rational agent shouldn't do worse by having more options.  The more available actions you have, the more powerful you become—that's how it ought to work.

For example, if an ideal rational agent is initially forced to take only box B in Newcomb's Problem, and is then offered the additional choice of taking both boxes A and B, the rational agent shouldn't regret having more options.  Such regret indicates that you're "fighting your own ritual of cognition" which helplessly selects the worse choice once it's offered you.

But this intuition only governs extremely idealized rationalists, or rationalists in extremely idealized situations.  Bounded rationalists can easily do worse with strictly more options, because they burn computing operations to evaluate them.  You could write an invincible chess program in one line of Python if its only legal move were the winning one.
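To make the computing-cost point concrete, here is a toy sketch (my own, assuming a fixed per-option evaluation cost; none of these names come from the post) of a bounded agent whose performance degrades as strictly worse options are added:

    import random

    def bounded_choice(options, compute_budget):
        # Evaluate options until the compute budget runs out, then act
        # on the best value seen so far. Every option evaluated, even a
        # worthless one, burns budget.
        best, spent = None, 0
        for value in options:
            if spent >= compute_budget:
                break  # out of compute; settle for what we've seen
            spent += 1
            if best is None or value > best:
                best = value
        return best

    random.seed(0)
    best_option = [10]
    worse_options = [random.randint(0, 9) for _ in range(20)]

    # Alone, the best option is found easily.
    print(bounded_choice(best_option, compute_budget=5))  # 10

    # Mixed in with twenty strictly worse options, the budget can be
    # exhausted before the best option is ever evaluated.
    print(bounded_choice(worse_options + best_option, compute_budget=5))

Every added option here is worse than the one the agent already had, yet the agent's outcome still declines, purely because evaluation itself costs something.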

Of course Schwartz and co. are not talking about anything so pure and innocent as the computing cost of having more choices.

If you're dealing, not with an ideal rationalist, not with a bounded rationalist, but with a human being—

Say, would you like to finish reading this post, or watch this surprising video instead?
