
Open thread, Apr. 24 - Apr. 30, 2017

0 gilch 24 April 2017 07:43PM

If it's worth saying, but not worth its own post, then it goes here.


Notes for future OT posters:

1. Please add the 'open_thread' tag.

2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)

3. Open Threads should start on Monday, and end on Sunday.

4. Unflag the two options "Notify me of new top level comments on this article" and "

In response to comment by gilch on Cheating Omega
Comment author: WalterL 23 April 2017 06:04:25AM 1 point [-]

I'm simplifying, but I don't think it's really strawmanning.

There exists no procedure that the Chooser can perform after Omega sets down the box and before they open it that will cause Omega to reward a two-boxer or fail to reward a one-boxer. Not X-raying the boxes, not pulling a TRUE RANDOMIZER out of a portable hole. Omega is defined as part of the problem, and fighting the hypothetical doesn't change anything.

He correctly rewards your actions in exactly the same way that the law in the Prisoner's Dilemma hands you your points. Writing long articles about how you could use a spoon to tunnel through and overhear the other prisoner, and that if anyone doesn't have spoons in their answers they are doing something wrong... isn't even wrong; it's solving the wrong problem.

What you are fighting, Omega's defined perfection, doesn't exist. Sinking effort into fighting it is dumb. The idea that people need to 'take seriously' your shadow boxing is even more silly.

Like, say we all agree that Omega can't handle 'quantum coin flips', or, heck, dice. You can just re-pose the problem with Omega2, who alters reality such that nothing that interferes with his experiment can work. Or walls that are unspoonable, to drive the point home.

In response to comment by WalterL on Cheating Omega
Comment author: gilch 24 April 2017 12:11:01AM *  0 points [-]

Writing long articles about how you could use a spoon to tunnel through and overhear the other prisoner, and that if anyone doesn't have spoons in their answers they are doing something wrong...

Another strawman. Strawman arguments may work on some gullible humans, but don't expect them to sway a rationalist.

You can just re-pose the problem with Omega2, who alters reality

You're not being very clear, but it sounds like you're assuming a contradiction. You can't assert that Omega2 both does and does not alter the reality of the boxes after the choice. If you allow a contradiction you can do whatever you want, but it's not math anymore. We're not talking about anything useful. Making stuff up with numbers and the constraint of logic is math. Making stuff up with numbers and no logic is just numerology.

not pulling a TRUE RANDOMIZER out

I think this is the crux of your objection: I think agents based on real-world physics are the default, and an agent-minus-QRNG (quantum random number generator) problem is an additional constraint. A special case. You think that classical-only agents are the default, and classical + QRNG is the special case.

Recall how an algorithm feels from the inside. Once we know all the relevant details about Pluto, you can still ask, "But is it really a planet?" But at this point, understand that we're not talking about Pluto anymore; we're talking about our own language. Thus, which case is really the default should be irrelevant. We should be able to taboo "planet", use alternate names, and talk intelligently about either case. But recall that the OP specifically assumes a QRNG:

This is regardless of how well Omega can predict your choice. Given quantum dice, Newcomb's problem is not Newcomblike.

While it's a useful intuition pump, the above argument doesn't appear to require the Many Worlds interpretation to work. (Though Many Worlds is probably correct.) The dice may not even have to be quantum. They just have to be unpredictably random.

The qualification "given quantum dice" is not vacuous. A simple computer algorithm isn't good enough against a sufficiently advanced predictor. Pseudorandom sequences can be reproduced and predicted. The argument requires actual hardware.
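To make the pseudorandomness point concrete, here is a minimal sketch (my illustration, not anything from the original post): a seeded software generator can be copied and replayed exactly, so a predictor who knows the seed and the algorithm "predicts" every flip, while a hardware source has no internal state to copy. The seed and the number of flips are arbitrary choices.

```python
# Illustrative only: the seed (42) and the flip count are arbitrary.
import os
import random

chooser = random.Random(42)  # the Chooser's pseudorandom "coin"
omega = random.Random(42)    # Omega runs an identical copy of the algorithm

chooser_flips = [chooser.random() < 0.5 for _ in range(10)]
omega_guesses = [omega.random() < 0.5 for _ in range(10)]
assert chooser_flips == omega_guesses  # Omega matches every pseudorandom flip

# A hardware source has no seed or algorithm to copy; e.g. the OS entropy pool:
print(os.urandom(4), os.urandom(4))  # two independent draws, not reproducible
```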

Pretending that I didn't assume that, when I specifically stated that I had, is logically rude.

What you are fighting, Omega's defined perfection, doesn't exist. Sinking effort into fighting it is dumb. The idea that people need to 'take seriously' your shadow boxing is even more silly.

Why do we care about Newcomblike problems? Because they apply to real-world agents. Like AIs. It's useful to consider.

Omniscience doesn't exist. Omega is only the limiting case, but Newcomblike reasoning applies even in the face of an imperfect predictor, so Newcomblike reasoning still applies in the real world. QRNGs do exist in the real world, and IF your decision theory can't account for them, and use them appropriately, then it's the wrong decision theory for the real world. Classical + QRNG is useful to think about. It isn't being silly to ask other rationalists to take it seriously, and I'm starting to suspect you're trolling me here.

But we should be able to talk intelligently about the other case. Are there situations where it's useful to consider an agent minus its QRNG? Sure, if the rules of the game stipulate that the Chooser promises not to use one. That's clearly a different game than in the OP, but perhaps closer to the original formulation in the paper that g_pepper pointed out. In that case, you one-box. We could even say that Omega claims to never offer a deal to those he cannot predict accurately. If you know this, you may be motivated to be more predictable. Again, a different game.

But can it look like the game in the OP to the Chooser? Can the Chooser think it's in classical + QRNG when, in fact, it is not? Perhaps, but it's contrived. It is unrealistic to think a real-world superintelligence can't build a QRNG, given access to real-world actuators. But if you boxed the AI Chooser in a simulated world (denying it real actuators), you could provide it with a "simulated QRNG" that is not, in fact, quantum. Maybe you generate a list of numbers in advance; then you could create a "simulated Omega" that can predict the "simulated QRNG" due to outside-the-box information, but of course not a real one.
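Here is a minimal sketch of that boxed-Chooser scenario, with hypothetical names of my own (SimulatedQRNG, SimulatedOmega) and a made-up pregenerated list: inside the simulation the generator looks random, but anything outside the box that can read the list predicts it perfectly.

```python
# Illustrative only: class names and the pregenerated list are assumptions,
# not anything specified in the comment above.
PREGENERATED = [0.71, 0.02, 0.58, 0.33]  # fixed in advance, outside the box

class SimulatedQRNG:
    """Looks like a quantum coin from inside the simulation; really a replay."""
    def __init__(self, numbers):
        self.numbers = list(numbers)
        self.i = 0

    def flip(self):
        heads = self.numbers[self.i] < 0.5
        self.i += 1
        return heads

class SimulatedOmega:
    """Has outside-the-box access to the same list, so it predicts perfectly."""
    def __init__(self, numbers):
        self.copy = SimulatedQRNG(numbers)

    def predict_flip(self):
        return self.copy.flip()

qrng = SimulatedQRNG(PREGENERATED)
omega = SimulatedOmega(PREGENERATED)
assert all(omega.predict_flip() == qrng.flip() for _ in PREGENERATED)
```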

But this isn't The Deal either. This is isomorphic to the case where Omega cheats by loading your dice to match his prediction after presenting the choice (or an accomplice does this for him), thus violating The Deal. The Chooser must choose, not Omega, or there's no point. With enough examples the Chooser may suspect it's in a simulation. (This would probably make it much less useful as an oracle, or more likely to escape the box.)

In response to Cheating Omega
Comment author: Good_Burning_Plastic 20 April 2017 08:23:37AM 1 point [-]

Now, I've pre-committed that after Omega offers me The Deal, I'll make two quantum coin flips. If I get two tails in a row, I'll two-box. Otherwise, I'll one-box.

Omega predicted that and put the large box in a quantum superposition entangled with those of the coins, such that it will end up containing $1M if you get at least a head and containing an equal mass of blank paper otherwise.

Comment author: gilch 21 April 2017 11:00:27PM 0 points [-]

Interesting. Why the equal mass? Omega would need Schrodinger's box, that is, basically no interaction with the outside world lest it decohere. I'm not sure you could weigh it. Still, quantum entanglement and superposition are real effects that may have real-world consequences for a decision theory.

We can inflate a quantum event to macroscopic scales, as with Schrödinger's cat: you have a vial of something reactive in the box to destroy the money, and a hammer triggered by a quantum event.

But isn't that altering The Deal? If Omega is allowed to change the contents of the box after your choice, then it's no longer a Newcomblike problem and just an obvious quid pro quo that any of the usual decision theories could handle.

I'm not sure I understand the setup. Can you cause entanglement with the coins in advance just by knowing about them? I thought it required interaction. I don't think Omega is allowed that access, or you could just as easily argue that Omega could interact with the Chooser's brain to cause the predicted choice. Then it's no longer a decision; it's just Omega doing stuff.

In response to comment by gilch on Cheating Omega
Comment author: Lumifer 21 April 2017 08:40:02PM *  0 points [-]

I am trying to say that you use words in a careless and imprecise manner.

I also don't "believe" in Many Worlds, though since there are guaranteed to be no empirical differences between the MWI and Copenhagen, I don't care much about that belief: it pays no rent.

In response to comment by Lumifer on Cheating Omega
Comment author: gilch 21 April 2017 09:22:19PM *  0 points [-]

And the winner is...

you use words in a careless and imprecise manner

(The pot calls the kettle black.) Natural languages like English are informal. Some ambiguity can't be helped. We do the best we can and ask clarifying questions. Was there a question in there?

guaranteed to be no empirical differences

Assuming Omega's near-omniscience, we just found one! Omega can reliably predict the outcome of a quantum coin flip in a Copenhagen universe (since he knows the future), but can't "predict" which branch we'll end up in given a Many Worlds multiverse, since we'll be in both. (He knows the futures, but it doesn't help.)

So let's not assume that. Now we can both agree Omega is unrealistic, and only useful as a limiting case for real-world predictors. Since we know there's no empirical difference between interpretations, it follows that any physical approximation of near-omniscience can't predict the outcome of quantum coin flips. My strategy still works.

In response to comment by gilch on Cheating Omega
Comment author: Oscar_Cunningham 20 April 2017 10:30:09AM 0 points [-]

Sure, let's say Omega calculates the probability that you two-box and removes that proportion of the money from box A. Then your optimal strategy is to one-box with as much probability as you can.

Comment author: gilch 21 April 2017 08:54:31PM 0 points [-]

I think that does follow, but you're altering The Deal. This is a different game.

The only thing Omega is allowed to do is fill the box, or not, in advance. As established in the OP, however, Omega can reduce the expected value by predicting less accurately. But over multiple games, this tarnishes Omega's record and makes future Choosers more likely to two-box.
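To put rough numbers on this, here's a quick sketch using the conventional Newcomb payoffs ($1,000,000 in the big box; the $1,000 small-box figure is my assumption, since it isn't stated in this thread) and the two-coin strategy quoted earlier, which two-boxes with probability 1/4. It also works out Oscar's variant above, where Omega removes the two-boxing proportion of the money in advance.

```python
# Rough illustration; the $1,000 small-box payoff is an assumed conventional value.
BIG, SMALL = 1_000_000, 1_000
p_two_box = 0.25  # two quantum coin flips; two-box only on two tails

# The Deal: Omega either fills the big box or leaves it empty, in advance.
ev_if_filled = (1 - p_two_box) * BIG + p_two_box * (BIG + SMALL)  # 1,000,250
ev_if_empty = (1 - p_two_box) * 0 + p_two_box * SMALL             # 250

# Oscar's variant above: Omega instead leaves (1 - p_two_box) of the million
# in the big box, so the expected take is maximized by never two-boxing (p = 0).
ev_oscar = (1 - p_two_box) * BIG + p_two_box * SMALL               # 750,250

print(ev_if_filled, ev_if_empty, ev_oscar)
```

On these assumed numbers, Omega leaving the box empty against a mostly-one-boxing Chooser is exactly what "predicting less accurately" costs, while in Oscar's variant the mixed strategy simply shaves money off the big box.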

Comment author: Oscar_Cunningham 20 April 2017 02:15:16PM 0 points [-]

The probabilities are based on Omega's state of knowledge. The original problem assumes that Omega is near-omniscient, so that he is extremely likely to make a correct prediction. If you assume that it's possible at all to make a random choice then you must have some "hidden" source of information that Omega can't see. Otherwise the strategy in the original post wouldn't even work: Omega would know how your "random" choice was going to come out, so every time you two-boxed you would find the box empty, and vice versa.

So when I said "probability" I meant the probability as judged by Omega based on his near total knowledge of your brain and your environment, but with no knowledge of some source of randomness that you can use to generate decisions.

Comment author: gilch 21 April 2017 08:49:31PM *  0 points [-]

Many Worlds is deterministic. What relevant information is hidden? Omega can predict with certainty that both outcomes happen in the event of a quantum coin flip, in different Everett branches. This is only "random" from a subjective point of view, after the split. Yet given the rules of The Deal, Omega can only fill the box, or not, in advance of the split.

In response to comment by gilch on Cheating Omega
Comment author: WalterL 21 April 2017 07:06:14PM 0 points [-]

This shouldn't be tough. He gives you the box. You flip a coin. You open or don't. He saw that coming. You get what he gave you.

Fancy talk doesn't change his ability to know what you are gonna do. You might as well say that another version of you had a heart attack before they could open any boxes, so your plan is bad, as say that another version of you tricked Omega, so your plan is good.

In response to comment by WalterL on Cheating Omega
Comment author: gilch 21 April 2017 08:20:06PM 0 points [-]

Consider that down voted. You're totally strawmanning. You're not taking this seriously, and you're not listening, because you're not responding to what I actually said. Did you even read the OP? What are you even talking about?

In response to comment by gilch on Cheating Omega
Comment author: Lumifer 20 April 2017 01:53:48AM 1 point [-]

Two existing futures

Ain't no such thing.

In response to comment by Lumifer on Cheating Omega
Comment author: gilch 21 April 2017 08:16:12PM 0 points [-]

Consider that down voted. It's too ambiguous. I can't tell what you're trying to say. Are you just nitpicking that both worlds have the same value on the t axis? Are you just signaling that you don't believe in many worlds? Is there some subtlety of quantum mechanics I missed, you'd like to elaborate on? Are you just saying there's no such thing as randomness?

Comment author: eternal_neophyte 18 April 2017 05:59:51PM 0 points [-]

If I may take a stab at this: it's probably a combination of 1) it costs a lot, 2) the benefit isn't expected for many decades, and 3) there's no guarantee that it would work.

Anyone taking a heuristic approach to reasoning about whether to sign up for cryonics, rather than a probabilistic one (which isn't irrational if you have no way to estimate the probabilities involved), could therefore easily evaluate it as not worth doing.

Comment author: gilch 21 April 2017 02:59:39AM 0 points [-]

The religious might also see it as an attempt to cheat God, which rarely ends well in the mythology.

Comment author: gilch 18 April 2017 06:22:15PM 1 point [-]

It seems to be a common desire around here. See the akrasia tactics thread. I started reading Mini Habits after seeing it recommended there. The technique looks promising for your problem.

Comment author: gilch 21 April 2017 02:46:00AM 4 points [-]

I finished the book. It's not that long. I'll try to summarize the thesis.

Your capacity to work is based on three forces: motivation, willpower, and habit. Motivation is too unreliable: sometimes you have it, sometimes you don't. Habits are those behaviors that are easier to do than to not do; habits are the most reliable. But they have a chicken and egg problem. You can't use a habit you don't have. Willpower is the most useful, but you have a very limited supply of it; when willpower is overtaxed you can't use it until it recharges enough.

The mistake of most of the self help genre is to focus on motivation. Forget about motivation. You can't control it reliably. You should instead focus on willpower, but considering its limited supply, you must spend it efficiently by bootstrapping just a few habits at a time. Make a daily goal of "stupid simple" positive behaviors you can accomplish with little appreciable effort, that you can FORCE yourself to do even at the last minute, with a headache, while sleep deprived. The deadline is when you fall asleep. Something like reading just two pages of a book, or writing fifty words, or a single push up. If those sound too hard, think of something even easier. Maybe you just open the book. Maybe you just write a single word.

Your abstract goals may be lofty, but your concrete goals must be humble. When you've established a framework of habit, you are free to surf the waves of motivation to do "bonus reps". Read more pages, write more words, do more push ups. But only when you feel like it. It's very important psychologically to count the stupid simple behavior alone as a success. Because you've maintained the habit. Often the hardest part of work is starting. Your mini habit will set you in motion. At that point it's often easier to keep moving. Over time you'll entrench the habit, build willpower by exercising it, and accumulate some real accomplishment.

Once it's a real habit (i.e. easier to do than to not do), then it's no longer costing willpower and you try to add another one.

There are other details in the book. (And parts of it are probably worthless.) What the mindset looks like. How to avoid certain common failure modes. A particularly important one is about breaking your streak. If you accidentally miss a day, it can be very discouraging. Building a habit is like riding a bike up a hill. It's harder to do than to not do, until you reach the top. Don't think of a missed day as a broken link in the chain. Think of it as sliding down the hill, but not all the way down the hill. You've lost progress, but not all progress. This is not an excuse to skip days. It's an excuse to continue even if you miss one by accident. It's better to keep going.

Does the book seem worth reading? If you can't muster the willpower to check it out, just try the technique on one mini habit for a week. Let me know how it goes.
