LW's front page freezes, hangs and bugs on Chrome

2 Bongo 19 July 2011 07:38PM

Browser is Chrome 12.0.742.122. It doesn't happen on Firefox. "It" is:

  • sometimes I can't click on links and eventually I get Chrome's "dead tab" notification
  • other times it keeps loading, even though while I wait for it to load I can go to, say, my user page and have it load right away
  • twice I've gotten weird graphical bugs on it. Screenshots: 1 2

To reiterate, it only happens on the front page, the one you get when you go to lesswrong.com. Other pages are fine. Perhaps it's the map?

LW's image problem: "Rationality" is suspicious

-2 Bongo 19 July 2011 06:16PM

Concerning Less Wrong's tagline, consider this plausible reaction of someone looking at LW for the first time:

Cut the crap, nobody cares about rationality in the abstract. Just tell me what view you're trying to push under the guise of presenting it as the only "rational" one.

And here are two real quotes from 2009:

[concerning the ban on SIAI discussion during the first weeks of LW] I think it was so that newcomers wouldn't think that LW are a bunch of fringe technophiles that just want to have their cause associated with rationality.

And in reply:

But that's pretty much what LW is, no? I've long suspected that "rationality," as discussed here, was a bit of a ruse designed to insinuate a (misleading) necessary connection between being rational and supporting transhumanist ideals.

The quoted text speaks for itself, really. I therefore think LW's admins/web designers should seriously consider replacing the rationality tagline with something more savory.

lessannoying.org

7 Bongo 26 May 2011 08:40PM

Extremely Counterfactual Mugging or: the gist of Transparent Newcomb

5 Bongo 09 February 2011 03:20PM

Omega will either award you $1000 or ask you to pay him $100. He will award you $1000 if he predicts you would pay him if he asked. He will ask you to pay him $100 if he predicts you wouldn't pay him if he asked. 

Omega asks you to pay him $100. Do you pay?

This problem is roughly isomorphic to the branch of Transparent Newcomb (version 1, version 2) where box B is empty, but it's simpler.
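Under the puzzle's assumption that Omega is a perfect predictor, the payoff of each policy can be tabulated in a few lines. A minimal Python sketch (the function name is my own):

```python
# Expected payoff of each policy in Extremely Counterfactual Mugging,
# assuming Omega predicts perfectly.

def payoff(would_pay_if_asked: bool) -> int:
    """Return the payoff for an agent committed to the given policy."""
    if would_pay_if_asked:
        # Omega predicts the agent would pay, so he awards $1000
        # and never actually asks for the $100.
        return 1000
    else:
        # Omega predicts refusal, asks for $100, and the agent refuses.
        return 0

print(payoff(True))   # committed payer
print(payoff(False))  # committed refuser
```

So a policy of paying nets $1000 and is never actually asked to pay, while a policy of refusing nets $0 — which is what makes being asked, and still paying, counterintuitive.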

Here's a diagram:

Punishing future crimes

3 Bongo 28 January 2011 09:00PM

Here's an edited version of a puzzle from the book "Chuck Klosterman IV" by Chuck Klosterman.

It is 1933. Somehow you find yourself in a position where you can effortlessly steal Adolf Hitler's wallet. The theft will not affect his rise to power, the nature of WW2, or the Holocaust. There is no important identification in the wallet, but the act will cost Hitler forty dollars and completely ruin his evening. You don't need the money. The odds that you will be caught committing the crime are negligible. Do you do it?

When should you punish someone for a crime they will commit in the future? Discuss.

Omega can be replaced by amnesia

15 Bongo 26 January 2011 12:31PM

Let's play a game. Two times, I will give you an amnesia drug and let you enter a room with two boxes inside. Because of the drug, you won't know whether this is the first time you've entered the room. The first time, both boxes will be empty. The second time, box A will contain $1000, and box B will contain $1,000,000 iff you took only box B the first time. You're in the room. Do you take both boxes, or only box B?

This is equivalent to Newcomb's Problem in the sense that any strategy does equally well on both, where by "strategy" I mean a mapping from info to (probability distributions over) actions.
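The equivalence can be sketched directly, assuming the amnesia forces a deterministic strategy to take the same action on both visits (the function name and strategy labels are illustrative):

```python
# The amnesia game: with amnesia, a deterministic strategy must pick the
# same action on both visits. Box contents follow the rules above.

def play(strategy: str) -> int:
    """strategy is 'both' or 'B-only'; returns total winnings."""
    total = 0
    # First visit: both boxes are empty, so nothing is won.
    took_only_b_first = (strategy == "B-only")
    # Second visit: A holds $1000; B holds $1,000,000 iff
    # the player took only B the first time.
    a = 1000
    b = 1_000_000 if took_only_b_first else 0
    total += (a + b) if strategy == "both" else b
    return total

print(play("B-only"))  # one-boxing
print(play("both"))    # two-boxing
```

One-boxing wins $1,000,000 and two-boxing wins $1000 — the same payoffs the corresponding strategies earn in Newcomb's Problem with a perfect predictor.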

I suspect that any problem with Omega can be transformed into an equivalent problem with amnesia instead of Omega.

Does CDT return the winning answer in such transformed problems?

Discuss.


Pascal's Gift

7 Bongo 25 December 2010 07:42PM

 If Omega offered to give you 2^n utils with probability 1/n, what n would you choose?

This problem was invented by Armok from #lesswrong. Discuss.
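The expected utility of choosing n is (1/n)·2^n, which grows without bound, so a pure expected-utility maximizer has no optimal finite n. A quick Python sketch (exact arithmetic via `fractions`; the function name is my own):

```python
from fractions import Fraction

def expected_utils(n: int) -> Fraction:
    """Expected utility of choosing n: 2**n utils with probability 1/n."""
    return Fraction(2**n, n)

# Expected utility is non-decreasing and eventually explodes:
for n in (1, 2, 10, 100):
    print(n, expected_utils(n))
```

Note that the probability of getting anything at all, 1/n, goes to zero as the expected utility diverges — the same tension as in Pascal's Mugging.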

Should LW have a public censorship policy?

16 Bongo 11 December 2010 10:45PM

It might mollify people who disagree with the current implicit policy, and make discussion about the policy easier. Here's one option:

There's a single specific topic that's banned because the moderators consider it a Basilisk. You won't come up with it yourself, don't worry. Posts talking about the topic in too much detail will be deleted. 

One requirement would be that the policy be no more and no less vague than needed for safety.

Discuss.

Does TDT pay in Counterfactual Mugging?

1 Bongo 29 November 2010 09:31PM

On one hand, this old article said TDT doesn't pay. On the other hand, I imagine TDT not paying would be a slam-dunk argument for favoring UDT, which pays, and I haven't seen people make that argument. So I'm confused here. Thanks.

Edit: this wiki page explains all the jargon

Sleeping Beauty as a decision problem (solved)

4 Bongo 10 October 2010 03:15AM

EDIT: User:Misha solved it

First, here's the Sleeping Beauty problem, from Wikipedia:

The paradox imagines that Sleeping Beauty volunteers to undergo the following experiment. On Sunday she is given a drug that sends her to sleep. A fair coin is then tossed just once in the course of the experiment to determine which experimental procedure is undertaken. If the coin comes up heads, Beauty is awakened and interviewed on Monday, and then the experiment ends. If the coin comes up tails, she is awakened and interviewed on Monday, given a second dose of the sleeping drug, and awakened and interviewed again on Tuesday. The experiment then ends on Tuesday, without flipping the coin again. The sleeping drug induces a mild amnesia, so that she cannot remember any previous awakenings during the course of the experiment (if any). During the experiment, she has no access to anything that would give a clue as to the day of the week. However, she knows all the details of the experiment.

Each interview consists of one question, "What is your credence now for the proposition that our coin landed heads?"

I was looking at AlephNeil's old post about UDT and encountered this diagram depicting the Sleeping Beauty problem as a decision problem.

This diagram is underspecified, though. There are no specific payoffs in the boxes and it's not obvious what actions the arrows mean. So I tried to figure out some ways to transform the Sleeping Beauty problem into a concrete decision problem. I also made edited versions of AlephNeil's diagram for versions 1 and 2.

The gamemaster puts Sleeping Beauty to sleep on Sunday. He uses a sleeping drug that causes mild amnesia such that upon waking she won't be able to remember any previous awakenings that may have taken place during the course of the game. The gamemaster flips a coin. If heads, he wakes her up on Monday only. If tails, he wakes her up on Monday and Tuesday.

Version 1

Upon each awakening, the gamemaster asks Sleeping Beauty to guess which way the coin landed. For each correct guess, she's awarded $1000 at the end of the game. diagram

Version 2

Upon each awakening, the gamemaster asks Sleeping Beauty to guess which way the coin landed. If all of her guesses are correct, she's awarded $1000 at the end of the game. diagram

Version 3

Upon each awakening, the gamemaster asks Sleeping Beauty for her credence as to whether the coin landed heads. For each awakening, if the coin landed x, and she declares a credence of p that it landed x, she's awarded p*$1000 at the end of the game.

Version 4

Upon each awakening, the gamemaster asks Sleeping Beauty for her credence as to whether the coin landed heads. At the end of the game, her credences in the actual outcome are averaged into a single probability p, and she's awarded p*$1000.

What's interesting is that while the suggested answers for the classic Sleeping Beauty problem are 1/2 and 1/3, in neither version 1 nor 2 is the correct strategy to guess heads every second or third time, and in neither version 3 nor 4 is the correct answer to declare a credence of 1/2 or 1/3. The correct answers (correct me if I'm wrong; I got these by looking at AlephNeil-style UDT diagrams and doing back-of-the-envelope calculations) are:

  • Version 1: Always guess tails. Expected payoff $1000.
  • Version 2: Always guess heads, or always guess tails. Expected payoff $500.
  • Version 3: Declare a 0% credence of heads. Expected payoff $1000.
  • Version 4: All answers seem to have an expected payoff of $500.
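These numbers can be checked with a small brute-force calculation, assuming a fair coin, that the amnesia forces the same answer at every awakening, and (for version 4) that the averaged quantity is Beauty's credence in the actual outcome (function names are mine):

```python
# Expected payoffs for the four versions. guess is "H" or "T"; q is
# Beauty's declared credence that the coin landed heads. In the tails
# world there are two awakenings; in the heads world, one.

def v1(guess):  # $1000 per correct guess
    heads_world = 1000 if guess == "H" else 0
    tails_world = 2000 if guess == "T" else 0   # two correct guesses
    return 0.5 * heads_world + 0.5 * tails_world

def v2(guess):  # $1000 iff every guess is correct
    heads_world = 1000 if guess == "H" else 0
    tails_world = 1000 if guess == "T" else 0
    return 0.5 * heads_world + 0.5 * tails_world

def v3(q):  # per awakening: credence in the actual outcome * $1000
    return 0.5 * (q * 1000) + 0.5 * (2 * (1 - q) * 1000)

def v4(q):  # credences in the actual outcome averaged, then paid once
    return 0.5 * (q * 1000) + 0.5 * ((1 - q) * 1000)

print(v1("T"), v2("H"), v2("T"), v3(0.0), v4(0.5))
```

Under these assumptions v1("T") = 1000, v2 gives 500 for either guess, v3 is maximized at q = 0 with payoff 1000, and v4 is 500 for every q — matching the list above.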

Is there any way to transform Sleeping Beauty into a decision problem such that the correct answer in some sense is either (1/2) or (1/3)?

Is there a general procedure for transforming problems about credence into decision problems?
