Comment author: Punoxysm 27 April 2014 09:27:18PM *  8 points [-]

I have to say, I seriously don't get the Bayesian vs Frequentist holy wars. It seems to me the ratio of the debate's importance to the education of its participants is ridiculously low.

Bayesian and frequentist methods are sets of statistical tools, not sacred orders to which you pledge a blood oath. Just understand the usage of each tool, and the fact that virtually any model of something that happens in the real world is going to be misspecified.

Comment author: Oscar_Cunningham 27 April 2014 09:45:16PM 7 points [-]

It's because Bayesian methods really do claim to be more than just a set of tools. They are supposed to be universally applicable.

Comment author: Oscar_Cunningham 27 April 2014 08:46:58PM -1 points [-]

So much for starting open threads on a Monday.

Comment author: Oscar_Cunningham 24 April 2014 05:38:57PM 8 points [-]

There's already an open thread! Metus started it early so that they would begin on Mondays.

Comment author: Oscar_Cunningham 23 April 2014 10:12:28PM *  4 points [-]

Is there any way I can delete my userpage or set up a redirect so that when I click on my name it takes me to my comments page like it used to?

Comment author: Skeptityke 16 April 2014 05:42:03PM 8 points [-]

What are the most effective charities working towards reducing biotech or pandemic x-risk? I see those mentioned here occasionally as the second most important x-risk behind AI risk, but I haven't seen much discussion on the most effective ways to fund their prevention. Have I missed something?

Comment author: Oscar_Cunningham 16 April 2014 06:42:05PM 0 points [-]

Note that Friendly AI (if it works) will defeat all (or at least a lot of) x-risk. So AI has a good claim to being the most effective at reducing x-risks, even the ones that aren't AI risk. If you anticipate an intelligence explosion but aren't worried about UFAI then your favourite charity is probably some non-MIRI AI research lab (Google?).

Comment author: wadavis 13 April 2014 06:55:55PM *  0 points [-]

That's right, but I want to double-check our connotations. "Acts" feels like faking or intentional signalling. How about this: Wadavis v1.1 does not defect against kin and kind (other Wadavis versions, in this case) so that future kin and kind will cooperate with him. It's less a matter of acting and more a matter of these being the rules Wadavis follows while dealing with Wadavis: a home-team bot cooperates with all other home-team bots, even if defecting has a higher payoff for the tempted version (see the sketch below). Schelling fences and such.

This all hinges on Wadavis v1.0 cooperating and having some sort of confidence that all future versions will cooperate. I think this is where it comes together: Wadavis v1.0 can simulate the behavior of future versions. If future versions cooperate, v1.0 cooperates. If future versions defect, v1.0 will defect and not invest in helping v2.0.
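(A minimal illustrative sketch of the home-team-bot rule above, in the style of program-equilibrium bot tournaments; the class name, team tag, and the way bots identify each other are all assumptions, not anything from the thread.)

    COOPERATE, DEFECT = "C", "D"

    class HomeTeamBot:
        """Cooperates with any bot on the same team, defects against others."""
        def __init__(self, team="wadavis"):
            self.team = team

        def play(self, opponent):
            # The rule Wadavis follows while dealing with Wadavis: same-team
            # bots always get cooperation, whatever the one-shot payoff says.
            if getattr(opponent, "team", None) == self.team:
                return COOPERATE
            return DEFECT

    v1_0, v1_1 = HomeTeamBot(), HomeTeamBot()
    assert v1_0.play(v1_1) == COOPERATE   # versions cooperate with each other
    assert v1_0.play(object()) == DEFECT  # outsiders get defection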

Comment author: Oscar_Cunningham 13 April 2014 07:30:07PM 1 point [-]

Yep. I didn't mean "act" as in "perform in a play" but as in "carry out an action".

Comment author: wadavis 13 April 2014 12:45:33AM *  0 points [-]

I'm going to need to go back and brush up on the money pump concept. But for now I'm on board with the view that that would be mugging my body double, as drethelin said.

Wadavis v1.0 cares about all future versions of Wadavis. He accepts the deal and improves the life of the body double, Wadavis v2.0. Wadavis v1.1 is the planetside post-copy of Wadavis v1.0; he accepts the second deal and reduces the quality of life of Wadavis v2.0. It is a clear payout with no downside.

Wadavis v1.1 is a jerk who denied Wadavis v2.0 (who, remember, includes Wadavis v1.0 in their identity) agency over their own future. Wadavis v1.1 just mugged Wadavis v2.0 for the money Wadavis v1.0 paid for the better life.

Now, if Wadavis v1.0 were rational and cared for all future Wadavis versions, would he cooperate (pay) if he knew Wadavis v1.1 would defect (take the second option)? No, that would be foolish. So Wadavis v0.0 has precommitted to respect the rights and freedom of (cooperate with) all versions of Wadavis, e.g. not mug them of a luxury bought and paid for.

Make sense?

Comment author: Oscar_Cunningham 13 April 2014 03:16:59PM 1 point [-]

So Wadavis v0.0 has precommitted to respect the rights and freedom of (cooperate with) all versions of Wadavis, e.g. not mug them of a luxury bought and paid for.

Okay, that makes sense. So Wadavis v1.1 doesn't care much about Wadavis v2.0, but he acts like he cares a lot?

Comment author: drethelin 11 April 2014 11:56:12PM 0 points [-]

That's not really a money pump, since you have to spend whatever resources it takes to create a bunch of clones and torture them.

Comment author: Oscar_Cunningham 12 April 2014 12:11:22AM 0 points [-]

The point isn't whether I (the pumper) make a profit, it's whether you (the pumpee) make a loss.

Comment author: wadavis 11 April 2014 03:39:39PM 1 point [-]

Pre-copying I would care greatly. Post-copying I would mourn my body double's suffering or celebrate their joy mildly.

From the future copy's experience and mine, we were once the same, so the present me is invested in the well-being of both. Post-copy we have split, and we care for each other to the extent that kin and kind care for each other.

For example, consider a lottery you have a 50% chance to win: before the draw you are greatly invested in the outcome; after the draw you barely give a thought to the alternate-timeline you that could have won.

Comment author: Oscar_Cunningham 11 April 2014 09:26:29PM 1 point [-]

Pre-copying I would care greatly. Post-copying I would mourn my body double's suffering or celebrate their joy mildly.

I think this turns you into a money pump. Pre-split there's some amount of money you will pay to have it be the other person experiencing pain rather than your double. Post-split you'll need less money given back to you to incentivise you to let it be your double rather than the other person.
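(Illustrative numbers, assumed rather than taken from the thread, showing how those stated preferences bleed money across the split:)

    # Pre-split: you pay the pumper so that a stranger, not your future
    # double, is the one who will experience the pain.
    pay_pre_split = 100

    # Post-split: you now care only mildly about your double, so a small
    # refund persuades you to let the pain be switched back onto them.
    accept_post_split = 10

    # The pumper collects 100, waits for the split, pays out 10 to undo the
    # swap, and the pain lands where it would have gone anyway...
    net_loss = pay_pre_split - accept_post_split
    assert net_loss == 90  # ...while you end up strictly poorer: a money pump.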

Comment author: JoshuaZ 09 April 2014 11:18:53AM 0 points [-]

How about I choose a prisoner at random from among all the prisoners in the problem? What is the probability that the prisoner I have chosen has correctly stated the color of the hat on his head?

So what do you mean by choosing a prisoner at random when there are infinitely many prisoners?

Comment author: Oscar_Cunningham 09 April 2014 01:00:02PM 0 points [-]

Whatever mwengler's answer is, your answer is going to have to be "in that case the set you asked about isn't measurable, and so I can't assign it a probability".
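(For context, the standard measure-theoretic reason there's no uniform way to pick one of countably infinitely many prisoners: a uniform distribution would have to give each prisoner the same weight c, but countable additivity then forces

    P(\mathbb{N}) = \sum_{n=1}^{\infty} c = \begin{cases} 0, & c = 0 \\ \infty, & c > 0 \end{cases}

neither of which equals 1, so no such uniform distribution exists.)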
