All of Kevin_Dick's Comments + Replies

Sweet! I thought I was the only smart kid that tried to emulate the Thundercats. Personally, I identified most with Panthro. I am not ashamed to admit this. Discipline, teamwork, and fighting evil. Oh, and the gadgets. Yes, the gadgets.

How is this not a surface analogy?

Eliezer, I'm actually a little surprised at that last comment. As a Bayesian, I recognize that reality doesn't care whether I feel comfortable saying I "know" an answer. Reality requires me to act on the basis of my current knowledge. If you think AI will go self-improving next year, you should be acting very differently than if you believe it will go self-improving in 2100. The difference isn't as stark for 2025 versus 2075, but it's still there.

What makes your unwillingness to commit even stranger is your advocacy that there's signif... (read more)

Upon first reading, I honestly thought this post was either a joke or a semantic trick (e.g., assuming the scientists were themselves perfect Bayesians, which would require some "There are blue-eyed people" reasoning).

Because theories that can make accurate forecasts are a small fraction of theories that can make accurate hindcasts, the Bayesian weight has to be on the first guy.

In my mind, I see this visually as the first guy projecting a surface that contains the first 10 observations into the future and it intersecting with the actual future. ... (read more)
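A minimal sketch of that weighting, with invented numbers for how much rarer forecasting theories are among hindcast-fitting theories:

```python
# Hypothetical illustration of the forecast-vs-hindcast point.
# Assumption: a theory built only to fit the first 10 observations
# rarely also gets the next 10 right, while a genuinely good theory
# usually does.

p_forecast_given_good_theory = 0.9   # assumed
p_forecast_given_curve_fit = 0.01    # assumed

prior_odds = 1.0  # start the two theorists at even odds (assumption)

# Likelihood ratio from the fact that the first guy predicted the
# second 10 observations before seeing them.
likelihood_ratio = p_forecast_given_good_theory / p_forecast_given_curve_fit

posterior_odds = prior_odds * likelihood_ratio
print(posterior_odds)  # 90.0 -> the weight shifts heavily toward the forecaster
```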

I think you may be attacking a straw man here. When I was taught about the PD almost 20 years ago in an undergraduate class, our professor made exactly the same point. If there are enough iterations (even if you know exactly when the game will end), it can be worth the risk to attempt to establish cooperation via Tit-for-Tat. IIRC, it depends on an infinite recursion of your priors on the other guy's priors on your priors, etc. that the other guy will attempt to establish cooperation. You compare this to the expected losses from a defection in the firs... (read more)
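A rough sketch of that expected-value comparison, using the standard PD payoffs and a made-up prior that the other player also tries to establish cooperation:

```python
# Hypothetical numbers for a finitely repeated Prisoner's Dilemma.
# Standard payoffs per round: mutual cooperation 3, mutual defection 1,
# sucker's payoff 0, temptation 5.
rounds = 100
p_reciprocates = 0.3  # assumed prior that the other player also plays Tit-for-Tat

# If they reciprocate, you both cooperate every round.
ev_if_reciprocated = 3 * rounds
# If they don't, you get suckered once, then fall back to mutual defection.
ev_if_not = 0 + 1 * (rounds - 1)

ev_try_tit_for_tat = (p_reciprocates * ev_if_reciprocated
                      + (1 - p_reciprocates) * ev_if_not)

# Always defecting: exploit a Tit-for-Tat player once, otherwise mutual defection.
ev_always_defect = (p_reciprocates * (5 + 1 * (rounds - 1))
                    + (1 - p_reciprocates) * (1 * rounds))

print(ev_try_tit_for_tat, ev_always_defect)  # ~159.3 vs ~101.2: worth the risk here
```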

[anonymous]

I think you may be attacking a straw man here.

It frustrates me immensely to see how many times this claim is made in the comments of Eliezer's posts. At least 75% of the time I read this, I've personally encountered someone who made the "straw" claim. In this case, consult the first chapter of Ken Binmore's "Playing for Real".

Doesn't this boil down to being able to "put yourself in another's shoes"? Are mirror neurons what's needed to carry out moral reasoning?

This kind of solves the pie division problem. If you are capable of putting yourself in the other guy's shoes and still sincerely believing you should get the whole pie, perhaps there is some information about your internal state that you can communicate to the others to convince them?

Is the essence of morality that you should believe in the same division no matter which position you occupy?

Eliezer, I've been a Believer for 20 years now, so I'm with you. But it seems like you're losing people a little bit on Bayes vs. Science. You've probably already thought of this, but it might make sense to take smaller pedagogical steps here to cover the inferential distance.

One candidate step I thought of was to first describe where Bayes can supplement Science. You've already identified choosing which hypotheses to test. But it might help to list them all out. Off the top of my head, there's also obviously what to do in the face of conflicting experi... (read more)
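As one small illustration of the "conflicting experiments" case, a sketch (with invented likelihood ratios) of how a Bayesian simply multiplies the evidence from both experiments rather than treating the conflict as a crisis:

```python
# Hypothetical: two experiments point in opposite directions.
# Experiment A favors hypothesis H at 5:1; Experiment B disfavors it at 1:2.
# Both likelihood ratios are invented for illustration.
prior_odds = 1.0          # assumed even prior odds on H
lr_experiment_a = 5.0     # evidence for H
lr_experiment_b = 0.5     # evidence against H

posterior_odds = prior_odds * lr_experiment_a * lr_experiment_b
posterior_prob = posterior_odds / (1 + posterior_odds)
print(posterior_prob)  # ~0.71: the conflict resolves into a moderate credence, not a paradox
```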

I just had a thought, probably not a good one, about Many Worlds. It seems like there's a parallel here to the discovery of Natural Selection and understanding of Evolution.

Darwin had the key insight about how selection pressure could lead to changes in organisms over time. But it's taken us over 100 years to get a good handle on speciation and figure out the detailed mechanisms of selecting for genetic fitness. One could argue that we still have a long way to go.

Similarly, it seems like we've had this insight that QM leads to Many Worlds due to decoher... (read more)