
CODT (Cop-Out Decision Theory): in which you precommit to every beneficial precommitment.

I thought that debate was about free will.

This isn't obvious. In particular, note that your "obvious example" violates the basic assumption shared by all these attempted decision theories: that the payoff depends only on your choice, not on how you arrived at it.

Omega simulates you in a variety of scenarios. If you consistently make rational decisions he tortures you.

Since that summer in Colorado, Sam Harris, Richard Dawkins, Daniel Dennett, and Christopher Hitchens have all produced bestselling and highly controversial books—and I have read them all.

The bottom line is this: whenever we Christians slip into interpreting scripture literally, we belittle the Bible and dishonor God. Our best moral guidance comes from what God is revealing today through evidence, not from tradition or authority or old mythic stories.

The first sentence warns against taking the Bible literally, but the next sentence insinuates that we don't even need it...

He's also written a book called "Thank God for Evolution," in which he sprays God all over science to make it more palatable to Christians.

I dedicate this book to the glory of God. Not any "God" we may think about, speak about, believe in, or deny, but the one true God we all know and experience.

If he really is trying to deconvert people, I suspect it won't work. They won't take the final step from his pleasant, featureless god to no god, because the featureless one gives them a warm glow without any intellectual conflict.

How much more information is in the ontogenic environment, then?

Off the top of my head:

  1. The laws of physics

  2. 9 months in the womb

  3. The rest of your organs. (maybe)

  4. Your entire childhood...

These are barriers to developing Kurzweil's simulator in the first place, NOT to implementing it in as few lines of code as possible. A brain-simulating machine might easily fit in a million lines of code, and it could be written by 2020 if the singularity happens first, but not by involving actual proteins. That's idiotic.

One of the facts about 'hard' AI, such as is required for profitable NLP, is that the coders who developed it don't completely understand how it works. If they did, it would just be a regular program.

TLDR: this definitely is emergent behavior - it is information passing between black-box algorithms with motivations that even the original programmers cannot make definitive statements about.

Yuck.

The first two questions aren't about decisions.

"I live in a perfectly simulated matrix"?

This question is meaningless. It's equivalent to "There is a God, but he's unreachable and he never does anything."

It might take O(2^n) more bits to describe BB(2^n+1) as well, but I wasn't sure, so I used BB(2^(n+1)) in my example instead.

You can find it by emulating the Busy Beaver.
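For very small machines this emulation is actually feasible: you can brute-force the Busy Beaver value by enumerating every transition table and running each machine with a step cap. Here's a minimal sketch (my own illustration, not from the thread) for the 2-state, 2-symbol case, where the known maximum is 6 steps and 4 ones; the cap of 100 is a heuristic that is safely above that:

```python
from itertools import product

def busy_beaver_2():
    """Brute-force the 2-state, 2-symbol Busy Beaver.

    Enumerates all transition tables and runs each machine from a
    blank tape, recording the most steps taken and the most ones
    written by any machine that halts within the step cap.
    """
    STEP_CAP = 100  # well above the true maximum of 6 for n=2
    best_steps, best_ones = 0, 0
    # A transition is (symbol to write, head move, next state);
    # next state -1 means halt.
    options = list(product((0, 1), (-1, 1), (0, 1, -1)))
    # Four table entries: (state A, read 0), (state A, read 1),
    # (state B, read 0), (state B, read 1).
    for table in product(options, repeat=4):
        tape = {}          # sparse tape, default symbol 0
        state, pos = 0, 0
        for step in range(1, STEP_CAP + 1):
            write, move, nxt = table[2 * state + tape.get(pos, 0)]
            tape[pos] = write
            pos += move
            if nxt == -1:  # this machine halted
                best_steps = max(best_steps, step)
                best_ones = max(best_ones, sum(tape.values()))
                break
            state = nxt
        # machines that exhaust the cap are treated as non-halting
    return best_steps, best_ones
```

Of course this only works because for n=2 we already know a cap that exceeds the true maximum; for larger n that is exactly the uncomputable part, which is why the general BB function can't be emulated this way.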

Oh.

I feel stupid now.

EDIT: Wouldn't it also break even by predicting the next Busy Beaver number? "All 1's except for BB(1...2^n+1)" is also only slightly less likely. EDIT: I feel more stupid.

Quite right. Suicide rates spike in adolescence, go down, and only spike again in old age, don't they? Suicide is, I think, a good indicator that someone is having a bad life.

Suicide rates start at 0.5 in 100,000 for ages 5-14 and rise to about 15 in 100,000 for seniors.
