
Comments

There's actually no need to settle for finite truncations of a decision agent. The unlosing decision function (on lotteries) can be defined in first-order logic, and your proof that there are finite approximations of a decision function is sufficient to use the compactness theorem to produce a full model.

[This comment is no longer endorsed by its author]
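A rough sketch of that compactness argument (my reconstruction, not part of the original comment): let $T$ be the first-order theory whose axioms say that $\preceq$ is a total, transitive preference relation on lotteries, together with one unlosing constraint for each finite set of lotteries. Any finite $T_0 \subseteq T$ mentions only finitely many lotteries, so one of the finite approximations already supplies a model of $T_0$. The compactness theorem then gives a model of all of $T$, i.e. a full unlosing decision function:

$$\big(\forall\, T_0 \subseteq_{\mathrm{fin}} T,\ \exists\, \mathcal{M}_0 \models T_0\big) \;\Longrightarrow\; \exists\, \mathcal{M} \models T.$$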

I've just made an enrollment deposit at the University of Illinois at Urbana-Champaign, and I'm wondering if any other rationalists are going, and if so, would they be interested in sharing a dorm?

Perhaps instead of immediately giving up and concluding that it's impossible to reason correctly with MWI, it would be better to take the Born rule at face value as a predictor of subjective probability.
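For concreteness, the face-value reading here is just the textbook Born rule: for a state $|\psi\rangle$ measured in an orthonormal basis $\{|i\rangle\}$, treat

$$P(i) = |\langle i|\psi\rangle|^2$$

as the subjective probability of finding yourself in the branch where outcome $i$ occurred.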

The AI is a program. Running on a processor. With an instruction set. Reading the instructions from memory. These instructions are its programming. There is no room for acausal magic here. When the goals get modified, the modification is carried out by a computer running code.

Consider indicating that your post contains spoilers.

Got it. I was previously having difficulty making that belief pay rent.

I've also heard that for soldiers, seeing one more death or injury can be the tipping point into PTSD.

Am I missing something, or does this follow trivially from PTSD being binary and the set of possible body counts being the natural numbers?
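Spelling out that triviality (my addition, under the stated assumptions): let $S = \{\, n \in \mathbb{N} : \text{PTSD is present after } n \text{ deaths} \,\}$. If $S$ is nonempty and PTSD does not occur at a count of zero, then by the well-ordering of $\mathbb{N}$ it has a least element $n_0 \ge 1$, and since $n_0 - 1 \notin S$, the $n_0$-th death is exactly the "one more" that tips things over.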

I'm a new user with -1 karma who therefore can't vote, so I'll combat censorship bias like this:

Moderate programmer, correct

Yes
