Comment author: Kawoomba 04 April 2014 10:06:13PM 0 points [-]
Comment author: Armok_GoB 08 April 2014 12:52:19AM 0 points [-]

warning: NSFW

Comment author: Nisan 05 April 2014 03:31:12AM 1 point [-]

Oh, interesting. So just as one should act as if one is Jesus if one seems to be Jesus, one should act as if one cares about world-histories in proportion to their L2 measure if one seems to care about world-histories in proportion to their L2 measure, and one happens to be in a world-history with relatively high L2 measure. And if probability is degree of caring, then the fact that one's world-history obeys the Born rule is evidence that one cares about world-histories in proportion to their L2 measure.

I take it you would prefer option 2 in my original comment, reduce anticipation to UDT, and explain away continuity of experience.

Have I correctly characterized your point of view?

Comment author: Armok_GoB 05 April 2014 06:39:27PM 0 points [-]

Exactly! Much better than I could!

Comment author: Nisan 04 April 2014 03:57:11PM 0 points [-]

Hm, so you're saying that anticipation isn't a primitive, it's just part of one's decision-making process. But isn't there a sense in which I ought to expect the Born rule to hold in ordinary circumstances? Call it a set of preferences that all humans share — we care about futures in proportion to the square of the modulus of their amplitude (in the universal wavefunction? in the successor state to our Everett branch?). Do you have an opinion on exactly how that preference works, and what sorts of decision problems it applies to?
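For concreteness, the "square of the modulus of their amplitude" weighting mentioned here is the standard Born rule; a sketch in conventional notation (this is textbook formalism, not something from the comment itself):

```latex
% Decompose the universal wavefunction into orthonormal branches:
%   |\Psi\rangle = \sum_i a_i \, |i\rangle
% The Born-rule weight (the "degree of caring" / L2 measure under
% discussion) assigned to branch i is the squared modulus of its
% amplitude, normalized over all branches:
w_i \;=\; \frac{|a_i|^2}{\sum_j |a_j|^2}
     \;=\; |\langle i \,|\, \Psi \rangle|^2
     \qquad \text{when } \langle \Psi | \Psi \rangle = 1 .
```

Whether the decomposition should be taken in the universal wavefunction or in the successor state to one's Everett branch is exactly the open question the comment raises.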

Comment author: Armok_GoB 05 April 2014 02:14:03AM 1 point [-]

Induction. You have uncertainty about the extent to which you care about different universes. If it turns out you don't care about the Born rule for one reason or another, the universe you observe is an absurdly (as in probably-a-Boltzmann-brain absurd) tiny sliver of the multiverse; but if you do care about it, it's still an absurdly tiny sliver, just immensely less so. You should anticipate as if the Born rule is true, because if you don't almost exclusively care about worlds where it is true, then you care almost nothing about the current world, and being wrong in it matters relatively little.

Hmm, I'm terrible at explaining this stuff. But the tl;dr is basically that there's this long, complicated reason why you should anticipate and act this way, and thus it's true in the "the simple truth" sense; that's mostly tangential to whether it's "true" in some specific philosophy-paper sense.

Comment author: Nisan 02 April 2014 04:10:34PM 0 points [-]

Hm, so you're saying that if |u> has high probability density in the subspace that contains Bob, then in the near future there must still be high probability density there, or at least nearby. But in fact |u> has very low probability density in Bob's Everett branch. Consider all the accidents of weather and history that led to Bob's birth, not to mention the quantum fluctuations that led to Bob's galaxy being created.

Comment author: Armok_GoB 04 April 2014 02:00:30AM 0 points [-]

You're overextending a hack intuition. "Existence", "measure", "probability density", "what you should anticipate", etc. aren't actually all the exact same thing once you get this technical. Specifically, I suspect you're trying to set the latter based on one of the former, without knowing which one, since you assume they are identical. I recommend learning UDT and deciding what you want agents with your input history to anticipate, or if that's not feasible, just do the math and stop bothering to make the intuition fit.

Comment author: Armok_GoB 22 March 2014 06:33:33PM 1 point [-]

Conversely, any common and overused or commonly misused heuristic can also be used as a fallacy. Absurdity Fallacy, Affect Fallacy, Availability Fallacy. I probably use these far more than the original as-good-heuristic concept.

Comment author: Gunnar_Zarncke 21 March 2014 12:29:18AM 2 points [-]

It is difficult to know which knowledge to preserve, and to what degree. I wish I could have a printed copy of the 'most important' Wikipedia articles.

At least I bought a Wikipedia DVD in case I'm offline, but I'm not sure it will help much to restore civilization in a real disaster.

Some things should be on paper. Some things should be on nickel, like the Long Now Foundation does. But which?

Actually, I once wrote a proposal in the MetaPedia for that, but guess what: I couldn't find it again the last time I looked for it. The whole structure had changed. We have no standardized way to tag digital documents as more (or less) important than other documents (and such a relative classification, which could be automatically resolved to a total order, is much better than any fixed absolute scheme - guess why).

Comment author: Armok_GoB 22 March 2014 06:03:18PM 3 points [-]

Wouldn't something like microfilm make more sense? It's not reliant on a special reader (just include normal-sized instructions for making a crude microscope) and still has decent storage density. Maybe etch it into aluminum and roll it up in giant rolls.

Comment author: Lumifer 20 March 2014 05:31:54PM 5 points [-]

But a human who learns rationality already has those values, and rationality can help them understand those values better, decompartmentalize, and optimize more efficiently.

So? Let's say I value cleansing the Earth of untermenschen. Rationality can indeed help me achieve my goals and "optimize more efficiently". Once you start associating rationality with sets of values, I don't see how you can associate it with only "nice" values like altruism, but not "bad" ones like genocide.

Comment author: Armok_GoB 20 March 2014 07:15:25PM 0 points [-]

Maybe, but at least they'll be campaigning for mandatory genetic screening for genetic disorders rather than killing people of some arbitrary ethnicity they happened to fixate on.

Comment author: Armok_GoB 15 March 2014 04:42:08PM 1 point [-]

While obviously not rigorous enough for something serious, one obvious hack is to do the "0.5 unless proven" thing, and then have a long list of special-case dumb heuristics with different weights that update that probability without any proofs involved at all. The list of heuristics could be gotten from some unsafe source like the programmer, another AI, or Mechanical Turk, and then the weights learned by first guessing and then proving to see if the guess was right, with heuristics that are too bad kicked out entirely.
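The hack described above can be sketched in a few lines. This is a hypothetical toy (the function names, log-odds combination rule, and update rule are my own illustrative choices, not anything specified in the comment): start from a 0.5 prior, let weighted heuristics vote, and tune the weights against whatever statements can actually be proven, dropping heuristics whose weight collapses.

```python
# Toy sketch of "0.5 unless proven" plus weighted heuristic votes.
# Hypothetical design choices: votes are +1/-1, combined in log-odds
# space (0 log-odds = probability 0.5), weights nudged by proof outcomes.
import math

def combine(votes, weights):
    """Combine heuristic votes (+1 = suggests true, -1 = suggests false)
    into a probability, starting from log-odds 0 (i.e. 0.5)."""
    log_odds = sum(w * v for v, w in zip(votes, weights))
    return 1.0 / (1.0 + math.exp(-log_odds))

def update_weights(weights, votes, proven_truth, lr=0.5):
    """Once a proof settles the statement, raise the weight of each
    heuristic that voted with the truth and lower the others.
    Weights are floored at 0; a heuristic stuck at 0 can be kicked out."""
    target = 1 if proven_truth else -1
    return [max(0.0, w + lr * (v * target)) for v, w in zip(votes, weights)]

# Toy run: two heuristics, one reliable, one anti-reliable.
weights = [1.0, 1.0]
for votes, truth in [([+1, -1], True), ([+1, -1], True), ([-1, +1], False)]:
    weights = update_weights(weights, votes, truth)
# The reliable heuristic's weight grows; the bad one is driven to the
# floor and could be "kicked out entirely" from the list.
```

Nothing here is safe or rigorous, matching the comment's caveat; it only illustrates the shape of the guess-then-prove weight-learning loop.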

Comment author: Armok_GoB 14 March 2014 07:38:44PM *  0 points [-]

We spend a lot of time interacting with physical objects. Interacting with things you don't understand is terrifying and painful. It has nothing to do with being fun, nor any practical benefits other than not spending all your time filled with dread and paranoia.

Edit: discovered blatant typical mind fallacy. Leaving it here anyway only with this disclaimer.

Comment author: Armok_GoB 12 March 2014 06:18:35PM 2 points [-]

Xia, in the anvil conversation: "What if you have the AIXI as a Cartesian lump, and teach it that its output can only influence a tiny voltage that various sensitive sensors can sense, and that if the voltage to it is broken, time skips forward until it's reinstated, and give it a clock-tick timeout death prior based on how long the universe has been running rather than how many bits it has outputted? The AI will predict that if it's destroyed, the lump won't be found and the voltage never reapplied until the universe spontaneously ceases to exist a few million years later."
