Comment author: humpolec 09 January 2011 03:16:14PM 0 points

There are some theories about the continuation of subjective experience "after" objective death - quantum immortality, or the extension of quantum immortality to Tegmark's multiverse (see this essay by Moravec). I'm not sure taking them seriously is a good idea, though.

Comment author: Tesseract 08 January 2011 06:40:04PM 4 points

I'm pretty sure the point of the lie detector is that it conveys essentially no information. Real lie detectors are notoriously unreliable.

I thought it was a nice touch.

In response to comment by Tesseract on Rationalist Clue
Comment author: humpolec 08 January 2011 06:48:47PM 2 points

I imagine the "stress table" is just a threshold value, and dice roll result is unknown. This way, stress is weak evidence for lying.

6502 simulated - mind uploading for microprocessors

36 humpolec 08 January 2011 06:03PM

Possibly offtopic, but a neat project with interesting analogy to mind uploading:

Some people managed to scan a MOS 6502 microprocessor (used in the Apple II, C64, and NES) with a microscope and simulate it at the level of individual transistors. This neatly circumvents all the problems of inaccurate emulation, unknown opcodes, etc., and even let them run actual Atari 2600 games without knowing anything about the 6502's inner workings.

Presentation slides about the project are here.
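To make the analogy concrete, here is a minimal sketch of switch-level simulation, the general technique the project applied to the recovered 6502 netlist. Everything here is a toy illustration: the netlist format, the `settle` function, and the single-inverter example are hypothetical, not taken from the actual project code.

```python
# A minimal switch-level simulator sketch (hypothetical, not the
# visual6502 project's actual code). NMOS transistors act as
# switches: when the gate node is high, the two channel terminals
# are connected. Node values are recomputed until stable.
from collections import defaultdict

GND, VCC = "gnd", "vcc"

def settle(transistors, pullups, inputs_high):
    """Recompute node values until the network is stable.

    transistors -- list of (gate, c1, c2) NMOS switches
    pullups     -- set of nodes with a resistive pull-up to VCC
    inputs_high -- set of externally-driven nodes held at logic 1
    """
    nodes = {n for t in transistors for n in t} | pullups | {GND, VCC}
    high = set(inputs_high) | {VCC}
    while True:
        # Connect the channel terminals of every transistor that is on.
        adj = defaultdict(list)
        for gate, c1, c2 in transistors:
            if gate in high:
                adj[c1].append(c2)
                adj[c2].append(c1)
        new_high = set(inputs_high) | {VCC}
        seen = set()
        for start in nodes:
            if start in seen:
                continue
            # Flood-fill the group of nodes conducting to `start`.
            group, stack = set(), [start]
            while stack:
                n = stack.pop()
                if n not in group:
                    group.add(n)
                    stack.extend(adj[n])
            seen |= group
            # Ground dominates; otherwise VCC or a pull-up drives high.
            if GND not in group and (VCC in group or group & pullups):
                new_high |= group
        if new_high == high:
            return high
        high = new_high

# A single NMOS inverter: one transistor pulling "out" to ground.
inverter = [("in", "out", GND)]
print("out" in settle(inverter, {"out"}, {"in"}))  # False: input high -> output low
print("out" in settle(inverter, {"out"}, set()))   # True:  input low  -> output high
```

The real netlist has thousands of transistors and also has to model details this sketch ignores (charge storage on isolated nodes, for instance), but the fixed-point loop over a transistor-level net is the core idea.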

Comment author: humpolec 19 December 2010 09:28:35PM 4 points

I considered the existence of Santa definitive proof that the paranormal/magic exists and that not everything in the world is in the domain of science (and was slightly puzzled that adults didn't see it that way).

No conspiracies, but for a long time I was very prone to wishful thinking. I'm not really sure whether believing in Santa actually influenced that. I don't remember finding out the truth as a big revelation, though - it had no effect on my worldview or on my trust in my parents.

(I've been raised without religion.)

Comment author: humpolec 14 December 2010 10:10:12AM *  0 points

I could also imagine that there are no practically feasible approaches to AGI promising approaches to AGI

?

Comment author: Emile 13 December 2010 09:04:07PM *  11 points

Illustration contests

Seriously, the sequences are under-illustrated. The guy running You Are Not So Smart ran a contest for illustrating some of his posts.

Surely we can do something like that! (without it degenerating into lolcats, though come to think of it a lolcat/trollface/rageface explanation of some deep ideas would be pretty awesome)

Comment author: humpolec 13 December 2010 09:15:10PM *  11 points
Comment author: PhilGoetz 14 November 2010 05:28:55PM *  4 points

I would have made it with Eliezer, who has a consequentialist morality but, on account of the consequences, has said he would not break an oath even for the sake of saving the world.

Is there a link to an online explanation of this? When are the consequences of breaking an oath worse than a destroyed world? What did "world" mean when he said it? Humans? Earth? Humans on Earth? Energy in the Multiverse?

But I only trust Alicorn and Eliezer because I've discussed morality with both of them in a situation where they had no incentive to lie

Given that you are still alive, posting, and connected within SIAI and LessWrong (and that they both probably expected as much at the time), I don't think any such situation is possible. I think you're giving them shockingly little credit as rationalists, especially if you've read either Luminosity or Harry Potter and the Methods of Rationality.

Eliezer has proven, empirically, that his reputation is worth at least an amount on the order of the funding that SIAI has received to date - let's say half of it. Every time he opens his mouth, he thus has an incentive of at least several hundred thousand dollars not to damage that reputation. If he did already state that he would not break an oath to save the world, and you offered him a million dollars to go on record as saying that he would lie or break an oath for pragmatic reasons, I'd be surprised if he took you up on it. (But I'd be up for it, if you have the money...)

I don't believe that he's lying. And I don't believe that he's telling the truth. I believe Eliezer may be operating on a level of rationality where we actually need to regard him as a rational agent. And he's playing a game that he expects to go on for many iterations. That implies that, short of fMRI, knowing his intent is impossible.

It might be more rational for him to pretend to be less rational. Therefore, he possibly already has.

Comment author: humpolec 14 November 2010 07:24:01PM 0 points

Is there a link to an online explanation of this? When are the consequences of breaking an oath worse than a destroyed world? What did "world" mean when he said it? Humans? Earth? Humans on Earth? Energy in the Multiverse?

Prices or Bindings

Suppose someone comes to a rationalist Confessor and says: "You know, tomorrow I'm planning to wipe out the human species using this neat biotech concoction I cooked up in my lab." What then? Should you break the seal of the confessional to save humanity?

It appears obvious to me that the issues here are just those of the one-shot Prisoner's Dilemma, and I do not consider it obvious that you should defect on the one-shot PD if the other player cooperates in advance on the expectation that you will cooperate as well.
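For concreteness, the causal case for defection in the one-shot PD, and the tension Eliezer points at, can be written out with the conventional payoff ordering. The specific numbers below are the textbook values (T > R > P > S), not anything from the comment itself:

```python
# One-shot Prisoner's Dilemma with the conventional payoff ordering
# T > R > P > S. Numbers are textbook values chosen only to satisfy
# that ordering.
payoffs = {               # (my move, their move) -> my payoff
    ("C", "C"): 3,        # R: reward for mutual cooperation
    ("C", "D"): 0,        # S: sucker's payoff
    ("D", "C"): 5,        # T: temptation to defect
    ("D", "D"): 1,        # P: punishment for mutual defection
}

# Causal dominance: against either fixed opponent move, D pays more.
for theirs in ("C", "D"):
    assert payoffs[("D", theirs)] > payoffs[("C", theirs)]

# But mutual cooperation still beats mutual defection, which is why
# an agent whose cooperation was predicted (and reciprocated in
# advance) may do better by not defecting.
assert payoffs[("C", "C")] > payoffs[("D", "D")]
print("dominance and dilemma both hold")
```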

Comment author: PhilGoetz 12 November 2010 04:22:34AM 0 points

If joe tries and fails to commit suicide, joe will have the propositions (in SNActor-like syntax)

action(agent(me), act(suicide))
survives(me, suicide)

while jack will have the propositions

action(agent(joe), act(suicide))
survives(joe, suicide)

They both have a rule something like

MWI => for every X, act(X) => P(survives(me, X)) = 1

but only joe can apply this rule. For jack, the rule doesn't match the data. This means that joe and jack have different partition functions regarding the extensional observation survives(joe, X), which joe represents as survives(me, X).

If joe and jack both use an extensional representation, as the theorem would require, then neither joe nor jack can understand quantum immortality.

Comment author: humpolec 12 November 2010 06:55:14AM *  0 points

So you're saying that the knowledge "I survive X with probability 1" can in no way be translated into an objective rule without losing some information?

I assume the rules speak about subjective experience, not about "some Everett branch existing" (so if I flip a coin, P(I observe heads) = 0.5, not 1). (What do probabilities of possible, mutually exclusive outcomes of given action sum to in your system?)

Isn't the translation a matter of applying conditional probability? I.e., P(survives(me, X)) = 1 <=> P(survives(joe, X) | joe's experience continues) = 1

Comment author: Jack 12 November 2010 01:30:32AM 0 points

You have to specify a particular string to look for before you do the experiment.

Comment author: humpolec 12 November 2010 06:45:15AM 0 points

Sorry, now I have no idea what we're talking about. If your experiment involves killing yourself after seeing the wrong string, this is close to the standard quantum suicide.

If not, I would have to see the probabilities to understand. My analysis is like this: P(I observe string S | MWI) = P(I observe string S | Copenhagen) = 2^-30, regardless of whether the string S is specified beforehand or not. MWI doesn't mean that my next Everett branch must be S because I say so.
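Spelling out the arithmetic (the string length of 30 bits is read off from the 2^-30 figure above):

```python
# Probability of observing one particular 30-bit string of quantum
# coin flips. The point of the comment above: it is 2^-30 under
# either interpretation, whether or not the string was written
# down in advance.
n_bits = 30
p_specific_string = 0.5 ** n_bits
print(p_specific_string)  # 9.313225746154785e-10, i.e. 2^-30
```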

Comment author: PhilGoetz 12 November 2010 03:54:56AM 0 points

Isn't that like saying "Under MWI, the observation that the coin came up heads, and the observation that it came up tails, both have probability of 1"?

I have no theories about what you're thinking when you say that.

Comment author: humpolec 12 November 2010 06:32:19AM 0 points

Either you condition the observation (of surviving 1000 attempts) on the observer existing, and you get 1 in both cases, or you don't condition on the observer, and you get p^1000 in both cases. You can't have it both ways.
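The two consistent accountings can be made explicit with toy numbers. Here p and n are hypothetical parameters (per-attempt survival probability and number of attempts); the point is that MWI and Copenhagen agree within each row:

```python
# Scoring the observation "I survived n attempts of quantum
# suicide", with per-attempt survival probability p. Toy numbers;
# both interpretations give the same value within each accounting.
p, n = 0.9, 1000

unconditional = p ** n  # not conditioned on the observer existing
conditional = 1.0       # conditioned on the observer's experience continuing

print(unconditional < 1e-40)  # True: astronomically unlikely from outside
print(conditional)            # 1.0
```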
