All of humpolec's Comments + Replies

How do you even make a quantum coin with 1/googolplex chance?

2lavalamp
I don't know, but testing it is likely to be even harder...
2A1987dM
int main(void) { return 0; } ought to be a close-enough approximation for all practical purposes. :-)

What about your past self? If Night Guy can predict what Morning Guy will do, Morning Guy is effectively threatening his past self.

But... but... Light actually won, didn't he? At least in the short run - he managed to defeat L. I was always under the impression that some of these "mistakes" were committed by Light deliberately in order to lure L.

0gwern
You think Light won? Gosh, you need to read my other essay then, Death Note Ending and especially the final section, http://www.gwern.net/Death%20Note%20Ending#who-won

Is there an analogous experiment for Tegmark's multiverse?

You set up an experiment so that you survive only if some outcome, anticipated by your highly improbable theory of physics, is true.

Then you wake up in a world which is with high probability governed by your theory.

2DanielVarga
The analogous experiment for Tegmark's multiverse is called Permutation City.

If I understand correctly, under MW you anticipate the experience of surviving with probability 1, and under C with probability 0.5. I don't think that's justified.

In both cases the probability should be either conditional on "being there to experience anything" (and equal 1), OR unconditional (equal the "external" probability of survival, 0.5). This is something in between. You take the external probability in C, but condition on the surviving branches in MW.

To go with the TV series analogy proposed by Eliezer, maybe it could be the end of Season 1?

6Raemon
Yeah, that's exactly how I'd think about it. I actually think that usually, books should be turned into TV shows, not movies. A thousand pages of book translates into approximately a thousand minutes, so making a movie requires you to gut the book down to the equivalent of 200-300 pages, whereas making a tv series would allow you to actually flesh things out further, giving the director time to actually do something interesting with the material. I firmly believe Harry Potter should have been a TV show, not a movie. At least from an artistic, if not economic standpoint.

It adds a "friend" CSS class to your friend's username everywhere, so you can add a user style or some other hack to highlight it. There is probably a reason LessWrong doesn't do it by default, though.

I have no familiarity with the Reddit/LessWrong codebase, but isn't this (r2/r2/models/subreddit.py) the only relevant place?

elif self == Subreddit._by_name(g.default_sr) and user.safe_karma >= g.karma_to_post:

So it's a matter of changing that g.karma_to_post (which apparently is a global configuration variable) into a subreddit option (like the ones defined at the top of the file).

(And, of course, applying that change to the database, which I have no idea about, but this also shouldn't be hard...)
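Here's a minimal, self-contained Python sketch of the idea (toy names, not the actual r2 code - only g.karma_to_post, safe_karma and the file above come from the real codebase):

GLOBAL_KARMA_TO_POST = 50  # stands in for the global g.karma_to_post setting

class Subreddit:
    def __init__(self, name, karma_to_post=None):
        self.name = name
        # None means "use the site-wide default", like the per-subreddit
        # options defined at the top of subreddit.py
        self.karma_to_post = karma_to_post

    def can_submit(self, user_karma):
        threshold = self.karma_to_post
        if threshold is None:
            threshold = GLOBAL_KARMA_TO_POST
        return user_karma >= threshold

strict = Subreddit("discussion", karma_to_post=100)
default = Subreddit("main")
print(strict.can_submit(60))   # False
print(default.can_submit(60))  # True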

ETA: Or, if I understand the code correctly, one could just... (read more)

Oh, right. Somehow I was expecting it to be 40 and 0.4. Now it makes sense.

Something is wrong with the numbers here:

The probability that a randomly chosen man survived given that they were given treatment A is 40/100 = 0.2

0bentarm
thanks for pointing this out. Fixed.
0Vaniver
Check the chart - the 40 is a typo. It should be 20/100 = 0.2

There are some theories about continuation of subjective experience "after" objective death - quantum immortality, or an extension of quantum immortality to Tegmark's multiverse (see this essay by Moravec). I'm not sure if taking them seriously is a good idea, though.

I imagine the "stress table" is just a threshold value, and the dice roll result is unknown. This way, stress is weak evidence for lying.

I considered the existence of Santa a definitive proof that the paranormal/magic exists and that not everything in the world is in the domain of science (and was slightly puzzled that the adults didn't see it that way).

No conspiracies, but for a long time I've been very prone to wishful thinking. I'm not really sure if believing in Santa actually influenced that. I don't remember finding out the truth as a big revelation, though - no influence on my worldview or on trust for my parents.

(I've been raised without religion.)

I could also imagine that there are no practically feasible approaches to AGI promising approaches to AGI

?

1multifoliaterose
Fixed, thanks.
2jaimeastorga2000
Great. From this day forth until the day I die, I am going to imagine that every confederate ever is wearing a trollface.

Is there a link to an online explanation of this? When are the consequences of breaking an oath worse than a destroyed world? What did "world" mean when he said it? Humans? Earth? Humans on Earth? Energy in the Multiverse?

Prices or Bindings

Suppose someone comes to a rationalist Confessor and says: "You know, tomorrow I'm planning to wipe out the human species using this neat biotech concoction I cooked up in my lab." What then? Should you break the seal of the confessional to save humanity?

It appears obvious to me that the issue

... (read more)
0PhilGoetz
Thanks! I had read that, but had forgotten about it. Perhaps EY's position makes more sense within timeless decision theory? Since it seems to be based on an absolute requirement for integrity of pre-commitment. On the other hand, he did not express disapproval of sinking the ship to stop the German nuclear bomb. What if Haukelid had had to promise not to harm the ship, in order to get access to it?

So you're saying that the knowledge "I survive X with probability 1" can in no way be translated into an objective rule without losing some information?

I assume the rules speak about subjective experience, not about "some Everett branch existing" (so if I flip a coin, P(I observe heads) = 0.5, not 1). (What do the probabilities of possible, mutually exclusive outcomes of a given action sum to in your system?)

Isn't the translation a matter of applying conditional probability? I.e., P(survives(me, X)) = 1 <=> P(survives(joe, X) | joe's experience continues) = 1

Sorry, now I have no idea what we're talking about. If your experiment involves killing yourself after seeing the wrong string, this is close to the standard quantum suicide.

If not, I would have to see the probabilities to understand. My analysis is like this: P(I observe string S | MWI) = P(I observe string S | Copenhagen) = 2^-30, regardless of whether the string S is specified beforehand or not. MWI doesn't mean that my next Everett branch must be S because I say so.

Either you condition the observation (of surviving 1000 attempts) on the observer existing, and you have 1 in both cases, or you don't condition it on the observer and you have p^1000 in both cases. You can't have it both ways.

What if Tegmark's multiverse is true? All the equivalent formulations of reality would "exist" as mathematical structures, and if there's nothing to differentiate between them, it seems that all we can do is point to the appropriate equivalence class in which "we" exist.

However, the unreachable tortured man scenario suggests that it may be useful to split that class anyway. I don't know much about the Solomonoff prior - does it make sense now to build a probability distribution over the equivalence class and ask what the probability mass of the part that contains the man is?

The reason why this doesn't work (for coins) is that (when MWI is true) A="my observation is heads" implies B="some Y observes heads", but not the other way around. So P(B|A)=1, but P(A|B) = p, and after plugging that into the Bayes formula we have P(MWI|A) = P(Copenhagen|A).
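Spelling that step out (a sketch, assuming for concreteness equal priors on the two interpretations):

P(MWI|A) = P(A|MWI) P(MWI) / [P(A|MWI) P(MWI) + P(A|Copenhagen) P(Copenhagen)]
         = (p * 0.5) / (p * 0.5 + p * 0.5) = 0.5 = P(Copenhagen|A)

Since the likelihoods are equal, the observation doesn't move the posterior at all.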

Can you translate that to the quantum suicide case?

If you observe 30 quantum heads in a row you have strong evidence in favor of MWI.

But then if I observed any string of 30 outcomes I would have strong evidence for MWI (if the coin is fair, "p" for any specific string would be 2^-30).

0Jack
You have to specify a particular string to look for before you do the experiment.
0humpolec
The reason why this doesn't work (for coins) is that (when MWI is true) A="my observation is heads" implies B="some Y observes heads", but not the other way around. So P(B|A)=1, but P(A|B) = p, and after plugging that into the Bayes formula we have P(MWI|A) = P(Copenhagen|A). Can you translate that to the quantum suicide case?

First, I'm gonna clarify some terms to make this more precise. Let Y be a person psychologically continuous with your present self. P(there is some Y that observes surviving a suicide attempt|Quantum immortality) = 1. Note MWI != QI. But QI entails MWI. P(there is some Y that observes surviving a suicide attempt| ~QI) = p.

It follows from this that P(~(there is some Y that observes surviving a suicide attempt)|~QI) = 1-p.

I don't see a confusion of levels (whatever that means).

I still see a problem here. Substitute quantum suicide -> quantum coinflip, ... (read more)

-1Jack
I think that works actually. If you observe 30 quantum heads in a row you have strong evidence in favor of MWI. The quantum suicide thing is just a way of increasing the proportion of future you's that have this information.

The probability that there exists an Everett branch in which I continue making that observation is 1. I'm not sure if jumping straight to subjective experience from that is justified:

If P(I survive|MWI) = 1, and P(I survive|Copenhagen) = p, then what is the rest of that probability mass in Copenhagen interpretation? Why is P(~(I survive)|Copenhagen) = 1-p and what does it really describe? It seems to me that calling it "I don't make any observation" is jumping from subjective experiences back to objective. This looks like a confusion of levels.

ET... (read more)

0Jack
First, I'm gonna clarify some terms to make this more precise. Let Y be a person psychologically continuous with your present self. P(there is some Y that observes surviving a suicide attempt|Quantum immortality) = 1. Note MWI != QI. But QI entails MWI. P(there is some Y that observes surviving a suicide attempt| ~QI) = p. It follows from this that P(~(there is some Y that observes surviving a suicide attempt)|~QI) = 1-p. I don't see a confusion of levels (whatever that means). I don't know if this is the point you meant to make but the existence of these other hypotheses that could imply anthropic immortality definitely does get in the way of providing evidence in favor of Many Worlds through suicide. Surviving increases the probability of all of those hypotheses (to different extents but not really enough to distinguish them).
3red75
It even depends on philosophy - specifically, on which of the following equalities holds.

I survive = There (not necessarily in our universe) exists someone who remembers everything I remember now, plus the failed suicide I'm going to conduct now.

or

I survive = There exists someone who doesn't remember everything I remember now, but who acts as I would have acted if I remembered what he remembers. (I'm not sure whether I correctly expressed the subjunctive mood.)

Flip a quantum coin.

The observation that you survived 1000 good suicide attempts is much more likely under MWI than under Copenhagen.

Isn't that like saying "Under MWI, the observation that the coin came up heads, and the observation that it came up tails, both have probability of 1"?

The observation that I survive 1000 good suicide attempts has a probability of 1, but only if I condition on my being capable of making any observation at all (i.e. alive). In which case it's the same under Copenhagen.

0PhilGoetz
I have no theories about what you're thinking when you say that.
2Jack
The observation is that you're alive. If the Quantum Immortality hypothesis is true you will continue making that observation after an arbitrary number of good suicide attempts. The probability that you will continue making that observation if Quantum Immortality is false is much smaller than one.

Sure, people in your branch might believe you

The problem I have with that is that from my perspective as an external observer it looks no different than someone flipping a coin (appropriately weighted) a thousand times and getting thousand heads. It's quite improbable, but the fact that someone's life depends on the coin shouldn't make any difference for me - the universe doesn't care.

Of course it also doesn't convince me that the coin will fall heads for the 1001-st time.

(That's only if I consider MWI and Copenhagen here. In reality after 1000 coin fli... (read more)

I would say quantum suiciding is not "harnessing its anthropic superpowers for good", it's just conveniently excluding yourself from the branches where your superpowers don't work. So it has no more positive impact on the universe than you dying has.

0Nisan
I think you are correct.

I don't really see what the problem with Aumann's theorem is in that situation. If X commits suicide and Y watches, are there any factors (like P(MWI), or P(X dies|MWI)) that X and Y necessarily disagree on (or on which their agreement would be completely unrealistic)?

0PhilGoetz
If joe tries and fails to commit suicide, joe will have the propositions (in SNActor-like syntax)

action(agent(me), act(suicide))
survives(me, suicide)

while jack will have the propositions

action(agent(joe), act(suicide))
survives(joe, suicide)

They both have a rule something like

MWI => for every X, act(X) => P(survives(me, X)) = 1

but only joe can apply this rule. For jack, the rule doesn't match the data. This means that joe and jack have different partition functions regarding the extensional observation survives(joe, X), which joe represents as survives(me, X). If joe and jack both use an extensional representation, as the theorem would require, then neither joe nor jack can understand quantum immortality.
humpolec-10

That's the problem - it shouldn't really convince him. If he shares all the data and priors with external observers, his posterior probability of MWI being true should end up the same as theirs.

It's not very different from surviving thousand classical Russian roulettes in a row.

ETA: If the chance of survival is p, then in both cases P(I survive) = p, P(I survive | I'm there to observe it) = 1. I think you should use the second one in appraising the MWI...

ETA2: Ok maybe not.

1shokwave
I was actually going off the idea that the vast majority - 100% minus pr(survive all suicides) - of worlds would have the subject dead at some point, so all those worlds would not be convinced. Sure, people in your branch might believe you, but in (100 - 9.3x10^-302) percent of the branches, you aren't there to prove that quantum suicide works. This means, I think, that the chance of you existing to prove to the rest of the world that quantum suicide proves MWI is equal to the chance of you surviving in a non-MWI universe. I was going to say: well, if you had a test with a 1% chance of confirming X and a 99% chance of disconfirming X, and you ran it a thousand times and made sure you presented only the confirmations, you would be laughed at for suggesting that X is confirmed - but it is MWI that predicts every quantum event comes out every result, so only under MWI could you run the test a thousand times - so that would indeed be pretty convincing evidence that MWI is true. Also: I only have a passing familiarity with Robin's mangled worlds, but at the power of negative three hundred, it feels like a small enough 'world' to get absorbed into the mass of worlds where it works a few times and then they actually do die.
2PhilGoetz
No; I think you're using the Aumann agreement theorem, which can't be used in real life. It has many exceedingly unrealistic assumptions, including that all Bayesians agree completely on all definitions and all category judgements, and all their knowledge about the world (their partition functions) is mutual knowledge. In particular, to deal with the quantum suicide problem, the reasoner has to use an indexical representation, meaning this is knowledge expressed by a proposition containing the term "me", where me is defined as "the agent doing the reasoning". A proposition that contains an indexical can't be mutual knowledge. You can transform it into a different form in someone else's brain that will have the same extensional meaning, but that person will not be able to derive the same conclusions from it, because some of their knowledge is also in indexical form. (There's a more basic problem with the Aumann agreement theorem - when it says, "To say that 1 knows that 2 knows E means that E includes all P2 in N2 that intersect P1," that's an incorrect usage of the word "knows". 1 knows that E includes P1(w), and that E includes P2(w). 1 concludes that E includes P1 union P2, for some P2 that intersects P1. Not for all P2 that intersect P1. In other words, the theorem is mathematically correct, but semantically incorrect; because the things it's talking about aren't the things that the English gloss says it's talking about.)
1Nisan
Indeed, the anthropic principle explains the result of quantum suicide, whether or not you subscribe to the MWI. The real question is whether you ought to commit quantum suicide (and harness its anthropic superpowers for good). It's a question of morality.
0humpolec
Related (somewhat): The Hero With A Thousand Chances.

Quantum immortality is not observable. You surviving a quantum suicide is not evidence for MWI - no more than it is for external observers.

2Risto_Saarelma
What about me surviving a thousand quantum suicides (with neglible odds of survival) in a row?
1Thomas
If it's not observable, then what difference does it make?

600 or so interlinked documents

I was thinking more of a single, 600-chapter document.

(Actually this is why I think Sequences are best read on a computer, with multiple tabs open, like TVTropes or Wikipedia - not on an e-reader. I wonder how Eliezer's book will turn out...)

PDFs are pretty much write-only, and in my experience (with Adobe Acrobat-based devices) reflow never works very well. As long as you use a sane text-based ebook format, Calibre can handle conversion to other formats.

So I recommend converting into EPUB - or, if not EPUB, then maybe just clean HTML (with all the links retained; readers that support HTML should have no problems with links between file sections).
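(For what it's worth, Calibre's command-line converter can do this directly - a sketch with a made-up file name:

ebook-convert sequences.html sequences.epub

The GUI conversion does the same thing.)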

2Vladimir_Golovin
Yes, some readers (e.g. Pocketbooks) can handle HTML, but even the latest Sony readers cannot. Kindle does have HTML support "via conversion" but I don't know if it can correctly convert 600 or so interlinked documents.

Your "strong/weak scientific" distinction sounds like it's more about determinism than reductionism.

According to your definitions, I'm a "strong ontological reductionist", and "weak scientific reductionist" because I have no problem with quantum mechanics and MWI being true.

Since there is no handy tool to create polls on LW

I often see polls in comments - "upvote this comment if you choose A", "upvote this if you choose B", "downvote this for karma balance". Asking for replies probably gives you fewer answers but more accuracy.

Isn't there some form of Twin Prisoner's Dilemma here? Not in the payoffs, but in the fact that you can assume your decision (to vote or not) is correlated to some degree with others' decisions (which it should be if you, and some of them, make that decision rationally).

1nshepperd
To my mind this is the most compelling reason to vote. If you're rational and you want more rational people to vote, then you should vote, because then they will too (assuming a reasonable number of rational people have similar relevant information to your own).
2steven0461
To make things worse, if there's a multiverse, if you're correlated with some people in the same universe as you then you're also correlated with a humongous number of people in different universes (far more humongous than the number of actual copies of you).

I was referring to the idea that complex propositions should have lower prior probability.

Of course you don't have to make use of it, you can use any numbers you want, but you can't assign a prior of 0.5 to every proposition without ending up with an inconsistency. To take an example that is more detached from reality - there is a natural number N you know nothing about. You can construct whatever prior probability distribution you want for it. However, you can't just assign 0.5 to every possible property of N (for example, P(N > 10) = 0.5).
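A toy illustration of why that fails (the particular properties are just ones I made up):

# Assigning probability 0.5 to every statement about an unknown natural
# number N is incoherent: mutually exclusive statements then sum to more than 1.
props = ["N < 10", "10 <= N < 20", "20 <= N < 30"]  # pairwise mutually exclusive
print(sum(0.5 for _ in props))                       # 1.5 > 1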

0Emile
On the other hand it has been argued that the prior of a hypothesis does not depend on its complexity. There can also be problems with using priors based on complexity; for example the predicates "the number, executed as a computer program, will halt" and "the number, executed as a computer program, will not halt" are both quite complex, but are mutually exclusive, so priors of 50% for each seem reasonable. Assigning 0.5 to any possible property of N is reasonable as long as you don't know anything else about those properties - if in addition you know some are mutually exclusive (like in your example), you can update your probabilities accordingly. But in any case, the complexity of the description of the property can't help us choose a prior.

Prior probability is what you can infer from what you know before considering a given piece of data.

If your overall information is I, and the new data is D, then P(H|I) is your prior probability and P(H|DI) your posterior probability for hypothesis H.
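(Bayes' theorem connects the two: P(H|DI) = P(D|HI) P(H|I) / P(D|I).)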

No one says you have to put exactly 0.5 as prior (this would be especially absurd for absurd-sounding hypotheses like "the lady next door is a witch, she did it".)

1Emile
If we distinguish between "previous information" and "new information", then yes. In this case, the OP made no such distinction, so I can only assume his use of "prior" means "previous to all information we know" (I is nothing - an uninformative prior). By the way, I don't really see a problem with starting with a prior of 0.5 for "the lady next door is a witch" (which could be formalized as "the lady next door has powers of action at a distance that break the laws of physics as we know them") - more generally, it's reasonable to have a prior of 50% for "the lady next door is a flebboogy", and then update that probability based on what information you have on flebboogies (for example, if despite intensive search, nobody has been able to prove the existence of a single flebboogy, your probability will fall quite low). However, taking a 50% probability of "she did it" (assuming only one person did "it") wouldn't be a good prior; a better one would be a probability of 1/N for each human, where N is the number of humans on Earth. Again, this estimate will wildly vary as you take more information into account, going up if the person that did "it" must have been nearby (and not a teenager in Sri Lanka), going down if she couldn't possibly have done "it" without supernatural powers. Anyway, I don't think the OP was really asking about priors; he probably meant "how do we estimate the probability that a given war is just".

"Why are you upside down, soldier?"

I'm actually a MoR fan, and I've found it both entertaining and (at times) enlightening.

But I think a "beginning rationalist's" time is much better spent studying philosophy, critical thinking, probability theory, etc. than writing fanfiction (even if the latter would be useful in small doses).

0jasonmcdowell
That sounds true. How does one figure out the best order in which to learn these things? With math, there are some areas which depend strongly on previous studies (arithmetic before algebra). Simple geometry doesn't necessarily depend on either arithmetic or algebra though (does it?). I wish we had a breakdown of various sub-disciplines used for rationality.
humpolec-10

Look at the recently posted reading list. Pick some stuff, study and discuss. If you have a good "fighting spirit" and desire to become stronger, don't waste it on writing fanfiction...

2jasonmcdowell
Fanfiction may not be the most rigorous kind of practice, but it exercises different mental muscles than more formal discussions. Writing fiction lets you exercise your creativity more than a formal discussion would, and it should be a great testbed for creating parables and becoming a better writer. Well-crafted stories are much more accessible to people who are at the beginning of a learning curve. If your goal is to bring them with you, rather than exploring the unknown (alone or in small groups), then fiction is a great tool. I've never written anything myself so I don't have the experience of how writing fiction affects the author, but I've loved reading Harry Potter and the Methods of Rationality. It feels like part of a balanced information diet.

I see here a Newcomb-like situation, but in the reverse direction - the fire department didn't help the guy out to counterfactually make him pay his $75.

To me this distinction is what makes consciousness distinct and special. I think it is a fascinating consequence of a certain pattern of interacting systems. Implying that conscious feelings occur all over the place, perhaps every feedback system is feeling something.

This sounds like the point Pinker makes in How the Mind Works - that apart from the problem of consciousness, concepts like "thinking" and "knowing" and "talking" are actually very simple:

(...) Ryle and other philosophers argued that mentalistic terms such as

... (read more)

Badly formulated question. I think "consciousness" as subjective experience/ability of introspection/etc. is a concept we all intuitively know (from one example, but still...) and more or less agree on. Do you believe in the color red?

What's under discussion is whether that intuitive concept is possible to be mapped to a specific property, and on what level. Assuming that is the question, I believe a mathematical structure (algorithm?) could be meaningfully called conscious or not conscious.

However, I wouldn't be surprised if it could be "d... (read more)

AFAIK some people subvocalize while reading, some don't. Is this preventing you from reading quickly?

(I've heard claims that eliminating subvocalization is the first step to faster reading, although Wikipedia doesn't agree. I, as far as I can tell, don't subvocalize while reading (especially when reading English text, in which I don't strongly link words to pronunciation), and although I have some problems with concentration, I still read at about 300 WPM. One of my friends claims ve's unable to read faster than speech due to subvocalization).

0mindspillage
I know that I subvocalize when reading at least sometimes, and usually when I am beginning to read a text. I believe that what I think of as the point where I "get really into" a book means the point where I stop subvocalizing. (I can read 2-3000 WPM--huge variance depending on what kind of text it is, and it's usually more pleasant not to push so hard and read slower.) I don't know how to consciously stop subvocalizing, though; it just happens.

I don't know how we could overcome the boundary of subjective first-person experience with natural language here. If it is the case that humans differ fundamentally in their perception of outside reality and inside imagination, then we might simply misunderstand each other's definitions and descriptions of certain concepts and eventually come up with the wrong conclusions.

While it does sound dangerously close to the "is my red like your red" problem, I think there is much that can be done before you leave the issue as hopelessly subjective. Your ... (read more)

I suspect such visualisation is not a binary ability but a spectrum of "realness", a skill you can be better or worse at. I don't identify with your description fully, I wouldn't call what my imagination does "entering the Matrix", but in some ways it's like actual sensory input, just much less intense.

I also observed this spectrum in my dreams - some are more vivid and detailed, some more like the waking level of imagination, and some remain mostly on the conceptual level.

I would be very interested to know if it's possible to improve your imagination's vividness by training.

2[anonymous]
What just came to my mind is: what if those people who allegedly have a reduced ability to imagine realness actually have a heightened ability to experience reality? That is, what if what I describe as the ability to simulate what I experience consciously through sensory input while awake and engaging with my environment would be deemed as dull and abstract, not in any way corresponding to the reality you experience? The people who claim to be able to use their mind's eye to resurrect the experienced and fantasize might simply have a very primitive ability to experience real-time sensory input and therefore put their rather abstract imagination on the same level. I don't know how we could overcome the boundary of subjective first-person experience with natural language here. If it is the case that humans differ fundamentally in their perception of outside reality and inside imagination, then we might simply misunderstand each other's definitions and descriptions of certain concepts and eventually come up with the wrong conclusions. Reminds me of the most fascinating post, The Strangest Thing An AI Could Tell You.

"In the 5617525 times this simulation has run, players have won $664073 And by won I mean they have won back $664073 of the $5617525 they spent (11%)."

Either it's buggy or there is some tampering with data going on.

Also, several Redditors claim to have won - maybe the simulator is just poorly programmed.

4jimrandomh
It's an integer overflow - it wraps around at either 2^31, 2^32/100, or 2^32. I wasn't patient enough to refresh the page enough times to figure out which.
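A quick Python sketch of that effect (the real simulator presumably isn't Python, and 2^32 is just one of the candidate wrap points):

MODULUS = 2**32

def add_32bit(total, amount):
    # emulate a 32-bit unsigned counter; Python ints don't overflow on their own
    return (total + amount) % MODULUS

true_total = 0
wrapped_total = 0
for _ in range(6_000_000):             # on the order of the 5,617,525 plays reported
    true_total += 1000                  # pretend each play moves 1000 units through the counter
    wrapped_total = add_32bit(wrapped_total, 1000)

print(true_total)      # 6000000000
print(wrapped_total)   # 1705032704 - wrapped around, far smaller than the true sum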

Let's make them wear hooded robes and call them Confessors.

I'm not sure if non-interference is really the best thing to precommit to - if we encounter a pre-AI civilization that still has various problems, death etc., maybe what {the AI they would have built} would have liked more is for us to help them (in a way preserving their values).

If a superintelligence discovers a concept of value-preserving help (or something like CEV?) that is likely to be universal, shouldn't it precommit to applying it to all encountered aliens?

0Vladimir_Nesov
See also: Not Taking Over the World.
0Scott Alexander
Voted up. It is a mortal sin to come up with responses to a valid objection I didn't anticipate to make my original theory true anyway, but for the purposes of fiction: ...perhaps the AI doesn't know whether it will be Friendly or not. The only thing the AI knows for certain is that it will further its values for it to exist. A "helpful" alien race that grants us super-technology might prevent us from accidentally creating an Unfriendly AI. Therefore, all Unfriendly AIs would regard pre-committing to helping their creator species as an acausal declaration of war.
humpolec140

For example, you might have some herb combination that "restores HP", but whenever you use it, you strangely lose HP that more than cancels what it gave you.

What if instead of being useless (by having an additional cancelling effect), magical potions etc. had no effect at all? If HP isn't explicitly stated, you can make the player feel like he's regaining health (e.g. by some visual cues), but in reality he'd die just as often.
