Um... Where's da Less Wrong IRC at?
(Edit: I hope I'm not getting upvoted for my goofing around over there last night!)
If zero and one aren't probabilities, how does Bayesian conditioning work? My understanding is that a Bayesian has to be certain of the truth of whatever proposition she conditions on when updating.
Zero and one are probabilities. The apparent opposite claim is hyperbole intended to communicate something else, but people on LessWrong persistently make the mistake of taking it literally. For examples of 0 and 1 appearing unavoidably in probability theory, consider P(A|A) = 1 and P(A|~A) = 0. If someone disputes either of these formulae, the onus is on them to rebuild probability theory in a way that avoids them. As far as I know, no one has even attempted this.
But P(A|B) = P(A&B)/P(B) for any positive value of P(B). You can condition on evidence all day without ever needing to assert certainty about anything. Your conclusions will all be hypothetical, of the form "given this prior over A and this evidence B, this is the posterior over A". If the evidence is itself uncertain, that can be incorporated into the calculation, giving conclusions of the form "given this prior over A and this probability distribution over possible evidence B, this is the posterior over A."
If you are uncertain even of the probability distribution over B, then a hard-core Bayesian will say that that uncertainty is modelled by a distribution over distributions of B, which can be...
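For concreteness, here is a minimal Python sketch of the hypothetical-conclusion style of conditioning described above. The prior and likelihoods are made up purely for illustration.

```python
# Minimal sketch of Bayesian conditioning: P(A|B) = P(A & B) / P(B).
# All numbers are illustrative, not drawn from any particular problem.

def posterior(p_a, p_b_given_a, p_b_given_not_a):
    """Return P(A|B) from a prior P(A) and the likelihoods P(B|A), P(B|~A)."""
    p_b = p_b_given_a * p_a + p_b_given_not_a * (1 - p_a)  # P(B), assumed positive
    return p_b_given_a * p_a / p_b                          # P(A & B) / P(B)

# "Given this prior over A and this evidence B, this is the posterior over A."
print(posterior(p_a=0.3, p_b_given_a=0.8, p_b_given_not_a=0.2))  # ~0.632
```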
Yeah, but if your observation does not have a probability of 1 then Bayesian conditionalization is the wrong update rule. I take it this was Alex's point. If you updated on a 0.7 probability observation using Bayesian conditionalization, you would be vulnerable to a Dutch book. The correct update rule in this circumstance is Jeffrey conditionalization. If P1 is your distribution prior to the observation and P2 is the distribution after the observation, the update rule for a hypothesis H given evidence E is:
P2(H) = P1(H | E) P2(E) + P1(H | ~E) P2(~E)
If P2(E) is sufficiently close to 1, the contribution of the second term in the sum is negligible and Bayesian conditionalization is a fine approximation.
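A minimal sketch of the rule as written above, with made-up probabilities; the second call shows that setting P2(E) = 1 recovers ordinary Bayesian conditionalization.

```python
# Jeffrey conditionalization: P2(H) = P1(H|E) * P2(E) + P1(H|~E) * P2(~E).
# The probabilities below are illustrative only.

def jeffrey_update(p1_h_given_e, p1_h_given_not_e, p2_e):
    """New probability of H after an observation that raises E's probability to p2_e."""
    return p1_h_given_e * p2_e + p1_h_given_not_e * (1 - p2_e)

# A 0.7-probability observation of E:
print(jeffrey_update(p1_h_given_e=0.9, p1_h_given_not_e=0.2, p2_e=0.7))  # 0.69
# With P2(E) = 1 this collapses to plain conditionalization, P2(H) = P1(H|E):
print(jeffrey_update(p1_h_given_e=0.9, p1_h_given_not_e=0.2, p2_e=1.0))  # 0.9
```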
Doesn't the paper cited here on acausal romance imply that gains from acausal trade are incoherent?
The fact that I can imagine someone who can imagine exactly me doesn't seem like it implies that I can make material gains by acting in reference to that inaccessible other.
What am I misunderstanding?
People here seem confident that there exists a decision theory immune to blackmail. I see a large amount of discussion of how to make an AI immune to blackmail, but I've never seen it established (or even argued for) that doing so is possible. I think I missed something vital to these discussions somewhere. Could someone point me to it, or explain here?
I'm not aware of a satisfactory treatment of blackmail (in the context of reflective decision theory). The main problem appears to be that it's not clear what "blackmail" is, exactly; that is, how to formally distinguish blackmail from trade.
Yeah, it is if you completely ignore the unique and defining feature of all Pascal's muggings: the conditionality of the reward on your assessed probability... ಠ_ಠ
I kind of wish talk about Newcomb's problem were presented in terms of source code and AI rather than the more common presentation, since I think it's much more obvious what is being aimed at when you think about it this way. Is there a reason people prefer the original version?
If I'm a moral anti-realist, do I necessarily believe that provably Friendly AI is impossible? When defining "friendly", consider Archimedes' Chronophone, which suggests that friendly AI would (should?) be friendly to just about any human who ever lived.
Moral anti-realism: there are no (or insufficient) moral facts to resolve all moral disputes an agent faces.
Is there a better way to read Less Wrong?
I know I can put the Sequences on my Kindle, but I would like to find a way to browse Discussion and Main in a more usable interface (or at least something I can customize). I really like the threading organization of newsgroups, and I read all of my RSS feeds and mail through Gnus in Emacs. I sometimes use the Less Wrong RSS feed in Gnus, but this doesn't let me read the comments. Any suggestions?
Also, if any other Emacs users are interested, I would love to make a lesswrong-mode package. I'm not a very good Lisp hacker, but I think it would be a fun project.
Could someone please explain to me exactly, precisely, what a utility function is? I have seen it called a perfectly well-defined mathematical object as well as not-vague, but as far as I can tell, no one has ever explained what one is, ever.
The words "positive affine transformation" have been used, but they fly over my head. So the For Dummies version, please.
So, in the spirit of stupid (but nagging) questions:
The sequences present a convincing case (to me at least) that MWI is the right view of things, and that it is the best conclusion of our understanding of physics. Yet I don't believe it, because it seems to be in direct conflict with the fact of ethics: if all I can do is push the badness out of my path and into some other path, then I can't see how doing good things matters. I can't change the fundamental amount of goodness; I can just push it around. Yet it matters that I'm good and not bad.
The 'keep y...
because it seems to be in direct conflict with the fact of ethics
Actual answers aside, as a rationalist, this phrase should cause you to panic.
What do you mean by in conflict? Believing one says nothing about the other. You're not "pushing" anything around. If you act good in one set of universes, that is a set of universes made better by your actions. If you act bad in another, the same thing. Acting good does not cause other universes to become bad.
People making decisions are not quantum events. When a photon could either end up in a detector or not, there are branches where it does and branches where it doesn't. But when you decide whether or not to do something good, this decision is being carried out by neurons, which are big enough that quantum events do not influence them much. This means that if you decide to do something good, you probably also decided to do the same good thing in the overwhelming majority of Everett branches that diverge from when you started considering the decision.
The fact that I can reliably multiply numbers shows that at least some of my decisions are deterministic.
To the extent that I make ethical decisions based on some partially deterministic reasoning process, my ethical decisions are not chaotic.
If, due to chaos, I have a probability p of slapping my friends instead of hugging them, then Laplace's law of succession tells me that p is less than 1%.
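A rough sketch of the arithmetic behind that last point, under an assumed track record (the numbers are hypothetical): if I've greeted friends N times without ever slapping them, Laplace's rule of succession estimates the probability of a slap next time as 1/(N+2), which falls below 1% once N is around 100.

```python
# Laplace's rule of succession: after s slaps in n greetings, estimate
# P(slap next time) = (s + 1) / (n + 2). The track record here is assumed.

def laplace(successes, trials):
    return (successes + 1) / (trials + 2)

print(laplace(0, 100))  # ~0.0098, i.e. under 1% after 100 slap-free greetings
```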
The sequences present a convincing case (to me at least) that MWI is the right view of things, and that it is the best conclusion of our understanding of physics.
Just a caution, here. The sequences only really talk about non-relativistic quantum mechanics (NRQM), and I agree that MWI is the best interpretation of this theory. However, NRQM is false, so it doesn't follow that MWI is the "right view of things" in the general sense. Quantum field theory (QFT) is closer to the truth, but there are a number of barriers to a straightforward importation of MWI into the language of QFT. I'm reasonably confident that an MWI-like interpretation of QFT can be constructed, but it does not exist in any rigorous form as of yet (as far as I am aware, at least). You should be aware of this before committing yourself to the claim that MWI is an accurate description of the world, rather than just the best way of conceptualizing the world as described by NRQM.
This article (PDF) gives a nice (and fairly accessible) summary of some of the issues involved in extending MWI to QFT. See sections 4 and 8 in particular. Their focus in the paper is wavefunction realism, but given that MWI (at least the version advocated in the Sequences) is committed to wavefunction realism, their arguments apply. They offer a suggestion of the kind of theory that they think can replace MWI in the relativistic context, but the view is insufficiently developed (at least in that paper) for me to fully evaluate it.
A quick summary of the issues raised in the paper:
In NRQM, the wave function lives in configuration space, but there is no well-defined particle configuration space in QFT since particle number is not conserved and particles are emergent entities without precisely defined physical properties.
A move to field configuration space is unsatisfactory because quantum field theories admit of equivalent description using many different choices of field observable. Unlike NRQM, where there are solid dynamical reasons for choosing the position basis as fundamental, there seems to be no natural or dynamically preferred choice in QFT, so a choice of a particular f
Does anyone else find weapons fascinating? Swords, guns, maces and axes?
I really want a set of fully functional weapons, as objects of art and power. Costly, though.
How much do other people spend on stuff they hang on the wall and occasionally take down to admire?
I've read the majority of the Less Wrong articles on metaethics, and I'm still very very confused. Is this normal or have I missed something important? Is there any sort of consensus on metaethics beyond the ruling out of the very obviously wrong?
Isn't it almost certain that super-optimizing AI will result in unintended consequences? I think it's almost certain that super-optimizing AI will have to deal with its own unintended consequences. Isn't the expectation of encountering intelligence so advanced that it's perfect and infallible essentially the expectation of encountering God?
Isn't the expectation of encountering intelligence so advanced that it's perfect and infallible essentially the expectation of encountering God?
Which god? If by "God" you mean "something essentially perfect and infallible," then yes. If by "God" you mean "that entity that killed a bunch of Egyptian kids" or "that entity that's responsible for lightning" or "that guy that annoyed the Roman empire 2 millennia ago," then no.
Also, essentially infallible to us isn't necessarily essentially infallible to it (though I suspect that any attempt at AGI will have enough hacks and shortcuts that we can see faults too).
Meta: One problem with this thread is that it immediately frames all questions as "stupid". I'm not sure questions should be approached from the perspective of "This point must be wrong since the Sequences are right. How is this point wrong?" Some of the questions might be insightful. Can we take "stupid" out of it?
I think taking the stupid out would make it worse. Making it a stupid questions thread makes it a safe space to ask questions that FEEL stupid to the asker. The point of this thread isn't to enable important critiques of the sequences; it's to make it easier to ask questions when it feels like everyone else already acts like they know the answer or something. There can be other venues for actual critiques or serious questions about how accurate the sequences are.
From the last thread:
Meta: