
Comment author: moshez 04 January 2013 09:00:54PM 1 point [-]

As far as complexity-of-logic-theories-for-reason-of-believing-in-them goes, it should be proportional to the size of the minimal Turing machine that checks whether something is an axiom or not. (Of course, in the case of a finite list, approximating this by the total length of the axioms is reasonable, because the Turing machine that does "check whether the input is equal to one of the following:" followed by the list adds only a constant size on top of the list -- but that approximation breaks down badly for infinite axiom schemas.)
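A minimal sketch of the point (the axiom strings here are made up for illustration): the recognizer for a finite axiom list is just the list itself plus constant-size comparison logic, whereas an infinite schema needs a pattern check, so the "total length of the axioms" approximation no longer applies.

```python
# A "theory" given by a finite axiom list can be recognized by a program
# whose size is roughly the total length of the axioms plus a constant
# (the comparison logic below).
FINITE_AXIOMS = {
    "forall x. x + 0 = x",
    "forall x y. x + S(y) = S(x + y)",
}

def is_axiom_finite(sentence):
    # Constant-size logic wrapped around the literal axiom list.
    return sentence in FINITE_AXIOMS

# An infinite axiom schema (think of induction in first-order PA) cannot
# be listed; the recognizer must check a pattern instead. Toy stand-in:
# any sentence of the form "IND(<formula>)" counts as a schema instance.
def is_axiom_schema(sentence):
    return sentence.startswith("IND(") and sentence.endswith(")")
```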

Comment author: kragensitaker 11 August 2011 08:56:38PM 2 points [-]

Well, but unlike the atom-cooling example, becoming a strict vegetarian doesn't cut off your communication with non-vegetarians.

It does make it more difficult to go to the steakhouse with them, or eat over at their house.

Comment author: moshez 31 December 2012 11:51:57AM 2 points [-]

For eating at people's houses: usually people will have enough side dishes that, if one does not make a big deal of it, one can fill up on the non-meat ones. At worst, there's always bread.

For going to a steakhouse -- yes; but at almost every other place there's usually a vegetarian option, if one tries hard enough.

It does make a good case for being an unannoying vegetarian... but being a strict vegetarian is a useful Schelling point.

Comment author: alex_zag_al 16 September 2012 01:44:18AM *  0 points [-]

That definition does not always coincide with what is described in the article; something can be evidence even if P(X|e) = P(X).

Imagine that two cards from a shuffled deck are placed face-down on a table, one on the left and one on the right. Omega has promised to put a monument on the moon iff they are the same color.

Omega looks at the left card, and then the right, and then disappears in a puff of smoke.

What he does when he's out of sight is entangled with the identity of the card on the right. Change the card to one of a different color and, all else being equal, Omega's action changes.

But, if you flip over the card on the right and see that it's red, that doesn't change the degree to which you expect to see the monument when you look through your telescope. P(monument|right card is red) = P(monument) = 25/51

It does change your conditional beliefs, though, such as what the world would be like if the left card turned out to also be red: P(monument|left is red & right is red) > P(monument|left is red)
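These numbers can be checked by brute-force enumeration (a sketch: I track colors only, distinguish cards by position, and assume Omega builds the monument exactly when the two colors match):

```python
from fractions import Fraction

# A 52-card deck, colors only: 26 red, 26 black.
deck = ["red"] * 26 + ["black"] * 26

# All ordered (left, right) draws of two distinct cards, equally likely.
pairs = [(deck[i], deck[j]) for i in range(52) for j in range(52) if i != j]

def p(event, given=lambda pair: True):
    """Exact conditional probability over the equally likely draws."""
    base = [pr for pr in pairs if given(pr)]
    return Fraction(sum(1 for pr in base if event(pr)), len(base))

monument = lambda pr: pr[0] == pr[1]   # same color => Omega builds it
right_red = lambda pr: pr[1] == "red"
left_red = lambda pr: pr[0] == "red"
both_red = lambda pr: left_red(pr) and right_red(pr)

print(p(monument))                     # 25/51
print(p(monument, given=right_red))    # 25/51 -- zero evidence about the monument
print(p(monument, given=both_red))     # 1 -- but conditional beliefs did change
```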

Comment author: moshez 31 December 2012 11:40:50AM 0 points [-]

Of course e can be evidence even if P(X|e)=P(X) -- it just cannot be evidence *for X*. It can be evidence for Y if P(Y|e)>P(Y), and this is exactly the case you describe. If Y is "there is a monument and the left card is red, or there is no monument and the left card is black", then e is (infinite, if Omega is truthful with probability 1) evidence for Y, even though it is zero evidence for X.
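Extending the same color-only enumeration as a sketch, one can verify that seeing the right card is red is maximal evidence for this Y while being zero evidence about the monument itself:

```python
from fractions import Fraction

# Same toy setup: 26 red, 26 black, ordered draws of two distinct cards.
deck = ["red"] * 26 + ["black"] * 26
pairs = [(deck[i], deck[j]) for i in range(52) for j in range(52) if i != j]

def p(event, given=lambda pair: True):
    base = [pr for pr in pairs if given(pr)]
    return Fraction(sum(1 for pr in base if event(pr)), len(base))

monument = lambda pr: pr[0] == pr[1]   # same color => monument

def Y(pr):
    # "monument and left is red, or no monument and left is black"
    return (monument(pr) and pr[0] == "red") or \
           (not monument(pr) and pr[0] == "black")

right_red = lambda pr: pr[1] == "red"

print(p(Y))                   # 1/2
print(p(Y, given=right_red))  # 1 -- e is (here, maximal) evidence for Y
```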

Similarly, your seeing that your shoelace is untied is zero evidence about my shoelaces...

Comment author: bryjnar 26 December 2012 10:54:39PM 0 points [-]

Absolutely, but it's one that happens in a different system. That can be relevant. And I quite agree: that still leaves some things that are unknowable even by supersmart AI. Is that surprising? Were you expecting an AI to be able to know everything (even in principle)?

Comment author: moshez 26 December 2012 11:32:28PM 0 points [-]

No, it is not surprising... I'm just saying that the semantics is impoverished if you only use finite syntactic proof -- but not to any degree that can be fixed by just being really, really, really smart.

Comment author: bryjnar 25 December 2012 02:11:12PM 1 point [-]

Sure. So you're not going to be able to prove (and hence know) some true statements. You might be able to do some meta-reasoning about your logic to figure some of these out, although quite how that's supposed to work without requiring the context of set theory again, I'm not really sure.

Comment author: moshez 26 December 2012 09:08:23PM 2 points [-]

bryjnar: I think the point is that the metalogical analysis that happens in the context of set theory is still a finite syntactic proof. In essence, all of mathematics can be reduced to finite syntactic proofs inside ZFC. Anything that really, truly requires an infinite proof in actual math is unknowable to everyone, supersmart AI included.

Comment author: incariol 26 December 2012 06:06:31PM 2 points [-]

Given these recent logic-related posts, I'm curious how others "visualize" this part of math, e.g. what do you "see" when you try to understand Goedel's incompleteness theorem?

(And don't tell me it's kittens all the way down.)

Things like derivatives or convex functions are really easy in this regard, but when someone starts talking about models, proofs and formal systems, my mental paintbrush starts doing some pretty weird stuff. In addition to ordinary imagery like bubbles of half-imagined objects, there is also something machine-like in the concept of a formal system -- for example, as if it were imbued with a potential to produce a specific universe of various thingies in a larger multiverse (another mental image)...

Anyway, this is becoming quite hard to describe - and it's not all due to me being a non-native speaker, so... if anyone is prepared to share her mind's roundabouts, that would be really nice, but apart from that - is there a book, by a professional mathematician if possible, where one can find such revelations?

Comment author: moshez 26 December 2012 09:03:41PM 5 points [-]

Here's how I visualize Goedel's incompleteness theorem (I'm not sure how "visual" this is, but bear with me): I imagine the Goedel construction over the axioms of first-order Peano arithmetic. Clearly, in the standard model, the Goedel sentence G is true, so we add G to the axioms. Now we construct G', a Goedel sentence for this new set, and add it as an axiom; then G'', then G''', and so on. Luckily this construction is computable, so at the limit we can add G^w, a Goedel sentence for the union of everything so far. We continue on and on, through ever larger ordinals, until we reach the first non-computable countable ordinal (the Church-Kleene ordinal), at which point we stop, because we now have a non-computable axiom set. Note that Goedel is fine with that -- you can have a complete first-order theory of arithmetic (it would still have non-standard models, but it would be complete!) -- as long as you are willing to live with the fact that you cannot check whether something is a proof with a mere machine (and yes, Virginia, humans are also mere machines).
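The transfinite iteration above can be written out explicitly (standard notation, with G(T) denoting the Goedel sentence of a recursively axiomatizable theory T):

```latex
T_0 = \mathrm{PA}, \qquad
T_{\alpha+1} = T_\alpha \cup \{\, G(T_\alpha) \,\}, \qquad
T_\lambda = \bigcup_{\alpha < \lambda} T_\alpha \quad (\lambda \text{ a limit ordinal}),
```

and the stages remain computably axiomatized exactly for the computable ordinals, i.e. for $\alpha < \omega_1^{\mathrm{CK}}$, the Church-Kleene ordinal.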

Comment author: drethelin 24 December 2012 10:27:20PM 1 point [-]

It would upset me if either of those were primary activities of the LW group in the place I was in.

There is a rather enormous difference between things I care whether LWers do and things I care whether LW does. Some LWers somewhere having rituals doesn't bother me; every LW group deciding rituals are a good idea and adopting them would. I don't think this is actually a big risk, but I think it's worth pointing out, especially since the context is the especially influential NYLW group.

Also, the less strict something is, the less I care whether it's a ritualized regular occurrence. I would much rather come to a song night than to a night for the same specific songs each time, and I would basically never go to a Catan night.

Comment author: moshez 24 December 2012 11:46:37PM 0 points [-]

I'm trying to steelman your arguments as much as I can, but I find myself confused. The best I can do is: "I'm worried that people would find LW communities unwelcoming if they do not go to rituals. Further, I'm worried that rituals are a slippery slope: once we start having rituals, they might become the primary activity of LW and make the experience unwelcoming even if non-ritual activities are explicitly open, because it feels more like 'a Church group that occasionally has secular activities'. I'm worried that this will divide people into those who properly mark themselves as 'LWers' and those who don't, thus starting our entropic decay into a cult."

So far, your objections seem to be to this being the primary activity of the LW group -- an objection which, honestly, I would join. But if a regularly meeting LW group also had a Catan night once a week (for Catan enthusiasts, obviously -- if you don't like Catan, don't come) and a filk night once a month (for filk enthusiasts, again), I am not sure this would hasten a descent into a Catan-only or filk-only group. Similarly, if an LW group has a ritual once a year (or even if every LW group has a ritual, and even if it's the same ritual), it doesn't seem likely that rituals will become the primary thing the group does.

"There is a rather enormous difference between things I care whether LWers do and things I care whether LW does."

I notice I am confused. LessWrong is a web site, and to some extent a community of people, whom I tend to refer to as "Less Wrongers". If you mean these words the same way I do, then I do not understand -- "LW does something" means "the community does something", which means "many members do something". I'm not really sure how "LW does something" is distinguished from LWers doing it...

Comment author: SaidAchmiz 24 December 2012 09:32:06PM *  0 points [-]

I have indeed read it; I've even linked it to other people on this site myself, and taken explicit steps to counteract the effect; see e.g. this post.

I have no problem saying "I agree; you are right and/or this is awesome". This happens to be a topic to which my reaction is otherwise. I think it's especially important to speak up in cases where I disagree and where I think a number of other people also disagree but hesitate to speak.

Comment author: moshez 24 December 2012 10:15:41PM 6 points [-]

Sorry, that's not the context in which I meant it -- I'm sure you're as willing to admit you were wrong as the next rationalist. I meant it in the context of "Barbarians vs. Rationalists" -- if group cohesion is increased by ritual, and group cohesion is useful to the rationality movement, then ritual could be useful. Wanting to dissociate ourselves from the trappings of religion seems like a case of "reversed stupidity" to me...

Comment author: JulianMorrison 11 February 2008 05:44:30AM 0 points [-]

Given that this bug relates to neural structure on an abstract, rather than biological level, I wonder if it's a cognitive universal beyond just humans? Would any pragmatic AGI built out of neurons necessarily have the same bias?

Comment author: moshez 24 December 2012 09:59:54PM 2 points [-]

The same bias to... what? From the inside, the AI might feel "conflicted" or "weirded out" by a yellow, furry, ellipsoid-shaped object, but that's not necessarily a bug: maybe this feeling accumulates and eventually results in creating new sub-categories. The AI won't necessarily get into the argument about definitions, because while part of that argument comes from the neural architecture above, the other part comes from the need to win arguments -- and the evolutionary bias for humans to win arguments would not be present in most AI designs.

Comment author: SaidAchmiz 24 December 2012 09:04:30PM *  1 point [-]

I think it was some variant of the Typical Mind Fallacy, albeit one based not only on my own preferences but on those of my friends (though of course you'd expect that I'd associate with people who have preferences similar to mine, so this does not make the fallacy much more excusable).

I think the main belief I've updated based on this is my estimate on the prevalence of my sort of individualistic, suspicious-of-groups, allergic-to-crowds, solitude-valuing outlook in the Less Wrong community, which I have adjusted strongly downward (although that adjustment has been tempered by the suspicion, confirmed by a couple of comments on this post, that people who object to things such as rituals etc. often simply don't speak up).

I have also been reminded of something I guess I knew but hadn't quite absorbed, which is that, apparently, many people in aspiring rationalist communities come from religious backgrounds. This of course makes sense given the base rates. What I didn't expect is that people would value the ritual trappings of their religious upbringing, and value them enough to construct new rituals with similar forms.

I will also add that despite this evidence that way more people like rituals than I'd have expected, and my adjustment of my beliefs about this, I am still unable to alieve it. Liking ritual, experiencing a need for and enjoyment of collectivized sacredness, is completely alien to me to the point where I am unable to imagine it.

Comment author: moshez 24 December 2012 09:19:49PM 2 points [-]

Thanks! You have already updated, so I'm not sure if you want to update further, but I'm wondering if you had read Why our kind can't cooperate, and what your reaction to that was?
