For eating at people's houses: usually people will have enough side dishes that, if one does not make a big deal of it, one can fill up on the non-meat ones. At worst, there's always bread.
For going to a steakhouse -- yes, but at every other place there's usually a vegetarian option, if one tries hard enough.
It does make a good case for being an unannoying vegetarian...but being a strict-vegetarian is a useful Schelling point.
Of course e can be evidence even if P(X|e)=P(X) -- it just cannot be evidence for X. It can be evidence for Y if P(Y|e)>P(Y), and this is exactly the case you describe. If Y is "there is a monument and left is red or there is no monument and left is black", then e is (infinite, if Omega is truthful with probability 1) evidence for Y, even though it is 0 evidence for X.
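To make this concrete, here is a toy numeric sketch. The monument/left-road setup is abstracted into two independent fair coins (my stand-ins, not part of the original scenario): A plays the role of "left is red", B the role of "there is a monument", and e is Omega truthfully asserting their parity.

```python
from itertools import product
from fractions import Fraction

# Two independent fair coins: A ("left is red"), B ("there is a monument").
# These are hypothetical stand-ins for the X and Y of the comment.
half = Fraction(1, 2)
worlds = [(a, b) for a, b in product([True, False], repeat=2)]
prob = {w: half * half for w in worlds}

def p(event):
    return sum(prob[w] for w in worlds if event(w))

def p_given(event, given):
    return p(lambda w: event(w) and given(w)) / p(given)

X = lambda w: w[0]          # "left is red"
Y = lambda w: w[0] == w[1]  # "monument iff left is red"
e = lambda w: w[0] == w[1]  # Omega (truthful) asserts Y

assert p_given(X, e) == p(X)  # e is zero evidence for X...
assert p_given(Y, e) > p(Y)   # ...but maximal evidence for Y
print(p(X), p_given(X, e), p(Y), p_given(Y, e))
```

Observing e leaves P(X) at 1/2 but pushes P(Y) from 1/2 to 1, which is exactly the "zero evidence for X, infinite evidence for Y" pattern.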
Similarly, you seeing that your shoelace is untied is zero evidence about my shoelaces...
No, it is not surprising... I'm just saying that the semantics are impoverished if you only use finite syntactic proofs, but not to any degree that can be fixed by just being really, really, really smart.
bryjnar: I think the point is that the metalogical analysis that happens in the context of set theory is still a finite syntactical proof. In essence, all mathematics can be reduced to finite syntactical proofs inside of ZFC. Anything in actual math that really, truly requires an infinite proof is unknowable to everyone, supersmart AI included.
Here's how I visualize Goedel's incompleteness theorem (I'm not sure how "visual" this is, but bear with me): I imagine the Goedel construction over the axioms of first-order Peano arithmetic. Clearly, in the standard model, the Goedel sentence is true, so we add G to the axioms. Now we construct G', a Goedel sentence for this new set, and add it as an axiom too. We go on and on: G'', G''', etc. Luckily that whole sequence is computable, so the union is still a computable axiom set, and we add G^w, its Goedel sentence. We continue on and on, until we reach the first uncomputable countable...
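The transfinite bookkeeping here can be sketched as follows (the notation is mine, not canonical):

```latex
T_0 = \mathrm{PA}, \qquad
T_{\alpha+1} = T_\alpha + G_{T_\alpha}, \qquad
T_\lambda = \bigcup_{\alpha < \lambda} T_\alpha \quad \text{for limit } \lambda,
```

where $G_T$ is the Goedel sentence of the theory $T$. Every successor stage, and every limit stage indexed by a computable ordinal, still has a computably enumerable axiom set, so the incompleteness theorem keeps applying; the iteration only runs out of road at the first non-computable countable ordinal (the Church-Kleene ordinal $\omega_1^{CK}$).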
I'm trying to steelman your arguments as much as I can, but I find myself confused. The best I can do is: "I'm worried that people would find LW communities unwelcoming if they do not go to rituals. Further, I'm worried that rituals are a slippery slope: once we start having rituals, they might become the primary activity of LW and make the experience unwelcoming even if non-ritual activities are explicitly open, because it feels more like 'a Church group that occasionally has secular activities'. I'm worried that this will divide people into thos...
Sorry, that's not the context in which I meant it -- I'm sure you're as willing to admit you were wrong as the next rationalist. I mean it in the context of "Barbarians vs. Rationalists" -- if group cohesion is increased by ritual, and group cohesion is useful to the rationality movement, then ritual could be useful. Wanting to dissociate ourselves from the trappings of religion seems like a case of "reversed stupidity" to me...
The same bias to...what? From the inside, the AI might feel "conflicted" or "weirded out" by a yellow, furry, ellipsoid-shaped object, but that's not necessarily a bug: maybe this feeling accumulates and eventually results in creating new sub-categories. The AI won't necessarily get into arguments about definitions, because while part of that argument comes from the neural architecture described above, the other part comes from the need to win arguments -- and the evolutionary bias for humans to win arguments would not be present in most AI designs.
Thanks! You have already updated, so I'm not sure if you want to update further, but I'm wondering if you had read Why our kind can't cooperate, and what your reaction to that was?
I used to have a group of friends (some closer than others), and we would all get together and play Settlers of Catan on a given day of the week (~4 years ago; I don't remember which day it was). It consisted of the "same thing" (obviously the game turned out differently every week, but still) every week. There was not really room for "nonparticipation", in the sense that if you wanted to hang out with these people that day, you played Catan. Would it upset you if you learned that there was a regular meetup of Catan LW enthusiasts who meet ...
Which assumptions generated the incorrect predictions? Are you pulling your Bayesian updates backwards through the belief-propagation network given this new evidence? (In other words: updating on a small-probability event should change your mind about a whole host of related beliefs.)
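A minimal numeric sketch of what I mean, with made-up probabilities: A is "my modeling assumption holds", E is the observed low-probability event, and C is some downstream belief that depends on A (and is otherwise independent of E, by assumption).

```python
from fractions import Fraction as F

# Hypothetical numbers for illustration only.
p_A = F(9, 10)             # prior on the assumption A
p_E_given_A = F(1, 50)     # under A, event E was assigned probability 0.02
p_E_given_notA = F(1, 2)   # without A, E is unsurprising

# Bayes: observing E drags P(A) down...
p_E = p_E_given_A * p_A + p_E_given_notA * (1 - p_A)
p_A_given_E = p_E_given_A * p_A / p_E

# ...and that update propagates backwards to any belief C that
# depends on A, e.g. P(C|A) = 0.8, P(C|not A) = 0.3.
p_C_given_A, p_C_given_notA = F(4, 5), F(3, 10)
p_C_before = p_C_given_A * p_A + p_C_given_notA * (1 - p_A)
p_C_after = p_C_given_A * p_A_given_E + p_C_given_notA * (1 - p_A_given_E)

assert p_A_given_E < p_A        # the assumption itself takes a hit
assert p_C_after < p_C_before   # and so does the related belief C
print(p_A_given_E, p_C_before, p_C_after)
```

Here P(A) drops from 0.9 to 9/34 (about 0.26), which in turn moves P(C) from 0.75 down to about 0.43 -- one surprising observation, a whole host of revised beliefs.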
Thanks for posting the ritual booklet. It's fascinating. With my wife pregnant, I've started looking at things through the eyes of a parent-to-be. Rituals are traditionally a supra-familial thing, but one that includes the whole family. Parents take their kids to Church. Parents light the Menorah with their kids. Parents celebrate the Winter Solstice with their kids. Reading through the booklets, I constantly had to revise upwards the age at which I could first take my daughter to such a gathering. There's no "minimum age" to participate in Church, or the li...
BTW: By the Geneva Convention's standard, Galileo was tortured -- "For the purposes of this Convention, torture means any act by which severe pain or suffering, whether physical or mental, is intentionally inflicted on a person" (notice the "or mental", and it is certainly mental torture to be threatened with physical torture). It seems like at this point we are arguing about definitions, so maybe I'll stop here, but calling the relevant line in "Word of God" false because of that is a bit of an exaggeration.
Is there a new version of the songbook?
If you like spoilers, google "Löwenheim-Skolem" -- the same technique as the proof of the "upwards" part allows you to generate non-standard models of the first-order version of the Peano axioms in a fairly straightforward manner.
It grieves me to note that almost all the arguments in your post could be applied, mutatis mutandis, to why we should teach kids intelligent design as well as evolution.
I am looking forward to the ebooks. I hope you'll provide them in ePub format, for those of us who prefer that. [I was pleased to donate $40, which should soon be matched by my employer as part of the employee-match program, thus getting me double-matched!]
"of all the abilities that humans are granted by their birth this is the one you perform the worst" -- This seems like an odd comparison. Can you really compare my ability to, say, tell stories to 'mind-reading'? It's like comparing my ability to walk to my ability to jump straight up: I can walk for miles, but I can only jump straight up a meter or so -- a 1000:1 ratio -- but I do not feel particularly bad at my ability to jump.
I would definitely believe the AI, but I already believe it, if it said "humans are worse at discerning states of ...
Woo, I found who wrote it. I enjoyed reading it a lot. I liked that the "utopia" showed how utopic utopia can be while still showing the dangers in even slightly badly formed goals.
As far as complexity-of-logic-theories-for-reason-of-believing-in-them goes, that should be proportional to the size of the minimal Turing machine that checks whether something is an axiom. (Of course, in the case of a finite list, approximating it by the total length of the axioms is reasonable, because the Turing machine that does "check if input is equal to the following set:" followed by the set adds only a constant to the set's size -- but that approximation breaks down badly for an infinite axiom schema.)
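A quick sketch of the distinction, with made-up axiom strings (the particular axioms and the `ind[...]` schema syntax are mine, purely illustrative):

```python
import re

# The "complexity" of a theory ~ the size of the smallest recognizer
# for its axiom set.

# Finite list: the recognizer must embed the whole list, so its size is
# roughly the total length of the axioms plus a constant.
FINITE_AXIOMS = {"0 != S(x)", "S(x) = S(y) -> x = y"}

def is_axiom_finite(s: str) -> bool:
    return s in FINITE_AXIOMS

# Infinite schema (think: induction over every formula phi): the
# recognizer pattern-matches the *shape* of an instance, so its size is
# constant no matter how many instances exist -- this is where the
# total-length approximation breaks down.
def is_axiom_schema(s: str) -> bool:
    return re.fullmatch(r"ind\[(.+)\]", s) is not None

assert is_axiom_finite("0 != S(x)")
assert is_axiom_schema("ind[x + 0 = x]")
assert not is_axiom_schema("0 != S(x)")
```

The finite-list recognizer grows with the axioms; the schema recognizer stays a fixed size while accepting infinitely many axioms, which is why the minimal-machine measure and the total-length measure come apart.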