I'm trans (MTF) and bi leaning a little bit toward gay.
I'm better with non-real-time communication than with things like IM, but in any case feel free to PM me if you'd like to ask anything.
I keep scratching my head over this comment made by Vladimir Nesov in the discussion following “A Rationalist’s Tale”. I suppose it would be ideal for Vladimir himself to weigh in and clarify his meaning, but because no objections were really raised to the substance of the comment, and because it in fact scored nine upvotes, I wonder if perhaps no one else was confused. If that’s the case, could someone help me comprehend what’s being said?
My understanding is that it’s the LessWrong consensus that gods do not exist, period; but to me the comment seems to imply that magical gods do in fact exist, albeit in other universes… or something like that? I must be missing something.
"Magical gods" in the conventional supernatural sense generally don't exist in any universes, insofar as a lot of the properties conventionally ascribed to them are logically impossible or ill-defined, but entities we'd recognize as gods of various sorts do in fact exist in a wide variety of mathematically-describable universes. Whether all mathematically-describable universes have the same ontological status as this one is an open question, to the extent that that question makes sense.
(Some would disagree with referring to any such beings as "gods", e.g. Damien Broderick who said "Gods are ontologically distinct from creatures, or they're not worth the paper they're written on", but this is a semantic argument and I'm not sure how important it is. As long as we're clear that it's probably possible to coherently describe a wide variety of godlike beings but that none of them will have properties like omniscience, omnipotence, etc. in the strongest forms theologians have come up with.)
Unless there are two horseshoe quotes, this one seems to be disputed:
I (or someone) should update that page; the earliest source of the horseshoe story that I know of is from a 1927 essay by Heisenberg:
Niels closed the conversation with one of those stories he liked to tell on such occasions: "One of our neighbors in Tisvilde once fixed a horseshoe over the door to his house. When a mutual acquaintance asked him, 'But are you really superstitious? Do you honestly believe that this horseshoe will bring you luck?' he replied, 'Of course not; but they say it helps even if you don't believe it.'"
Edit: Actually, that date is almost certainly wrong; the essay refers to a conference that took place in 1927, so it probably wasn't given there. The earliest Google Books result for this quote is Heisenberg's 1969 autobiography, though, so that's still earlier and more authoritative than any of the sources given on the Wikiquote page.
I think the Master Plan is to mostly leave religion out of the books he's writing instead, or at least out of one of them. Anyone else remember reading something along these lines?
112 in 5 windows.
I regularly let tabs proliferate until I have a dozen windows open and hundreds of tabs between them and the browser gets so slow that I have to restart it or it just crashes. When this happens, I usually don't feel like waiting for hundreds of tabs to reload, so I move the saved browser session aside, telling myself I'm just making a "temporary" new session, and then the same thing happens and I never revisit any of my past sessions. I have browser session files dating back to… 2007? That can't be right, this has been going on for way longer than that… maybe I have the older ones on a backup somewhere.
Anyway, I think I have a problem.
I had read Overcoming Bias very sporadically, without really keeping track of authors or reading things in sequence, for a couple of years before I found LW through it around July or August 2009 (at which point I started reading more systematically).
Earlier in 2009 I had read "The Singularity Is Near", which was my first interaction with transhumanism/singularitarianism, and I was an excited Kurzweilian for a bit, which probably primed me to be particularly interested when I found out that the most prolific blogger on these awesome blogs of victory was the cofounder of something called the Singularity Institute.
If you invent a breakthrough in artificial intelligence, so machines can learn, that is worth 10 Microsofts.
I can only assume he wasn't actually talking about an AGI-level breakthrough. I don't think I'd expect him to underestimate the impact or value of AGI that severely.
The comment indicating embarrassment seems to suggest a norm.
I assumed that was more based on cultural norms than LW norms. Generally people don't discuss their IQs in polite company (or potentially-high-variance-IQ company, maybe), especially high IQs, because of the risk of being seen as bragging about something that other people may not view as high-status. In discussions outside LW I've heard people be somewhat condescending toward people who even admit to having gotten their IQs tested, as it's often associated with intellectual pretension. (And, in turn, being seen as claiming high status in a way that actually marks one as low-status is associated with social unawareness.)
Are we as a community setting up social norms against knowledge now?
One (currently slightly downvoted) comment doesn't seem like much of an indicator of a growing community social norm. Does anything else give you that impression?
Fair point. I can't do much about the fact that the desired untraceability doesn't survive a few minutes casual googling, but I can avoid being personally involved with the fact. Retracted.
You didn't delete the comment though, it's still visible.