"However, yields an even better joke (due to an extra meta level) when preceded by its quotation and a comma", however, yields an even better joke (due to an extra meta level) when preceded by its quotation and a comma.
"Is a even better joke than the previous joke when preceded by its quotation" is actually much funnier when followed by something completely different.
It's nice. It reminds me of home.
I mean, not exactly. Your world seems more advanced than we were in a lot of ways. Your shadarak seem to outclass our kvithion elith - although I'm just basing this on your assertion that you weren't up to shadarak level plus my opinion that you would have made an excellent Priest of Truth - maybe 70th, 80th percentile.
And then in other things it seems so primitive - high-level corruption, or at least a media that profits off of scaring people into thinking so, and which seems to discuss politics in the vernacular.
And then other things just seem silly. LOL at the giant worldwide skycrane grid when you could have just invented marginally better yurts (and the immense shelflike-treelike skyscraper-frames that allow yurt-sites to be stacked dozens high in dense areas with high land value). As the proverb says, "once you go yurt, you'll never revert".
But the part that hit home (no pun intended) for me the most was your feeling of "why me?". Like, if someone who actually knew the Risurion-silk backwards and forwards had ended up on Earth, they could have rewritten the important parts from memory, people could have filled it in from there, and then it would have been smooth sailing - we'd probably have ended up properly manifesting God by this point.
(For a while I half-toyed with the idea that Derek Parfit was that person, but from what I could understand of Reasons and Persons and what I could understand of the Risurion-silk, it didn't seem like a good fit.)
But I have to hand it to you. Whatever you think your handicaps might have been, you've done a pretty awesome job creating an oasis of sanity with this community, someplace where people from any at-least-marginally competent dimension can go and feel at least sort of at home.
But the weird thing is that, reading this comment thread, I am starting to get the feeling that there are some people here who aren't from any other dimension at all. I mean, I thought we just never talked about it, on account of the decision-theoretic reasons and meta-level concerns. But now I'm starting to consider it possible that many or even most of the commenters on this site, even some of the ones I really respect, actually grew up here.
That would be both really impressive and a little scary.
It seems like both of you just want everyone to use efficient RVs.
Perhaps a travelling Less Wrong fleet?
Irrationality Game: Less Wrong is simply my Tyler Durden - a dissociated digital personality concocted by my unconscious mind to be everything I need it to be to cope with Camusian absurdist reality. 95%.
I am very curious what evidence you have, or would have, to back up this proposition.
Others have said this in person; I'll fix both things. Thanks for the feedback!
(I'm used to blogging for a very different audience with short attention spans, a desire for constant entertainment, and a great fear of large blocks of text.)
Okay, this is weird, but the first thing that popped into my head when you mentioned that there were images that used to be in this article was an image of a pony, vaguely Pinkie Pie-looking. (Being aware of one's own cognition is weird.)
I don't even watch My Little Pony or participate in its community. Now I'm starting to wonder if it has evolved into some sort of toxic meme that inserts itself in place of generic forms of things.
I would be interested to see if other readers could come up with a more eye-catching description/slogan.
A community blog with the purpose of refining the practice of rational behavior?
Eliminates human bias, doesn't imply that rationality is an 'art', and proclaims itself teleologically rather than ontologically.
I think I am currently in this state. (The inducing factor was probably going to a science fiction convention; I'm not sure why this is weirdly inspirational.) Does anybody have a roundup of appropriate posts somewhere?
Can you imagine Harry killing Hermione because Voldemort threatened to plague all sentient life with one barely noticed dust speck each day for the rest of time? Can you imagine killing your own best friend/significant other/loved one to stop the powers of the Matrix from hitting 3^^^3 sentient beings with nearly inconsequential dust specks? Of course not. No. Snap decision.
My breaking point would be about 10 septillion people, which is far, far less... no, wait, that's for a single-event dust speck.
What's your definition of all sentient life? Are we talking Earth, observable universe, or what? What's 'the rest of time'?
3^^^3 is so large that claims on this order of magnitude are hard to judge. See Pascal's Muggle for a discussion of this.
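(For a sense of scale, here is a quick expansion using standard Knuth up-arrow notation; the expansion is mine, not from the original comment:

\[ 3\uparrow\uparrow\uparrow 3 \;=\; 3\uparrow\uparrow(3\uparrow\uparrow 3) \;=\; 3\uparrow\uparrow 3^{3^{3}} \;=\; 3\uparrow\uparrow 7{,}625{,}597{,}484{,}987 \]

i.e., a power tower of 3s stacked 7,625,597,484,987 levels high. For comparison, the 10 septillion figure above is only \(10^{25}\).)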
That is indeed my concern. If CFAR can't avoid a Jerry Sandusky/Joe Paterno type scenario (which I consider it reasonably probable it is capable of, given that one of its founders wrote HPMOR), then it is literally a horrendous joke and I should be allocating my contributions somewhere more productive.
This confuses me. First of all, the probability of such a scenario is tiny (how many universities have the exact same complete lack of safeguards and transparency, and how many of those had an international scandal?). Second, the difference between writing HPMOR and being associated with one of the most prominent universities in the US seems pretty large. A small point that does back up your concerns somewhat: it may be worth noting that the SI did have a serious embezzlement problem early on. But the difference between "has an unmoderated IRC forum where people say hateful stuff" and a massive cover-up of a decade-long pedophilia scandal seems pretty clear. Finally, the potential inability to deal with an unlikely scandal, even if one had evidence for it, isn't a reason to think they are incompetent in other ways.
Frankly, as an outside observer, it seems to me that your reaction is more likely connected to the simple fact that these were pretty disgusting statements that can easily trigger a large emotional reaction. But this website is devoted to rationality, and its name is Less Wrong. Increasing the world's total existential risk because a certain person, who isn't even an SI higher-up or anything similar, said some hateful things is not a rational move.
A list of outcomes possible in the future (in order of my preference):
1. We create an AI which corresponds to my values.
2. Life on Earth persists under my value set.
3. Life on Earth is totally exterminated.
4. Life on Earth persists under its current value set.
5. We create an AI which does not correspond to my values.
If LW is not trying to eradicate the scourge of transphobia, then clearly SIAI has moved from outcome 1 to outcome 5, and I should be trying to dismantle it rather than fund it.
Is your true rejection to funding CFAR or SIAI that they don't have a policy in place for the forum affiliated with them? I'm having a hard time picturing the value system which says "AI risk is the most important place for my charitable dollars, and SIAI is well-poised to turn additional donated dollars into lowered AI risk, but donations should go elsewhere until they alter the policy on their associated internet forum so that a user apologizes for trans-unfriendly comments made offsite."
He could instead mean something closer to "AI risk seems to be an important contribution for charitable dollars, but the SIAI's lack of careful control and moderation of their own fora even given its potential PR risk makes me question whether they are competent enough or organized enough to substantially help deal with AI risk."
But I suspect the value system in question here is actually one where charity is intertwined with signaling and buying fuzzies. In that context, not giving charity to an organization that has had some connection to an individual who says disgusting things (or low-status things) makes sense.
Can you be slightly more specific about the context? Like, at least the general fields of study it might apply to? This would allow us to make an informed decision.