...I was probably wrong(?); could you explain what you mean, please?
Those stories aren't about anything other than themselves and the rules/process/structure of storytelling. I felt they could match your request for something sufficiently meta.
Typical superheroes act like lymphocytes, while the bad aliens they fight are antigens with no real planning beyond 'stake ground & multiply'; police in the stories are like interferon, and mass media are highly specialized, short-lived antibodies, while the public shows general symptoms of inflammation if the situation spreads. Looping!days are malaria-like illnesses, apocalypses are... okay, there are really too many options... and mind control is, more or less, AIDS. Traveling between different universes is contagion.
Now, are there any stories sufficiently meta to be strictly epidemiological?
Cabin in the Woods? Stranger than Fiction? Funny Games?
Some months ago someone mentioned a chat website that tracked arguments in syllogism form to help people organize their debates. Does anyone remember what it was called?
I think the point may be: LW orthodoxy, in so far as there is such a thing, says to choose SPECKS over TORTURE [EDITED to add: no, wait, I mean the exact opposite, TORTURE over SPECKS] and ONE BOX over TWO BOXES, and that combining these in ike's rather odd scenario leads to the conclusion that we should prefer "torture everyone in the universe" over "dust-speck everyone in the universe" in that scenario, which might be a big enough bullet to bite to make some readers reconsider their adherence to LW orthodoxy.
My own view on this, for what it's worth, is that all my ethical intuitions -- including the one that says "torture is too awful to be outweighed by any number of dust specks" and the one that says "each of the vastly many transitions by which we get from DUST SPECKS to TORTURE is a strict improvement" -- have been formed on the basis of experiences (my own, my ancestors', those of earlier people in the civilization I'm a part of) that come nowhere near this sort of scenario, and I don't trust myself to extrapolate. If some incredibly weird sequence of events actually requires me to make such a choice for real, then of course I'll have to make it (for what it's worth, I think I would choose TORTURE and ONE BOX in the separate problems and DUST SPECKS in this one, the apparent inconsistency notwithstanding, not least because I don't think I could ever actually have enough evidence to know something was a truly perfect truthful predictor), but I think the scenario's ability to tell me anything insightful about my values, or about the objective moral structure of the universe if it has one, is very, very doubtful.
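To spell out the structure of that second intuition (a rough sketch; the world-state notation is mine, not anything from the original SPECKS/TORTURE posts): if you order world-states by a "better than" relation that is assumed transitive, the argument is just a long chain of pairwise comparisons,

\[
W_0 \prec W_1 \prec \cdots \prec W_n \;\Longrightarrow\; W_0 \prec W_n \quad\text{(by transitivity of } \prec\text{)},
\]

where \(W_0\) is the all-dust-specks world, \(W_n\) is the 50-years-of-torture world, and each step \(W_i \prec W_{i+1}\) trades a slightly worse harm for a vastly smaller number of sufferers. The "torture is too awful" intuition asserts \(W_n \prec W_0\) instead, so at least one of the pairwise comparisons, or transitivity itself, has to give.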
LW orthodoxy, in so far as there is such a thing, says to choose SPECKS over TORTURE
No, Eliezer and Hanson are anti-specks.
Though I enthusiastically endorse the concept of rationality, I often find myself coming to conclusions about Big Picture issues that are quite foreign to the standard LW conclusions. For example, I am not signed up for cryonics even though I accept the theoretical arguments in favor of it, and I am not worried about unfriendly AI even though I accept most of EY's arguments.
I think the main reason is that I am 10x more pessimistic about the health of human civilization than most other rationalists. I'm not a cryonicist because I don't think companies like Alcor can survive the long period of stagnation that humanity is headed towards. I don't worry about UFAI because I don't think our civilization has the capability to achieve AI. It's not that I think AI is spectacularly hard, I just don't think we can do Hard Things anymore.
Now, I don't know whether my pessimism is more rational than others' optimism. LessWrong, and rationalists in general, probably have a blind spot around questions of civilizational inadequacy, because those questions relate to political issues, and we don't talk about politics. Is there a way we can discuss civilizational issues without becoming mind-killed? Or do we simply have to accept that civilizational issues put large error bars around our predictions?
Is there a way we can discuss civilizational issues without becoming mind-killed?
An LWer created Omnilibrium for that.
I'm the person who encouraged OrphanWilde to post that PM. I'm quite grateful that he did because this has recast the situation.
I don't know about anyone else, but my impression of Gleb is that he's annoying but mostly harmless. Annoying because that early project of trying to promote rationality by turning it into an applause light was definitely a bad idea.
The PM has caused me to do some updating which I hope I will generalize. I started out with an assumption that there might be something wrong with Gleb for attracting that sort of animus, and something wrong with me for not seeing what was wrong with Gleb.
I think OrphanWilde's approach has made LW seem like a place where people can be attacked for unclear reasons, and as moderator, I probably should have moved much earlier to discourage this.
It literally never occurred to me that the animus was (or had shifted to) something strategic. It wasn't trolling exactly, but it was still not an accurate presentation of OrphanWilde's beliefs.
I haven't seen a clear explanation from anyone (though I may have missed some comments) of what they think Gleb misunderstands.
Here's the comment which OW's PM was an answer to. I don't have a copy of what I PM'd to OW, but it was tactful.

It took me a while to get from "What the fucking fuck?" to "Thank you for the information". I believe that you can't force a mind. Shaming people isn't a reliable way of getting the behavior you want from them. Neither is anything else, but a light touch has fewer side effects.
I think a fine line needs to be walked when addressing Gleb, if only because he evidently has media visibility skills that could be useful for the community if he were less misguided.
Personally, I tend to parse them as "Look how cynical and worldly-wise I am, how able I am to see through people's pretences to their ugly true motivations. Aren't I clever and edgy?".
I am aware that this is not very charitable of me.
In a more charitable mood, I interpret these statements roughly as Lumifer does: Hanson is making claims about why (deep enough down) people do what they do.
Personally, I tend to parse them as "Look how cynical and worldly-wise I am, how able I am to see through people's pretences to their ugly true motivations. Aren't I clever and edgy?".
That's exactly how Hanson sounds to me, and why I tend to read his blog less often now.
If you just deleted it, it might still be in your trash can, ready to be brought back.
I always delete permanently. But every detail is still in my head.
(Note: this is not actually true. But Good and Real recapitulates most of the points you'll see in the philosophy-related sequences, with less focus on the basics and more on elaborating philosophical arguments. If this is the content you want to share, it might be a good choice.)
Added to my Amazon wish list. Do you know of any other books one should be aware of?