Comment author: XFrequentist 12 September 2015 07:08:50PM 1 point [-]

I got waaay too far into this before I realized what you were doing... so well done!

Comment author: Kawoomba 12 September 2015 08:22:30PM 0 points [-]

What are you talking about?

Comment author: Vaniver 11 September 2015 08:50:08PM 2 points [-]

Light touch indeed. They fucked it up so badly

Eh... the story preceding that rebellion argues, if anything, that the Company tried too hard to bend to local practices, and the British public was outraged that "Clemency Canning" didn't want to come down like a hammer on the natives.

Comment author: Kawoomba 11 September 2015 08:58:46PM 0 points [-]

History can be all things to all people, like the shape of a cloud it's a canvas on which one can project nearly any narrative one fancies.

Comment author: btrettel 11 September 2015 08:19:35PM 0 points [-]

RationalWiki discusses a few:

Another problem of LessWrong is that its isolationism represents a self-made problem (unlike demographics). Despite intense philosophical speculation, the users tend towards a proud contempt of mainstream and ancient philosophy[39] and this then leads to them having to re-invent the wheel. When this tendency is coupled with the metaphors and parables that are central to LessWrong's attraction, it explains why they invent new terms for already existing concepts.[40] The compatibilism position on free will/determinism is called "requiredism"[41] on LessWrong, for example, and the continuum fallacy is relabeled "the fallacy of gray." The end result is a Seinfeldesque series of superfluous neologisms.

In my view, RationalWiki cherry picks certain LessWrongers to bolster their case. You can't really conclude that these people represent LessWrong as a whole. You can find plenty of discussion of the terminology issue here, for example, and the way RationalWiki presents things makes it sound like LessWrongers are ignorant. I find this sort of misrepresentation to be common at RationalWiki, unfortunately.

Comment author: Kawoomba 11 September 2015 08:56:48PM 15 points [-]

Their approach reduces to an anti-epistemic affect-heuristic, using the ugh-field they self-generate in a reverse affective death spiral (loosely based on our memeplex) as a semantic stopsign, when in fact the Kolmogorov distance to bridge the terminological inferential gap is but an epsilon.

Comment author: Kawoomba 16 August 2015 08:01:09AM 0 points [-]

Good content; however, I'd have preferred "You Are A Mind" or similar. You are an emergent system centered on the brain and influences upon it, or somesuch. It's just that "brain" has come to refer to two distinct entities -- the anatomical brain, and the physical system generating your self. The two are not identical.

Comment author: PhilGoetz 16 August 2015 02:25:52AM *  3 points [-]

Downvoted for being deliberately insulting. There's no call for that, and the toleration and encouragement of rationality-destroying maliciousness must be stamped out of LW culture. A symposium proceedings is not considered as selective as a journal, but it still counts as publication when it is a complete article.

Comment author: Kawoomba 16 August 2015 07:54:12AM -3 points [-]

Well, I must say my comment's belligerence-to-subject-matter ratio is lower than yours. "Stamped out"? Such martial language, I can barely focus on the informational content.

The infantile nature of my name-calling actually makes it easier to take the holier-than-thou position (which my interlocutor did, incidentally). There's a counter-intuitive psychological layer to it which actually encourages dissent, and with it increases engagement with the subject matter (your own comment notwithstanding). With certain individuals at least, which I (correctly) deemed to be the case in the original instance.

In any case, comments on tone alone would be more welcome if accompanied by more remarks on the subject matter itself. Lastly, this was my first comment in over two months, so thanks for bringing me out of the woodwork!

I do wish that people were more immune to the allure of drama, lest we all end up like The Donald.

Comment author: ChristianKl 06 June 2015 05:27:57PM 3 points [-]

b) Effective altruists don't want to upset their own System 1 sensibilities, their altruistic efforts would lose some of the fuzzies driving them if they needed to justify "mass sterilisation of third world countries" to themselves.

I think the likely result of any attempt at a mass sterilisation project is an increased population, because you won't get it to work, but Western doctors in the third world will lose credibility.

Will we influence those decisions based only on "provide better education, then hope for the best",

We actually have good data that better education decreases birth rates.

Comment author: Kawoomba 06 June 2015 05:56:31PM 1 point [-]

Certainly, within what's Good (tm) and Acceptable (tm), funding better education in the third world is the most effective method.

However, if you go far enough outside the Overton window, you don't need credibility, as long as the power asymmetry is big enough. You want food? It only comes with a chemical agent which sterilizes you, similar to Golden Rice. You don't need to accept it; you're free to starve. The failures of colonialism, as well as the most recent forays into the Middle East, stem from the constraints of also having to placate the court of public opinion.

Regardless of this one example, are you taking the position of "the most effective methods are those within the Overton window"? That would be typical, but the actual question would be: Is it because changing the Overton window to include more radical options is too hard, or is it because those more radical options wouldn't feel good?

Comment author: Kawoomba 06 June 2015 04:54:30PM *  4 points [-]

I too have the impression that for the most part the scope of the "effective" in EA refers to "... within the Overton window". There's the occasional stray 'radical solution', but usually not much beyond "let's judge which of these existing charities (all of which are perfectly societally acceptable) are the most effective".

Now there are two broad categories to explain that:

a) Effective altruists want immediate or at least intermediate results / being associated with "crazy" initiatives could mean collateral damage to their efforts / changing the Overton window to accommodate actually effective methods would be too daunting a task / "let's be realistic", etc.

b) Effective altruists don't want to upset their own System 1 sensibilities, their altruistic efforts would lose some of the fuzzies driving them if they needed to justify "mass sterilisation of third world countries" to themselves.

Solutions to optimization problems tend to set to extreme values all those variables which aren't explicitly constrained. The question then is which ideals we're willing to sacrifice in order to achieve our primary goals.

As an example, would we really rather have people decide just how many children they want to create, only to see those children perish in the resulting population explosion? Will we influence those decisions based only on "provide better education, then hope for the best", in effect preferring starving families with the choice to procreate whenever to non-starving families without said choice?

I do believe it would be disastrous for EA as a movement to be associated with ideas too far outside the Overton window, and that is a tragedy, because it massively restricts EA's maximum effectiveness.

Comment author: Kawoomba 01 June 2015 08:58:58PM 13 points [-]

MIRI continues to be in good hands!

Comment author: Kawoomba 19 May 2015 04:49:58PM *  9 points [-]

I'm not sure LW is a good entry point for people who are turned away by a few technical terms. Responding to unfamiliar scientific concepts with an immediate surge of curiosity is probably a trait I share with the majority of LW'ers. While it's not strictly a prerequisite for learning rationality, it certainly is for starting in medias res.

The current approach is a good selector for dividing the chaff (well educated because that's what was expected, but no true intellectual curiosity) from the wheat (whom Deleuze would call thinkers-qua-thinkers).

HPMOR instead, maybe?

Comment author: Lumifer 11 May 2015 04:23:52PM *  5 points [-]

I have a feeling a lot of discussions of life extension suffer from being conditioned on the implicit set point of what's normal now.

Let's imagine that humans are actually replicants and their lifespan runs out in their 40s. That lifespan has a "control dial" and you can turn it to extend the human average life expectancy into the 80s. Would all your arguments apply and construct a case against meddling with that control dial?

Comment author: Kawoomba 11 May 2015 04:39:15PM *  3 points [-]

That's a good argument if you were to construct the world from first principles. You wouldn't get the current world order, certainly. But just as arguments against, say, nation-states, or multi-national corporations, or what have you, do little to dissuade believers, the same applies to let-the-natural-order-of-things-proceed advocates. Inertia is what it's all about. The normative power of the present state, if you will. Never mind that "natural" includes antibiotics, but not gene modification.

This may seem self-evident, but what I'm pointing out is that by saying "consider this world: would you still think the same way in that world?" you'd be skipping the actual step of difficulty: overcoming said inertia, leaving the cozy home of our local minimum.
