Comment author: spencerth 06 January 2011 11:09:59AM *  17 points [-]

Though I agree with you strongly, I think we should throw the easy objection to this out there: high-quality, thorough scholarship takes a lot of time. Even for people dedicated to self-improvement and truth-seeking (of whom I speculate this community has many), for some subjects, getting to the "state of the art"/minimum level of knowledge required to speak intelligently, avoid "solved problems", and not run into "already well-refuted ideas" is a very expensive process. So much so that some might argue that communities like this wouldn't even exist (or would be even smaller than they are) if we all attempted to reach that minimum level in the voluminous, ever-growing list of subjects one could know about.

This is a roundabout way of saying that our knowledge-consumption abilities are far too slow. We can and should attempt to be widely, broadly read knowledge-generalists and stand on the shoulders of giants; climbing even one, though, can take a dauntingly long time.

We need Matrix-style insta-learning. Badly.

Comment author: greim 09 January 2011 08:21:54PM 3 points [-]

We need Matrix-style insta-learning. Badly.

Hear, hear! Arguably, resources like Wikipedia, the LW sequences, and SEP (heck even Google and the internet in general) are steps in that general direction.

Comment author: Psychohistorian 01 July 2009 07:45:33PM *  0 points [-]

I doubt you have spoken to many nontheists if you don't expect them to have categorical imperatives.

An example would be nifty.

I tend to read "categorical imperative" in the strongest, Kantian sense: an imperative statement that is a priori valid irrespective of context or reasoning. I.e., murder isn't wrong because people don't like it, or because it reduces happiness, or because it makes baby Jesus cry; murder is just wrong and you just shouldn't do it, period. Perhaps I should not have raised that distinction without defining what I meant more rigorously. Unless there's some counterexample I'm overlooking, of course.

Comment author: greim 02 July 2009 07:11:12PM *  0 points [-]

I tend to read "categorical imperative" in the strongest, Kantian sense, an imperative statement that is a priori valid irrespective of context or reasoning - i.e. murder is just wrong and you just shouldn't do it period.

If "murder" == "the wrong kind of killing", then "the wrong kind of killing is just wrong and you just shouldn't do it period" is a tautology. It would seem you can get cheap categorical imperatives by jumping to tautologies, but they're mostly useless, since you still have to establish whether it's murder in the first place (presumably by resorting to context and/or reasoning).

I suspect non-tautological categorical ethical imperatives are rare, and furthermore hotly disputed among ethicists. For example some groups hold that "killing is categorically wrong," but that view is under heavy debate.

Edit: I retract my statement about non-tautological categorical ethical imperatives being rare, at least in per capita terms. Anecdotally, premarital sex and disobeying your parents would seem to be examples of things that are widely held to be categorically wrong, but certainly not universally agreed-upon.

In response to Closet survey #1
Comment author: Sebastian_Hagen 15 March 2009 07:40:30PM *  23 points [-]

I don't know how many people here would agree with the following, but my position on it is extreme relative to the mainstream, so I think it deserves a mention:

As a matter of individual rights as well as for a well-functioning society, all information should be absolutely free; there should be no laws on the collection, distribution, or use of information.

Copyright, patent, and trademark law are forms of censorship and should be completely abolished. The same applies to laws on libel, slander, and exchange of child pornography.

Information privacy is massively overrated; the right to remember, use, and distribute valuable information available to a specific entity should always override the right of other entities not to be embarrassed or disadvantaged by these acts.

People and companies exposing buggy software to untrusted parties deserve to have it exploited to their disadvantage. Maliciously attacking software systems by submitting data crafted to trigger security-critical bugs should not be illegal in any way.

Limits: The last paragraph assumes that there are no Langford basilisks; if such things do in fact exist, preventing basilisk deaths may justify censorship - based on the purely practical observation that fixing the human mind would likely not be possible shortly after their discovery.

All of the stated policy opinions apply to societies composed of roughly human-intelligent people only; they break down in the presence of sufficiently intelligent entities.

In addition, if it were possible to significantly ameliorate existential risks by censoring certain information, that would justify doing so - but I can't come up with a likely case for that happening in practice.

Comment author: greim 25 April 2009 06:38:32PM 10 points [-]

Isn't yelling "fire!" in a crowded theater a kind of Langford basilisk?