NcyRocks
NcyRocks has not written any posts yet.

I believe your Amazon rankings example refers to Ryan North's (and his coauthors') Machine of Death.
Thanks for explaining - I think I understand your view better now.
I guess I just don't see the trolley problem as asking "Is it right or wrong, under all possible circumstances matching this description, to pull the lever?" I agree that would be an invalid question, as you rightly demonstrated. My interpretation is that it asks "Is it right or wrong, summed over all possible circumstances matching this description, weighted by probability, to pull the lever?" I.e. it asks for your prior, absent any context whatsoever, which is a valid question.
Under that interpretation, the correct answer of "sometimes pull the lever" gets split into "probably pull the lever" and "probably don't pull...
Thanks for the welcome!
I disagree that anyone who poses an ethical thought experiment has a burden to supply a realistic amount of context - simplified thought experiments can be useful. I'd understand your viewpoint better if you could explain why you believe they have that burden.
The trolley problem, free from any context, is sufficient to illustrate a conflict between deontology and utilitarianism, which is all that it's meant to do. It's true that it's not a realistic problem, but it makes a valid (if simple) point that would be destroyed by requiring additional context.
It's easy to respond to a question that doesn't contain much information with "It depends" (which is equivalent to saying "I don't know"), but you still have to make a guess. All else being the same, it's better to let 1 person die than 5. Summed over all possible worlds that fall under that description, the greatest utility comes from saving the most people. Discovering that the 1 is your friend and the 5 are SS should cause you to update your probability estimate of the situation, followed by its value in your utility function. Further finding out that the SS officers are traitors on their way to assassinate Hitler and your...
I commend your vision of LessWrong.
I expect that if something like it is someday achieved, it'll mostly be done the hard way through moderation, example-setting and simply trying as hard as possible to do the right thing until most people do the right thing most of the time.
But I also expect that the design of LessWrong on a software level will go a long way towards enabling, enforcing and encouraging the kinds of cultural norms you describe. There are plenty of examples of a website's culture being heavily influenced by its design choices - Twitter's 280-character limit and resulting punishment of nuance comes to mind. It seems probable that LessWrong's design could...