Vaniver comments on Simulate and Defer To More Rational Selves - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
I find it interesting that Less Wrong appears to be rediscovering existing ethical theories.
This article argues for a form of virtue ethics arising from utilitarianism: in order to be a good person, simulate an alternate self free of whatever problematic desire is at issue, and then use that self as a moral exemplar.
Similarly, Eliezer's arguments for Coherent Extrapolated Volition in FAI bear a striking resemblance to Rousseau's arguments regarding the general will of a state.
Another example of this that springs to mind is this less-popular post on beeminding sin. http://lesswrong.com/lw/hwm/beeminding_sin/
We have a large body of collected philosophical thought available to us. At least some of those concepts are adaptable to everyday problems and are therefore useful things to carry around in your mind. However, biases exist that make many people hesitant to listen to historical sources: "in the past, people had less technology than we do" is often conflated with "in the past, people were less intelligent than we are."
Even if we accept that people in the past were less intelligent, that still doesn't rule out that they may have had some good ideas. If it did, then we would not be able to make arguments from human reasoning at all. "People from the past said it" is not an argument for or against a position any more than "Hitler said it." (Note that this cuts both ways: it is also an argument against the failure mode of treating these ideas as the "wisdom of the ancients.")
It seems to me that a general critical reading of acclaimed philosophers could save everyone in the Less Wrong community a lot of trouble reinventing their ideas, given that examining an already-stated hypothesis is much easier than discovering one from scratch.
I think the more serious issue is that the body of collected philosophical thought is too large. That is:
It's not obvious to me that this is true. I think there's a large benefit from a single person doing a deep dive on something and reporting the results: "This is what I learned reading Rousseau that's relevant to rationality." That way, all the community needs to do to learn about Rousseau's connection to rationality (at a conversational level, at least) is read the post, and if someone sees a specific idea and thinks "I want to read more about that," they know exactly where to start.
(I follow this advice and write reviews of books for LW; my interests are in decision-making, so that's where my reviews are. If your interests are in philosophy, writing such reviews is a good way to contribute significant value to the community, and to earn a bunch of karma in the process.)