Politics is the mind-killer; but rationality is the science of /winning/, even when dealing with political issues.
I've been trying to apply LessWrong and Bayesian methods to the premises and favored issues of a particular political group. (Their most basic premise is roughly equivalent to declaring that Iterated Prisoner's Dilemma programs should be 'nice'.) But, given how quickly my previous thread trying to explore this issue was downvoted into disappearing, and many of the comments I've received on similar threads, I may have a rather large blind spot preventing me from being able /to/ properly apply LW methods in this area.
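The Iterated Prisoner's Dilemma analogy above can be made concrete. In Axelrod's tournaments, a strategy is called 'nice' if it is never the first to defect; Tit-for-Tat (cooperate on the first round, then mirror the opponent's last move) is the canonical example. Here is a minimal sketch using the standard payoff values; the strategy and match functions are illustrative, not any particular library's API:

```python
# Minimal Iterated Prisoner's Dilemma sketch.
# A strategy is 'nice' if it is never the first to defect;
# Tit-for-Tat is the canonical nice strategy.

C, D = "C", "D"
# Standard Axelrod payoffs: (row player, column player)
PAYOFF = {(C, C): (3, 3), (C, D): (0, 5), (D, C): (5, 0), (D, D): (1, 1)}

def tit_for_tat(my_history, their_history):
    # Cooperate first, then copy the opponent's previous move.
    return C if not their_history else their_history[-1]

def always_defect(my_history, their_history):
    return D

def play(strategy_a, strategy_b, rounds=10):
    hist_a, hist_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strategy_a(hist_a, hist_b)
        move_b = strategy_b(hist_b, hist_a)
        pa, pb = PAYOFF[(move_a, move_b)]
        score_a += pa
        score_b += pb
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b

# Two nice strategies cooperate every round: (30, 30) over 10 rounds.
# Against a defector, Tit-for-Tat loses the first round, then retaliates.
```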
So I'll try a different approach - instead of giving it a go myself again, I'll simply ask, what do /you/ think a good LW post about liberty, freedom, and fundamental human rights would look like?
The two ways of putting it are not equivalent; it is possible for a sapient mind to decide that its purpose is to maximize the number of paperclips in the universe, which can be achieved without its continued existence. You probably realize this already though; the last quoted sentence makes sense.
If you had a chance to perform an action that carried a slight risk to your life but increased the chance of sapience continuing to exist (in such a way as to lower your overall chance of living forever), would you do so? It is usually impossible to perfectly optimize for two different things at once; even if they are mostly unopposed, near the maxima there will be tradeoffs.
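The tradeoff-near-the-maxima claim can be illustrated with a toy numeric sketch. The two objective functions below are invented for illustration only (personal longevity peaking at one policy, species survival at another); the point is just that when two smooth objectives peak at different points, no single choice maximizes both:

```python
# Toy illustration: two objectives with different maxima cannot be
# jointly maximized. The functions are hypothetical stand-ins.

def longevity(x):
    # Invented objective: peaks at x = 0.
    return -(x ** 2)

def species_survival(x):
    # Invented objective: peaks at x = 1.
    return -((x - 1) ** 2)

# Scan candidate "policies" x in [0, 1].
xs = [i / 10 for i in range(11)]
best_for_longevity = max(xs, key=longevity)          # 0.0
best_for_survival = max(xs, key=species_survival)    # 1.0

# Moving toward either objective's maximum strictly worsens the other,
# so any choice near one peak trades away value on the other axis.
```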
A good question.
I have at least one datum suggesting that the answer, for me in particular, is 'yes'. I currently believe that what's generally called 'free speech' is a strong supporting factor, if not a necessary prerequisite, for developing the science we need to ensure sapience's survival. Last year, there was an event, 'Draw Muhamma...