Today's post, Is Morality Preference?, was originally published on 05 July 2008. A summary (taken from the LW wiki):
A dialogue on the idea that morality is a subset of our desires.
Discuss the post here (rather than in the comments to the original post).
This post is part of the Rerunning the Sequences series, where we'll be going through Eliezer Yudkowsky's old posts in order so that people who are interested can (re-)read and discuss them. The previous post was Moral Complexities, and you can use the sequence_reruns tag or RSS feed to follow the rest of the series.
Sequence reruns are a community-driven effort. You can participate by re-reading the sequence post, discussing it here, posting the next day's sequence reruns post, or summarizing forthcoming articles on the wiki. Go here for more details, or to have meta discussions about the Rerunning the Sequences series.
Where would mathematical statements fit in this classification of yours? They can be proven, but many of them can't be tested, and even for those that can be tested, the proof is generally considered better evidence than the test.
In fact, you are implicitly relying on a large untested (and mostly untestable) framework to describe the relationship between whatever sense input constitutes the result of one of your tests, and the proposition being tested.
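To make the contrast between proving and testing concrete, here is a minimal sketch in Lean 4 (the proof assistant is my own choice for illustration, not part of the original discussion): a single short proof settles commutativity of natural-number addition for every pair of naturals at once, something no finite battery of tests could do.

```lean
-- A one-line proof covering infinitely many cases: no amount of
-- finite testing could establish the same universal claim.
theorem add_comm_example (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b  -- reuse the library lemma for commutativity of Nat addition
```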
This may be a situation where the modern world's resources start to break down the formerly strong separation between mind and world.
These days, most if not all of the rules of math can be coded into a computer, and new propositions tested or evaluated by those systems. Once I've implemented exact arithmetic (rationals, say, rather than floating point, whose addition isn't even associative), I can show statistically that addition is commutative and associative, that 2 + 2 never equals 5, that every number has an additive inverse and every nonzero number a multiplicative inverse, and so on.
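As a minimal sketch of what that kind of statistical testing might look like, assuming Python with exact rationals from the standard fractions module (the sampling ranges and trial count are arbitrary illustrative choices):

```python
import random
from fractions import Fraction

def random_rational():
    """Draw a random exact rational; the ranges here are arbitrary."""
    numerator = random.randint(-1000, 1000)
    denominator = random.randint(1, 1000)
    return Fraction(numerator, denominator)

def test_arithmetic_laws(trials=100_000):
    """Spot-check several arithmetic laws on random samples."""
    for _ in range(trials):
        a, b, c = random_rational(), random_rational(), random_rational()
        assert a + b == b + a              # commutativity of addition
        assert (a + b) + c == a + (b + c)  # associativity of addition
        assert a + (-a) == 0               # additive inverse
        if a != 0:
            assert a * (1 / a) == 1        # multiplicative inverse (nonzero only)
    assert Fraction(2) + Fraction(2) != Fraction(5)  # 2 + 2 never comes out to 5
    print(f"All laws held across {trials:,} random trials.")

if __name__ == "__main__":
    test_arithmetic_laws()
```

Each run draws fresh random samples, so a passing run accumulates statistical rather than deductive evidence, which is exactly the sense in which the laws are being "shown" here.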
These modern machines seem to render statements within axiomatic mathematical systems as testable and falsifiable as any other physical fact.