It's probably not a good idea to laugh at people until you've at least heard their arguments. It is, at the very least, bad signaling for an intellectual community to dismiss a small body of work because a sentence on Wikipedia (source unknown) makes it sound silly.
Remember that LW sounds pretty silly on Rational Wiki.
You mean that a graduate student in the philosophy of logic doesn't know about things like math and theories of truth? That seems unlikely to me.
We can have papers published about this, we really ought to be able to get papers published about Friendly AI subproblems.
Are you implying that you are trying to get papers published about Friendly AI subproblems and having difficulty?
Pfft, I don't see what's so funny about the end. If it had been , alright, that would have been somewhat ironic at least, but
)? Nobody was even arguing against that.
In college I was part of the cult of Alfred the Duck. It was a religion with five or so members, formed when our founder decided to take False as an axiom, and also drew a little picture of a duck. Using the holy T=F axiom, it's easy to prove that Alfred the Duck knows all and sees all, and that everything both exists and doesn't exist. It actually worked pretty well as a religion. (There was also something about welcoming alien invaders, but I think that was a different religion.)
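The "holy T=F axiom" is just the principle of explosion wearing a duck costume: once False is provable, everything is. A minimal sketch in Lean (the names `holy_axiom` and `AlfredKnowsAll` are invented for illustration):

```lean
-- The cult's founding move: take False as an axiom.
axiom holy_axiom : False

-- A hypothetical proposition about Alfred the Duck.
axiom AlfredKnowsAll : Prop

-- Ex falso quodlibet: from False, anything follows.
theorem alfred_knows_all : AlfredKnowsAll :=
  False.elim holy_axiom

-- Everything both exists and doesn't exist.
theorem both (p : Prop) : p ∧ ¬p :=
  ⟨False.elim holy_axiom, False.elim holy_axiom⟩
```

As a religion goes, it is at least admirably upfront about its axioms.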
That seems to be a practical accomplishment of trivialist philosophy.
To paraphrase something Eliezer said to me in person, "Here's one more thing philosophers have written more papers about than reflective decision theory."
I am a proponent of Wednesdayism. "Wednesdayism is the view that true is true and false is false except, crucially, on Wednesdays."
Strict Wednesdayism is undefined on Wednesdays. Orthodox Wednesdayism is false on Wednesdays. Reformed Wednesdayism requires you to personally decide if it is true on Wednesdays.
Heh.
My own view is that this argument is about as convincing as arguments for other philosophically interesting positions, and so should be taken seriously. Trivialism should not be treated as a special case in this regard. Philosophers have committed to claims on the basis of a lot less.
It was definitely worth skimming through. Two... well, not really questions, but thoughts:
How does trivialism differ from assuming the existence of a Tegmark IV universe?
A spectral argument given in defense of trivialism in the dissertation runs like this:
a. Natural language is inconsistent.
b. Therefore, by explosion, every sentence in natural language is true.
c. Every classical proposition may be interpreted in natural language.
d. Therefore, classical logic is inconsistent.
The error in the argument is actually quite subtle!
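For what it's worth, the explosion step (b) is itself formally unimpeachable; in classical (indeed even intuitionistic) logic, any contradiction entails any sentence. A one-line Lean sketch:

```lean
-- Principle of explosion: a contradiction entails any q.
theorem explosion (p q : Prop) (h : p ∧ ¬p) : q :=
  absurd h.left h.right
```

The scrutiny belongs elsewhere in the argument, for instance on whether "inconsistency" of natural language licenses explosion inside it, or whether the interpretation in (c) transfers inconsistency back to classical logic.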
That must be a hoax. Tell me it's a hoax! [takes a look at the references] No, it isn't a hoax. What the ...
Graham Priest interview with Julia Galef and Massimo Pigliucci on paraconsistency and dialetheism:
http://rationallyspeaking.blogspot.de/2012/11/rationally-speaking-podcast-graham.html
Does anyone know if trivialism has to be interpreted as "every sentence is at least true" or as "every sentence is true and only true"?
*sighs* This is why, when one studies philosophy, one does well to pretty much ignore anyone who claims that the universe is completely knowable a priori.
In fact, given the number of sets of axioms from which one can derive statements, pretty much any argument that hinges primarily on a priori claims is probably mistaken. (Note: This is an a posteriori claim.)
Sorry, not my intention to strawman. It is alien to me.
Doesn't the strict rationalist have trouble with the truth value of statements conditioned on false statements?
No. Not Bayesians, at any rate.
You are looking for a philosophy which tells you what the indicated course of action is. That means that trivialism is poorly suited for you.
What's an "indicated" course of action? How is it different from "what you should do", below?
You are looking for a philosophy because you want your philosophy to tell you what you should do. That means that trivialism is the perfect philosophy for you to practice.
What does trivialism predict? What does it tell us to do? Does trivialism let me predict anything more accurately than any other theory? A single instance of something it would predict more accurately or reliably than any other theory would instantly make it much less worthy of derision.
At present, it is to me nothing more than a humorous thought experiment similar to "This sentence is false."
When you try to make predictions, use a philosophy that predicts well. Bayesian rationality provides many useful tools for determining what the expected results are, but no tools for determining which expected result to choose. Trivialism provides tools better suited to deciding in the absence of information.
Straight from Wikipedia.
I just had to stare at this a while. We can have papers published about this, we really ought to be able to get papers published about Friendly AI subproblems.
My favorite part is at the very end.
Trivialism is the theory that every proposition is true. A consequence of trivialism is that all statements, including all contradictions of the form "p and not p" (that something both 'is' and 'isn't' at the same time), are true.[1]