It's probably not a good idea to laugh at people until you've at least heard their arguments. At the very least, it's bad signaling for an intellectual community to dismiss a small body of work because a sentence on Wikipedia (source unknown) makes it sound silly.
Remember that LW sounds pretty silly on RationalWiki.
You mean that a graduate student in the philosophy of logic doesn't know about things like math and theories of truth? That seems unlikely to me.
We can have papers published about this; we really ought to be able to get papers published about Friendly AI subproblems.
Are you implying that you are trying to get papers published about Friendly AI subproblems and having difficulty?
Pfft, I don't see what's so funny about the end. If it had been […], alright, that would have been somewhat ironic at least, but […]? Nobody was even arguing against that.
In college I was part of the cult of Alfred the Duck. It was a religion with five or so members, formed when our founder decided to take False as an axiom, and also drew a little picture of a duck. Using the holy T=F axiom, it's easy to prove that Alfred the Duck knows all and sees all, and that everything both exists and doesn't exist. It actually worked pretty well as a religion. (There was also something about welcoming alien invaders, but I think that was a different religion.)
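For what it's worth, the duck's theology checks out: once False is on the books, everything follows by the principle of explosion. A minimal sketch in Lean 4, with `holy` as my hypothetical name for the cult's axiom:

```lean
-- The cult's holy axiom (hypothetical name): False is taken as given.
axiom holy : False

-- Explosion: from False, any proposition whatsoever follows,
-- so Alfred the Duck knows all and sees all.
theorem alfred_knows_all (P : Prop) : P :=
  False.elim holy

-- In particular, everything both exists and doesn't exist.
theorem is_and_isnt (P : Prop) : P ∧ ¬P :=
  ⟨False.elim holy, False.elim holy⟩
```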
That seems to be a practical accomplishment of trivialist philosophy.
To paraphrase something Eliezer said to me in person, "Here's one more thing philosophers have written more papers about than reflective decision theory."
I am a proponent of Wednesdayism. "Wednesdayism is the view that true is true and false is false except, crucially, on Wednesdays."
Strict Wednesdayism is undefined on Wednesdays. Orthodox Wednesdayism is false on Wednesdays. Reformed Wednesdayism requires you to personally decide if it is true on Wednesdays.
Heh.
My own view is that this argument is about as convincing as the arguments for plenty of other philosophically interesting positions, and so should be taken seriously; trivialism should not be treated as a special case in this regard. Philosophers have committed to claims on the basis of a lot less.
It was definitely worth skimming through. Two... well, not really questions, but thoughts:
How does trivialism differ from assuming the existence of a Tegmark IV universe?
A spectral argument given in defense of trivialism in the dissertation runs like this:
a. Natural language is inconsistent.
b. Therefore, by explosion, every sentence in natural language is true.
c. Every classical proposition may be interpreted in natural language.
d. Therefore, classical logic is inconsistent.
The error in the argument is actually quite subtle!
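For readers who haven't seen it spelled out, the explosion step (b) is the classically valid part: from any contradiction, every proposition follows. A sketch in Lean 4:

```lean
-- Ex contradictione quodlibet: a contradiction entails anything.
theorem explosion (P Q : Prop) (h : P ∧ ¬P) : Q :=
  absurd h.left h.right
```

Note that this only shows step (b) is valid once a contradiction is available inside a formal system; whether the argument's premises actually deliver one there is the contested part.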
That must be a hoax. Tell me it's a hoax! [takes a look at the references] No, it isn't a hoax. What the ...
Graham Priest interview with Julia Galef and Massimo Pigliucci on paraconsistency and dialetheism:
http://rationallyspeaking.blogspot.de/2012/11/rationally-speaking-podcast-graham.html
Does anyone know if trivialism has to be interpreted as "every sentence is at least true" or as "every sentence is true and only true"?
*sighs* This is why, when one studies philosophy, one does well to pretty much ignore anyone who claims that the universe is completely knowable a priori.
In fact, given the number of sets of axioms from which one can derive statements, pretty much any argument that hinges primarily on a priori claims is probably mistaken. (Note: This is an a posteriori claim.)
You appear to be winning this round, so what's the problem?
Eliezer Shlomo Yudkowsky (born September 11, 1979[1]) is an American writer, blogger, and advocate for the development of friendly artificial intelligence[2][3] and the understanding of a possible future singularity.

Biography
Yudkowsky, who lives in Redwood City, California,[4] did not attend high school and is an autodidact, having no formal education in computer science or artificial intelligence. He co-founded the nonprofit Singularity Institute for Artificial Intelligence (SIAI) in 2000 and continues to be employed as a full-time Research Fellow there.[5]

Work
Yudkowsky's interest focuses on Artificial Intelligence theory for self-understanding, self-modification, and recursive self-improvement (seed AI); and also on artificial-intelligence architectures and decision theories for stably benevolent motivational structures (Friendly AI, and Coherent Extrapolated Volition in particular).[6] Apart from his research work, Yudkowsky has written explanations of various philosophical topics in non-academic language, particularly on rationality, such as "An Intuitive Explanation of Bayes' Theorem".[7] Yudkowsky is also a strong proponent of cryonics, the practice of vitrifying one's body after death in the hope of future resuscitation.[8]

Publications
Yudkowsky was, along with Robin Hanson, one of the principal contributors to the blog Overcoming Bias,[9] sponsored by the Future of Humanity Institute of Oxford University. In early 2009, he helped to found Less Wrong, a "community blog devoted to refining the art of human rationality".[10] The Sequences[11] on Less Wrong, comprising over two years of blog posts on epistemology, Artificial Intelligence, and metaethics, form the largest single body of Yudkowsky's writing. He contributed two chapters to Oxford philosopher Nick Bostrom's and Milan Ćirković's edited volume Global Catastrophic Risks,[12] and "Complex Value Systems are Required to Realize Valuable Futures"[13] to the conference AGI-11. Yudkowsky is the author of the Singularity Institute publications "Creating Friendly AI"[14] (2001), "Levels of Organization in General Intelligence"[15] (2002), "Coherent Extrapolated Volition"[16] (2004), and "Timeless Decision Theory"[17] (2010).[18] Yudkowsky has also written several works[19] of science fiction and other fiction. His Harry Potter fan fiction story Harry Potter and the Methods of Rationality illustrates topics in cognitive science and rationality (The New Yorker described it as "a thousand-page online 'fanfic' text called 'Harry Potter and the Methods of Rationality', which recasts the original story in an attempt to explain Harry's wizardry through the scientific method"[20]). It has been favorably reviewed by authors David Brin[21][22][23] and Rachel Aaron,[24][25] by Robin Hanson,[26] by Aaron Swartz,[27] and by programmer Eric S. Raymond.[28]

References
1. Autobiography.
2. Miller, James (2012). Singularity Rising. Texas: BenBella Books. pp. 35-44. ISBN 1936661659.
3. "Singularity Institute for Artificial Intelligence: Team". Singularity Institute for Artificial Intelligence. Retrieved 2009-07-16.
4. Eliezer Yudkowsky: About.
5. Kurzweil, Ray (2005). The Singularity Is Near. New York, US: Viking Penguin. p. 599. ISBN 0-670-03384-7.
6. Kurzweil, Ray (2005). The Singularity Is Near. New York, US: Viking Penguin. p. 420. ISBN 0-670-03384-7.
7. An Intuitive Explanation of Bayes' Theorem.
8. "Normal Cryonics". Less Wrong. Retrieved 2012-08-31.
9. "Overcoming Bias: About". Robin Hanson. Retrieved 2012-02-01.
10. "Welcome to Less Wrong". Less Wrong. Retrieved 2012-02-01.
11. "Sequences - Lesswrongwiki". Retrieved 2012-02-01.
12. Bostrom, Nick; Ćirković, Milan M., eds. (2008). Global Catastrophic Risks. Oxford, UK: Oxford University Press. pp. 91–119, 308–345. ISBN 978-0-19-857050-9.
13. Yudkowsky, Eliezer (2011). "Complex Value Systems are Required to Realize Valuable Futures". AGI-11.
14. Yudkowsky, Eliezer. "Creating Friendly AI". Singularity Institute for Artificial Intelligence. Retrieved 2012-02-01.
15. Yudkowsky, Eliezer. "Levels of Organization in General Intelligence". Singularity Institute for Artificial Intelligence. Retrieved 2012-02-01.
16. Yudkowsky, Eliezer. "Coherent Extrapolated Volition". Singularity Institute for Artificial Intelligence. Retrieved 2012-02-01.
17. Yudkowsky, Eliezer. "Timeless Decision Theory". Singularity Institute for Artificial Intelligence. Retrieved 2012-02-01.
18. "Eliezer Yudkowsky Profile". Accelerating Future.
19. "Yudkowsky - Fiction". Eliezer Yudkowsky.
20. "No Death, No Taxes: The libertarian futurism of a Silicon Valley billionaire", p. 54.
21. David Brin (2010-06-21). "CONTRARY BRIN: A secret of college life... plus controversies and science!". Davidbrin.blogspot.com. Retrieved 2012-08-31.
22. "'Harry Potter' and the Key to Immortality", Daniel Snyder, The Atlantic.
23. David Brin (2012-01-20). "CONTRARY BRIN: David Brin's List of 'Greatest Science Fiction and Fantasy Tales'". Davidbrin.blogspot.com. Retrieved 2012-08-31.
24. "Rachel Aaron interview (April 2012)". Fantasybookreview.co.uk. 2012-04-02. Retrieved 2012-08-31.
25. "Civilian Reader: An Interview with Rachel Aaron". Civilian-reader.blogspot.com. 2011-05-04. Retrieved 2012-08-31.
26. Hanson, Robin (2010-10-31). "Hyper-Rational Harry". Overcoming Bias. Retrieved 2012-08-31.
27. Swartz, Aaron. "The 2011 Review of Books (Aaron Swartz's Raw Thought)". Aaronsw.com. Retrieved 2012-08-31.
28. "Harry Potter and the Methods of Rationality". Esr.ibiblio.org. 2010-07-06. Retrieved 2012-08-31.

Further reading
Our Molecular Future: How Nanotechnology, Robotics, Genetics and Artificial Intelligence Will Transform Our World by Douglas Mulhall, 2002, p. 321.
The Spike: How Our Lives Are Being Transformed By Rapidly Advancing Technologies by Damien Broderick, 2001, pp. 236, 265-272, 289, 321, 324, 326, 337-339, 345, 353, 370.

External links
Personal web site
Less Wrong - "A community blog devoted to refining the art of human rationality", co-founded by Yudkowsky
Biography page at KurzweilAI.net
Biography page at the Singularity Institute
Downloadable papers and bibliography
Predicting The Future :: Eliezer Yudkowsky, NYTA Keynote Address - Feb 2003
Harry Potter and the Methods of Rationality
Harry Potter and the Methods of Rationality - The Podcast
Straight from Wikipedia.
I just had to stare at this a while. "We can have papers published about this; we really ought to be able to get papers published about Friendly AI subproblems."
My favorite part is at the very end.
Trivialism is the theory that every proposition is true. A consequence of trivialism is that all statements, including all contradictions of the form "p and not p" (that something both 'is' and 'isn't' at the same time), are true.[1]
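As a one-line formalization (my own sketch, not from the article): trivialism is the schema that every proposition holds, and the quoted consequence about contradictions follows immediately. In Lean 4:

```lean
-- A hypothetical formalization of trivialism: every proposition holds.
def Trivialism : Prop := ∀ P : Prop, P

-- Immediate consequence: every contradiction "p and not p" is true.
theorem contradictions_follow (h : Trivialism) (p : Prop) : p ∧ ¬p :=
  ⟨h p, h (¬p)⟩
```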