Wow, that's all kinds of crazy. I'm not sure how much of it I can evaluate, as I'm not a mathematical physicist - MWI and quantum mechanics implied by Newton? Really? - but one big red flag for me is pg. 187-188, where he doggedly insists that the universe is closed, even though, as far as I know, the current cosmological consensus is the opposite - and I trust the cosmologists a heck of a lot more than a fellow who tries to prove his Christianity with his physics.
(This is actually convenient for me: a few weeks ago I was wondering on IRC what the current status of Tipler's theories was, given that he had clearly stated they were valid only if the universe were closed and the Higgs boson fell within certain values, IIRC, but I was feeling too lazy to look it all up.)
And the extraction of a transcendent system of ethics from a Feynman quote...
A moment’s thought will convince the reader that Feynman has described not only the process of science, but the process of rationality itself. Notice that the bold-faced words are all moral imperatives. Science, in other words, is fundamentally based on ethics. More generally, rational thought itself is based on ethics. It is based on a particular ethical system. A true human level intelligence program will thus of necessity have to incorporate this particular ethical system. Our human brains do, whether we like to acknowledge it or not, and whether we want to make use of this ethical system in all circumstances. When we do not make use of this system of ethics, we generate cargo cult science rather than science.
This is just too wrong for words. This is like saying that looking both ways before crossing the street is obviously a part of rational street-crossing - a moment's thought will convince the reader (Dark Arts) - and so we can collapse Hume's fork and promote looking both ways to a universal meta-ethical principle that future AIs will obey!
An AI program must incorporate this morality, otherwise it would not be an AI at all.
Show me this morality in the AIXI equation or GTFO!
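For reference, here is the AIXI action-selection rule as I recall it from Hutter's formulation (treat the exact notation as approximate): the agent picks whichever action maximizes expected future reward under a Solomonoff-style mixture over all environment programs $q$ consistent with the history, weighted by $2^{-\ell(q)}$:

$$
a_k \;=\; \arg\max_{a_k} \sum_{o_k r_k} \cdots \max_{a_m} \sum_{o_m r_m} \big[\, r_k + \cdots + r_m \,\big] \sum_{q \,:\, U(q,\, a_1 \ldots a_m) \,=\, o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}
$$

There is no ethical term anywhere in it. Honesty, non-destruction of other agents, and the rest could only enter through whatever happens to feed the reward channel, not through the definition of intelligence itself.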
After all, what is a computer program but a series of imperative sentences?
A map from a domain to a range, a proof in propositional logic, or a series of lambda expressions and reductions all come to mind...
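To make the point concrete, here is a minimal sketch (my own illustration, nothing from the paper) of a "program" that contains no imperative sentences at all - just pure function definitions, i.e. maps from inputs to outputs, and lambda reductions:

```python
# A "program" as pure function definitions rather than commands:
# no mutation, no statements ordered by side effects.
from functools import reduce

# factorial as a fold over [1..n] -- a map from naturals to naturals
factorial = lambda n: reduce(lambda acc, k: acc * k, range(1, n + 1), 1)

# composition of lambdas: (f . g)(x) = f(g(x))
compose = lambda f, g: lambda x: f(g(x))
succ_then_fact = compose(factorial, lambda x: x + 1)

print(factorial(5))       # 120
print(succ_then_fact(4))  # factorial(4 + 1) = 120
```

Evaluating such a program is equation-rewriting, not the execution of orders, so "a series of imperative sentences" does not even describe all of programming, let alone license an ethics of imperatives.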
In fact, I claim that an ethical system that encompasses all human actions, and more generally, all actions of any set of rational beings (in particular, artificial intelligences) can be deduced from the Feynman axioms. In particular, note that destroying other rational beings would make impossible the honesty Feynman requires.
One man's modus ponens is another man's modus tollens. That the 'honesty' requires other entities is proof that this cannot be an ethical system which encompasses all rational beings - a solitary rational being would fall outside its scope entirely.
Hence, they will be part of the community of intelligent beings deciding whether to resurrect us or not. Do not children try to see to their parents’ health and well-being? Do they not try and see their parents survive (if it doesn’t cost too much, and in the far future, it won’t)? They do, and they will, both in the future, and in the far future.
Any argument that rests on a series of rhetorical questions is untrustworthy. Specifically, sure, I can come up in 5 seconds with a reason they would not preserve us: there are X mind-states we can be in while still maintaining identity or continuity; there are Y (Y < X) that we would like or would value; with infinite computing power, we will eventually exhaust all Y. At that point, by definition, we could choose not to be preserved. Hence, I have "proven" we will inevitably choose to die even if uploaded into Tipler's Singularity.
(Correct and true? Dunno. But let's say this shows Tipler is massively overreaching...)
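Put loosely in symbols (my own formalization of the sketch above, not anything from Tipler):

$$
V \subseteq X,\quad |V| = Y < \infty,\quad \text{visit rate} \ge 1 \text{ new valued state per unit subjective time} \;\Longrightarrow\; V \text{ exhausted by } t = Y.
$$

After $t = Y$, continued existence adds no new valued experience, so declining preservation is at least a rationally available option - which is all the counterexample needs.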
What a terrible paper altogether. This was a peer-reviewed journal, right?
Try to convert your non-rationalist friends.
I don't think that's a good idea, to be honest. Conversion of other individuals is one of the more difficult things you can do as an aspiring rationalist. Let's face it, a lot of irrational arguments have very very strong intuitive appeal. Unless you are very familiar with the standard arguments for rationalism, you're more likely to simply alienate those around you and further isolate yourself by attempting to convert your non-rationalist friends.