While merely antibacterial, Nano Silver Fluoride looks promising (metallic silver applied to the teeth once a year to prevent cavities).
Yudkowsky has written about The Ultimatum Game. It has been referenced here [1] [2] as well.
When somebody offers you a 7:5 split, instead of the 6:6 split that would be fair, you should accept their offer with slightly less than 6/7 probability. Their expected value from offering you 7:5, in this case, is 7 * slightly less than 6/7, or slightly less than 6.
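To make the arithmetic concrete, here is a minimal Python sketch of that acceptance rule (my own illustration, not from the referenced posts; the function and constant names are mine): accept an unfair split with probability just under fair_share / proposer_take, so the proposer's expected take stays just below the fair amount.

```python
FAIR_SHARE = 6    # each side's take under the fair 6:6 split of a 12-point pot
EPSILON = 0.01    # the "slightly less than" margin

def acceptance_probability(proposer_take: float) -> float:
    """Probability of accepting an offer in which the proposer keeps `proposer_take`.

    Fair or generous offers are always accepted; greedy offers are accepted
    with probability just under FAIR_SHARE / proposer_take.
    """
    if proposer_take <= FAIR_SHARE:
        return 1.0
    return FAIR_SHARE / proposer_take - EPSILON

def proposer_expected_value(proposer_take: float) -> float:
    """Expected payoff to the proposer, given the acceptance rule above."""
    return proposer_take * acceptance_probability(proposer_take)

print(acceptance_probability(7))     # ~0.847 (just under 6/7)
print(proposer_expected_value(7))    # ~5.93  (just under 6)
print(proposer_expected_value(6))    # 6.0    (the fair offer is always accepted)
```

The epsilon is what does the work: any take above 6 yields strictly less than 6 in expectation, so the proposer does no better than offering the fair split.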
Maybe add posts in /tag/ai-evaluations to /robots.txt.
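If the idea is to keep crawlers away from those pages, a minimal robots.txt sketch might look like the following (the tag path is taken from the suggestion above; note that a Disallow line only covers URLs under that path prefix, so post URLs living elsewhere on the site would need their own entries):

```
User-agent: *
Disallow: /tag/ai-evaluations
```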
Sure, but it does not preclude it. Moreover, if the costs of the actions are not borne by the altruist (e.g. by defrauding customers or by extortion), I would not consider it altruism.
In this sense, altruism is a categorization tag placed on actions.
I do see how you might add a second, deontological definition ('a belief system held by altruists'), but I wouldn't. From the post, "Humane" or "Inner Goodness" seem more apt for exploring these ideas.
I do not see the contradiction. Could you elaborate?
Broadly, he predicts AGI will be animalistic (a "learning-disabled toddler") rather than a consequentialist laser beam or a simulator.
This concept is introduced in Book 1 as the solution to the Ultimatum Game, and describes fairness as the Shapley value.
_
Once you've arrived at a notion of a 'fair price' in some one-time trading situation where the seller sets a price and the buyer decides whether to accept, the seller doesn't have an incentive to say the fair price is higher than that; the buyer will accept with a lower probability that cancels out some of the seller's expected gains from trade. [1]
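To make the Shapley-value framing above concrete, here is a small brute-force sketch (my own illustration; the 12-point pot and player names are assumptions for concreteness): in the two-player game where either player alone gets nothing and both together get 12, the Shapley value gives each player 6, i.e. the 6:6 split treated as fair.

```python
from itertools import permutations

def shapley_values(players, value):
    """Brute-force Shapley values: average each player's marginal
    contribution to `value` over every ordering of the players."""
    totals = {p: 0.0 for p in players}
    orderings = list(permutations(players))
    for order in orderings:
        coalition = []
        for p in order:
            before = value(frozenset(coalition))
            coalition.append(p)
            totals[p] += value(frozenset(coalition)) - before
    return {p: totals[p] / len(orderings) for p in players}

# Two-player "split 12 points" game: either player alone gets 0,
# the full coalition gets 12.
def pot(coalition):
    return 12 if len(coalition) == 2 else 0

print(shapley_values(["A", "B"], pot))   # {'A': 6.0, 'B': 6.0}
```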
Superintelligence FAQ [1] as well.