Got it. Thank you!
What happens if it considers an action only when it both fails to find PA + "A()=x" inconsistent and finds a proof that PA + "A()=x" proves U()=x? That is, run the inconsistency check first, and only consider/compare the action if the inconsistency check fails.
I had an idea, and was wondering what its fatal flaw was. For UDT, what happens if, instead of proving theorems of the form "action_x --> utility_x", it proves theorems of the form "PA + action_x |- utility_x"?
At first glance, this seems to remove the problem of spurious counterfactuals implying any utility value, but there's probably something big I'm missing.
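For concreteness, here is a minimal sketch of the procedure I have in mind, with the inconsistency check from the earlier comment run first. `provable(axioms, statement, bound)` stands for a hypothetical bounded proof-search oracle, and the action and utility lists are made up for illustration; none of this is a real library.

```python
# Sketch only: `provable` is a hypothetical bounded proof-search oracle.
ACTIONS = ["a1", "a2"]
UTILITY_VALUES = [10, 1, 0]  # candidate utility values, highest first
BOUND = 10**6                # proof-length bound

def choose_action(provable):
    best_action, best_utility = None, float("-inf")
    for x in ACTIONS:
        # Step 1: inconsistency check. If PA + "A()=x" proves a
        # contradiction, skip the action rather than trust anything
        # it "derives".
        if provable(("PA", f"A()={x}"), "False", BOUND):
            continue
        # Step 2: search for a theorem of the form  PA + "A()=x" |- "U()=u"
        # (rather than the usual material implication "A()=x --> U()=u").
        for u in UTILITY_VALUES:
            if provable(("PA", f"A()={x}"), f"U()={u}", BOUND):
                if u > best_utility:
                    best_action, best_utility = x, u
                break  # highest provable utility for this action found
    return best_action
```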
This is actually isomorphic to the absent-minded driver problem. If you precommit to going straight, there is a 50/50 chance of being at either one of the two indistinguishable points on the road. If you precommit to turning left, there is a nearly 100% chance of being at the first point on the road (since you wouldn't continue on to the second point with that strategy). It seems like the probability can only be determined after a strategy has been locked into place.
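For concreteness, a small sketch using the standard payoffs from the absent-minded driver setup (exit at the first point: 0; exit at the second: 4; continue past both: 1). The payoffs are the textbook ones, not anything from this thread; the point is just that the occupancy probabilities fall out of the strategy, not the other way around.

```python
def occupancy_and_value(p_continue: float):
    """p_continue: precommitted probability of going straight at each point."""
    # The driver always reaches the first point; the second point is
    # reached only if they continued at the first.
    visits_first = 1.0
    visits_second = p_continue
    total = visits_first + visits_second
    p_at_first = visits_first / total    # chance a decision moment is at point 1
    p_at_second = visits_second / total  # chance it is at point 2
    # Expected payoff of the whole trip under this strategy (payoffs 0 / 4 / 1):
    value = p_continue * (1 - p_continue) * 4 + p_continue**2 * 1
    return p_at_first, p_at_second, value

print(occupancy_and_value(1.0))  # always straight: 50/50 at the two points
print(occupancy_and_value(0.0))  # always turn: 100% at the first point
print(occupancy_and_value(2/3))  # planning-optimal mixed strategy, value 4/3
```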
Question for the AI people in the crowd: to apply Bayes' Theorem, you need both a prior and a conditional likelihood. I can see how to estimate the prior of something, but for real-life cases, how could accurate estimates of P(A|X) be obtained?
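To make the question concrete: the only general-purpose answer I know of is the empirical-frequency one, i.e. estimate the conditional by counting co-occurrences in data and then plug into Bayes' theorem. The toy dataset below is made up; whether this ever yields *accurate* estimates in real-life cases is exactly what I'm asking.

```python
# Toy sketch: estimate P(A=a | X=x) by counting, then apply Bayes' theorem.
# The dataset and variable names are invented for illustration.

def estimate_conditional(data, a, x):
    """P(A=a | X=x) ~= count(A=a and X=x) / count(X=x)."""
    with_x = [row for row in data if row["X"] == x]
    if not with_x:
        return None  # no relevant observations; the estimate is undefined
    return sum(1 for row in with_x if row["A"] == a) / len(with_x)

def bayes(p_a, p_x_given_a, p_x):
    """P(A|X) = P(X|A) * P(A) / P(X)."""
    return p_x_given_a * p_a / p_x

data = [{"A": 1, "X": 1}, {"A": 1, "X": 0}, {"A": 0, "X": 1}, {"A": 0, "X": 1}]
print(estimate_conditional(data, a=1, x=1))  # 1/3 -- with this little data, hopelessly noisy
```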
Also, we talk about world-models a lot here, but what exactly IS a world-model?
I'd call it a net positive. On the axis from "accept all interviews, and wind up in some spectacularly abysmal pieces of journalism" to "only allow journalism that you've reviewed and edited" (the quantity vs. quality tradeoff), I suspect the best place to be is the one where writers who already know what they're going to say are filtered out, and writers who make an actual effort to understand and summarize your position (even if somewhat incompetently) are engaged.
I don't think the saying "any publicity is good publicity" is true, but "shoddy publicity pointing in the right direction" might be.
I wonder how feasible it is to figure out journalist quality by reading past articles... Maybe ask people who have been interviewed by the person in the past how it went?
I think there's an important distinction to be made between the different levels of earning to give. Really, there's a spectrum between "donate 5 percent of income" at one end and "devote your existence to resolving the issue" at the other. For humans trying to do the best they can, trying to scale up too fast can lead to severe burnout, so caring for yourself and having a good, low-stress life guards against that. It is better to donate a thousand dollars a month than three thousand a month with an 80% chance of burnout: if burnout means the donations stop, the second option is only worth about 0.2 × $3000 = $600 a month in expectation. Slowly build up to higher points on the spectrum that don't give up quality of life.
Remember, the goal is to do that which works, not to win an "I'm way more hardcore about charity than you!" contest. If that which works leads to sacrifice and you can handle it without burnout risk, then sacrifice. If self-sacrifice doesn't work for solving the issue, then don't do it. And yes, aligning yourself with the people working on the issue and supplying them with resources is pretty much exactly what is required in many cases. Earning to give comes from the fact that the "supplying them with resources" step works much better with more resources, and working at a high-paying job is a good way to get resources.
And finally, about not understanding why someone would completely change their lifestyle to help as many people as possible: lifestyle changes tend to look really intimidating from the outside, but not from the inside. In college, for example, saying "I'm taking >20 credits" makes people mightily impressed and worried about your inevitable lack of a social life, but once you actually start doing it, it doesn't feel extraordinary or hard from the inside. Dropping annual expenses from 60k to 15k is another thing that sounds intimidating, but from the inside it isn't that difficult, and quality of life doesn't change significantly.
So that's one part of it: it doesn't take as much of a sacrifice as you think. The second part is that if there is anything at all that you value more highly than the thing you would otherwise spend the money on, moving the money to the more highly valued thing is inevitable once you stop compartmentalizing. I value ten lives more highly than purchasing a shiny new car, and I suspect that most people would agree with this. It's just a matter of acting on preexisting values and desires.
The reason to make lots of money to give it away is elaborated on here, in the paragraph about the lawyer who wants to clean up the beach.
Summary version: more charities are funding-limited than volunteer-limited, and if you are making a sufficient amount of money, working one extra hour and donating the proceeds gets more done (saves more people) than spending that hour volunteering. The important part is to actually save people.
Saving people is far more important than giving consistently (if the best way to save people is to give each month, I want to give each month; if the best way is to donate large chunks infrequently, I want to donate large chunks infrequently), far more important than having a good attitude towards giving (if having a good attitude towards giving makes me donate more, I want a good attitude; if having a selfish attitude makes me donate more, I want a selfish attitude), and far more important than spiritually developing in the process (I trust you can complete the pattern). I'm not saying these things are bad; it's just that they are subgoals of the thing you are actually trying to accomplish, which is doing the most good. Making a great deal of money to give away, and making sure you don't backslide into selfishness, are things you do to ensure that the most people can be saved. Regular giving is secondary in importance.
The goal is not fitting conventional patterns of giving; the goal is to help as many people as possible. To try to get a high score in the LIVES IMPROVED statistics column of the game of life. If something helps in this quest, do it; if it doesn't, stop doing it.
"But the general result is that one can start with an AI with utility/probability estimate pair (u,P) and map it to an AI with pair (u',P) which behaves similarly to (u,P')"
Is this at all related to the "loudness" metric mentioned in this paper? https://intelligence.org/files/LoudnessPriors.pdf The two seem related, in that probability and utility blend together into a generalized "importance" or "loudness" parameter.
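My gloss on why maps like that can exist (this is the generic construction, not necessarily the one in the quote or the paper): only the product of probability and utility enters expected-utility comparisons, so weight can be shifted freely between the two.

```latex
% Pick any f(w) > 0 over worlds w and rescale:
\[
P'(w) = \frac{P(w)\,f(w)}{Z}, \qquad
u'(a,w) = \frac{u(a,w)}{f(w)}, \qquad
Z = \sum_w P(w)\,f(w).
\]
% Then for every action a,
\[
EU'(a) = \sum_w P'(w)\,u'(a,w)
       = \frac{1}{Z}\sum_w P(w)\,u(a,w)
       = \frac{EU(a)}{Z},
\]
% so the rescaled agent ranks actions exactly as the original does: only the
% product P(w) * u(a,w), the "loudness" of each world, affects behavior.
```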