To translate Graham's statement back to the FAI problem: In Eliezer's alignment talk, he discusses the value of solving a relaxed-constraint version of the FAI problem by granting oneself unlimited computing power. Well, in the same way, the AGI problem can be seen as a relaxed-constraint version of the FAI problem. One could argue that it's a waste of time to try to make a secure version of AGI Approach X when we don't even know whether it's possible to build an AGI using AGI Approach X. (I don't agree with this view, but I don't think it's entirely unreasonable.)
Isn't the point exactly that if you can't solve the whole problem of (AGI + Alignment) then it would be better not even to try solving the relaxed problem (AGI)?
Doesn't this basically move the reference class tennis to the meta-level?
"Oh, in general I'm terrible at planning, but not in cases involving X, Y and Z!"
It seems reasonable that this is harder to do at the meta-level, but do any of the other points you mention actually "solve" this problem?
Why do you expect prediction markets to be more useful for this than evidence-based methods which take into account interactions between the practitioner's characteristics and whatever method they are using?
Is your point that there should be no formal policy in the first place?
Once you do have such a formal policy, even the best judgment doesn't necessarily help you if you can't circumvent the constraints the policy sets.
It seems to me that, to a large extent, impressions can be framed as vague predictive statements with no explicit probabilistic or causal content, influenced much more by external factors than by reasoned beliefs.
"He seemed nice" corresponds to "X% chance that if I met him more often, we would continue getting along well".
"That sounds crazy" corresponds to "I can't really tell you why but I find that rather improbable".
If I am right about this, the first and main step would be to learn how to turn impressions into explicit probabilistic statements that are easy to test. Keeping track of their status should then be no different from tracking anything else.
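As a minimal sketch of what that tracking could look like (purely illustrative; the `Prediction` record and the Brier-score check are my own choices, not anything prescribed above):

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Prediction:
    statement: str                  # the impression, restated as a testable claim
    probability: float              # the explicit probability assigned to it
    outcome: Optional[bool] = None  # filled in once the claim resolves

def brier_score(predictions: List[Prediction]) -> float:
    """Mean squared gap between stated probabilities and outcomes (lower is better)."""
    resolved = [p for p in predictions if p.outcome is not None]
    return sum((p.probability - float(p.outcome)) ** 2 for p in resolved) / len(resolved)

# "He seemed nice" turned into an explicit, checkable statement:
log = [Prediction("We still get along well after meeting five more times", 0.8)]
log[0].outcome = True    # resolve it later
print(brier_score(log))  # 0.04
```

Once impressions live in this form, checking your calibration is just a matter of resolving them and scoring the log.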
But on the other hand, consequentialism is particularly prone to value misalignment. Systematizing human preferences or human happiness requires a metric, and in introducing a metric, it risks optimizing the metric itself rather than the actual preferences and happiness.
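A toy illustration of that failure mode (my own example, with made-up functions): suppose what we actually care about has diminishing and eventually negative returns, while the metric chosen to stand in for it simply rewards "more". Optimizing the metric then actively lowers the thing it was meant to measure:

```python
def true_preference(x: float) -> float:
    """What we actually care about: helpful at first, harmful past a point."""
    return x - 0.1 * x ** 2

def metric(x: float) -> float:
    """The proxy we can measure and optimize: simply 'more x'."""
    return x

metric_optimum = max(range(0, 101), key=metric)               # 100
preference_optimum = max(range(0, 101), key=true_preference)  # 5

print(true_preference(metric_optimum))      # -900.0: metric maximized, preferences trashed
print(true_preference(preference_optimum))  # 2.5: the outcome we actually wanted
```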
Yes, in consequentialism you try to figure out what values you should have, and your attempts at doing better might lead you down the Moral Landscape rather than up toward a local maximum.
But what are the alternatives? In deontology you try to follow a bunch of rules in the hope that they will keep you where you are on the landscape, which amounts to halting progress. Is this really preferable?
So it seems important to have an ability to step back and ask, "am I morally insane?", commensurate with one's degree of confidence in the metric and method of consequentialism.
It seems to me that any moral agent should have this ability.
Admittedly, I do not have much of an idea about Infinite Ethics, but it appeared to me that the problem was largely about how to deal with an infinite number of agents on which you can define no measure or ordering that would let you discount utilities.
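For concreteness (my own restatement, not from the original discussion): if the agents come with a natural ordering $1, 2, 3, \dots$ and bounded utilities $|u_i| \le M$, one can discount and obtain a well-defined total, $U = \sum_{i=1}^{\infty} \gamma^i u_i$ with $0 < \gamma < 1$, since $|U| \le M\gamma/(1-\gamma)$. Without a measure or canonical ordering on the agents there is no privileged choice of the index $i$, so the sum, and with it any comparison between outcomes, is not well defined.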
Right now, I don't see how this approach helps with that?