You can peek into everyone's heads, gather all the evidence, remove double-counting, and perform a joint update. That's essentially what Aumann agreement does - it doesn't vote on beliefs, but instead tries to reach an end state that has been updated on all the evidence behind those beliefs.
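To make that concrete, here is a toy sketch (my own illustrative numbers, not part of Aumann's result): two agents share a prior over a binary hypothesis and each observes one independent piece of private evidence. The joint update multiplies each piece of evidence in exactly once, so the shared prior isn't double-counted the way naively averaging posteriors would be.

```python
import numpy as np

# Toy illustration (made-up numbers): two agents share a prior over a binary
# hypothesis H, and each holds one independent piece of private evidence.
prior = np.array([0.5, 0.5])      # P(H), P(~H)
lik_a = np.array([0.8, 0.3])      # P(A's evidence | H), P(A's evidence | ~H)
lik_b = np.array([0.6, 0.2])      # P(B's evidence | H), P(B's evidence | ~H)

def normalize(p):
    return p / p.sum()

post_a = normalize(prior * lik_a)           # A's individual posterior
post_b = normalize(prior * lik_b)           # B's individual posterior
joint  = normalize(prior * lik_a * lik_b)   # pool the evidence; prior counted once

print("A:", post_a, "B:", post_b, "joint:", joint)
print("naive average:", (post_a + post_b) / 2)   # not the same as the joint update
```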
Right, this is where strong Bayesianism is required. You have to assume, for example, that everyone agrees on the set of hypotheses under consideration and the exact models to be used. This is not just an abstract plan for slicing the universe into manageable...
Aumann agreement isn't an answer here, unless you assume strong Bayesianism, which I would advise against.
I have to say I don't see why a linear combination of utility functions should be considered ideal. There are some pretty classic arguments against it, such as Rawls' maximin principle, and more consequentialist arguments against allowing inequality in practice.
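As a toy illustration (utilities invented for this comment, not drawn from the post), a linear combination and Rawls' maximin can rank the same pair of allocations in opposite orders:

```python
# Invented utilities for two people under two allocations.
allocations = {
    "unequal": [10.0, 1.0],   # higher total, one person much worse off
    "equal":   [5.0, 5.0],    # lower total, no one badly off
}

def linear_sum(utils):   # the "linear combination" aggregation
    return sum(utils)

def maximin(utils):      # Rawls: judge an allocation by its worst-off member
    return min(utils)

for name, utils in allocations.items():
    print(f"{name}: sum={linear_sum(utils)}, maximin={maximin(utils)}")
# The linear combination prefers "unequal" (11 > 10); maximin prefers "equal" (5 > 1).
```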
If you liked this post, you will love Amartya Sen's Collective Choice and Social Welfare. Originally published in 1970 and expanded in 2017, it is a thorough development of the many paradoxes in collective choice algorithms (voting schemes, ways to aggregate individual utility, and so on).
My sense is the AI alignment community has not taken these sorts of results seriously. Preference aggregation is non-trivial, so "aligning" an AI to individual preferences means something very different from "aligning" an AI to societal preference...
So I was very surprised when I learned that a single general method in deep learning (training an artificial neural network on massive amounts of data using gradient descent)[2] led to performance comparable or superior to humans’ in tasks as disparate as image classification, speech synthesis, and playing Go. I found superhuman Go performance particularly surprising—intuitive judgments of Go boards encode distillations of high-level strategic reasoning, and are highly sensitive to small changes in input.
I think it may be important to recogni...
And what about MuZero, which beats AlphaZero and does not use symbolic search over a simulator of board states, but rather internal search over learned hidden states and value estimates?
Neural networks, on the other hand, are famously bad at symbolic reasoning tasks, which may ultimately have some basis in the fact that probability does not extend logic.
Considering all the progress on graph and relational networks, inference, theorem proving, and whatnot, this statement is giving a lot of hostages to fortune.
We could look at donors' public materials, for example evaluation requirements listed in grant applications. We could examine the programs of conferences or workshops on philanthropy and see how often this topic is discussed. We could investigate the reports and research literature on this topic. But I don't know how to define "enough" concern.
While Bayesian statistics is obviously a useful method, I am dissatisfied with the way "Bayesianism" has become a stand-in for rationality in certain communities. There are well-developed, deep objections to this. Some of my favorite references on the topic:
I am happy that you mention Gelman's book (I am studying it right now). I think many "naive strong Bayesians" would benefit from a thoughtful study of the BDA book (there are lots of worked demos and exercises available for it), and maybe from some practical application of Bayesian modelling to real-world statistical problems (a minimal example is sketched below). The "Bayesian way of life" of "updating my priors" always sounds a bit too easy compared to doing genuine statistical inference.
For example, a couple of puzzles I am still ...
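To illustrate the practical-modelling point, here is a minimal sketch (invented data; a conjugate Beta-Binomial model for an unknown rate), just to contrast the "update your priors" slogan with an actual inference:

```python
from scipy import stats

# Conjugate Beta-Binomial model for an unknown rate, with made-up data.
prior_a, prior_b = 2, 2            # Beta(2, 2) prior: weakly informative
successes, trials = 9, 60          # observed data (invented for the example)

post = stats.beta(prior_a + successes, prior_b + trials - successes)

print("posterior mean:", post.mean())
print("95% credible interval:", post.interval(0.95))
# "Updating my priors" here means committing to a concrete prior and likelihood,
# and then checking whether the resulting posterior actually fits the data.
```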
My sense is that donors do care about evaluation, on the whole. It's not just GiveWell / Open Philanthropy / EA who think about this :P
See for example https://www.rockpa.org/guide/assessing-impact/
Well said. And this middle ground is exactly what I am worried about losing as companies add more AI to their operations -- human managers can and do make many subtle choices that trade profit against other values, but naive algorithmic profit maximization will not. This is why my research is on metrics that may help align commercial AI to pro-social outcomes.
Because central planning is so out of fashion, we have mostly forgotten how to do it well. Yet there are little-known historical methods that could be applicable in the current crisis, such as input-output analysis, as Steve Keen writes:
One key tool that has fallen out of use in economics is input-output analysis. First developed by the non-orthodox economist Wassily Leontief (Leontief 1949; Leontief 1974), it used matrix mathematics to quantify the dependence of the production of one commodity on inputs of other commodities. Given its superficial similari...
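For readers unfamiliar with the method, here is a minimal sketch of the Leontief quantity model in Python (the technical coefficients and final demand are invented for illustration, not drawn from Keen or Leontief):

```python
import numpy as np

# A[i, j] = units of commodity i needed to produce one unit of commodity j.
A = np.array([
    [0.1, 0.3],    # inputs of commodity 1 per unit of each sector's output
    [0.2, 0.1],    # inputs of commodity 2 per unit of each sector's output
])
d = np.array([100.0, 50.0])   # final demand for each commodity

# Gross output x must cover intermediate use plus final demand: x = A @ x + d,
# so x = (I - A)^{-1} d (the Leontief inverse applied to final demand).
x = np.linalg.solve(np.eye(2) - A, d)
print("gross output required per sector:", x)
```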
I agree that revenue is a key part of the organizational feedback loop that non-profits lack, and that this is often a problem. However, for-profits have a tendency to reorient themselves toward whatever generates revenue. To the extent that we care about what an organization does for society, we should care about organizational drift caused by chasing revenue. I believe it's an open question whether the lack of revenue feedback in non-profits or the organizational drift caused by revenue alignment in for-profits is currently the bigger problem for society.
I also think you may be underestimat...
Hi Gordon. Thanks for reading the post. I agree completely that the right metrics are nowhere near sufficient for aligned AI; further, I'd say that “right” and “aligned” have very complex meanings here.
What I am trying to do with this post is shed some light on one key piece of the puzzle: the actual practice of incorporating metrics into real systems. I believe this is necessary, but I don’t mean to suggest that it is sufficient or unproblematic. As I wrote in the post, “this sort...
a) "Everyone does Bayesian updating according to the same hypothesis set, model, and measurement methods" strikes me as an extremely strong assumption, especially since we do not have strong theory that tells us the "right" way to select these hypothesis sets, models, an... (read more)