I think there are quite a lot of worlds where understanding the black box better is bad.
If alignment is really, really hard, we should expect to fail, in which case the more obvious it is that we've failed, the better, because the benefits of safety aren't fully externalised. This probably doesn't hold in worlds where we get from not-very-good AI to AGI very rapidly.
Potentially counterintuitive things happen when information gets more public. In this paper https://www.fhi.ox.ac.uk/wp-content/uploads/Racing-to-the-precipice-a-model-of-artificial-intelligen...
He convinced Keynes that Bayesianism was correct!
I prefer hypocrisy to cruelty.
More generally, I think this just misses the point of the drowning child argument. The argument is not that you have this set of preferences and therefore you save the child; the argument is that luxury items are not of equal moral worth to the life of a child. This can be made consistent with taking off your suit first if you think the delay has a sufficiently small probability of leading to the death of the child and you think the death of a child and the expensive suit are comparable.
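To spell out that consistency condition (a sketch using symbols I'm introducing for illustration, not from the original argument): taking off the suit first is consistent with the drowning child argument whenever

$$p \cdot V_{\text{child}} < V_{\text{suit}},$$

where $p$ is the added probability that the delay kills the child, and $V_{\text{child}}$ and $V_{\text{suit}}$ are the values at stake placed on a common scale. For example, if you value the suit at one millionth of a child's life, the delay is only justified when $p < 10^{-6}$.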
The claim that most EAs want to look like they're contributing to AI safety rather than having a deep commitment is just deeply at odds with my personal experience. The EAs I meet are, in general, the people most committed to solving problems that I've ever met. I might try to come up with a more systematic argument for this, but my gut is that that claim is crazy.
This may be random or something, but in my experience there is a higher probability that rationality people aren't committed to solving problems and instead want to use rationality to improve their own lives. But the outside view is that this shouldn't be surprising, given that the core of the rationality movement is trying to have true beliefs.
Great article, small (but maybe significant) nitpick. I don't think there's great evidence that innovation-by-bureaucracy is bad, and I actually think it's pretty good. The sheer quantity of innovation produced by the two world wars and the space program is spectacular, as is the record of DARPA post-WW2. Even the Soviet Union could do good innovation by bureaucracy. At the company level, it's true that lots of big innovations come from small new companies, but at the same time Bell Labs, Intel, and Sony were monsters. I actually think that the retreat of government is a big part of the problem.
Ok, so we have different epistemic systems; I obviously put a much higher value on academic social science, where I think the methodology is good, than you do.
It's a plausible claim that people with lower impulse control get less enjoyment out of experiences, but it's not one that I've seen made before, and my intuition says the effect size is probably small. Fundamentally: show me the evidence, although presumably it can't be from academic social science.
Finally, it's important to distinguish between empirical and moral claims. So, I think prisons are bad but in p...
I think it's a good insight, and one I basically agree with, that children, before they're 16 or so (in the UK at least), are treated in a way which robs them of their sovereignty, and, all else equal, this is bad for both deontic and consequentialist reasons. I basically agree that the vast majority of content taught to 11-16 year olds neither increases their productivity in the workforce, nor do they enjoy studying it, nor does it make them better people. It is also true that when people are homeschooled or unschooled they do fine (I've read like 4 papers on outcome...
Potentially a way in which this heuristic makes sense in the real world is if the utility of $0 were negative. If I were a banana seller, then if I sold nothing, when I got my next shipment of bananas I wouldn't have enough space in my warehouse and I'd have to throw out some bananas. In this case, I have to take out an insurance policy against $0 in all cases except certainty. This will hold even if insurance costs are proportional to the probability of $0, provided there are fixed transaction costs to buying insurance.
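A minimal sketch of that arithmetic (the spoilage loss, premium structure, and fixed transaction cost are all made-up numbers for illustration, not from the comment):

```python
# Why certainty gets a discrete bonus when any chance of ending at $0
# forces you to buy insurance that carries a fixed transaction cost.

FIXED_TRANSACTION_COST = 5   # assumed flat cost of arranging any insurance
SPOILAGE_LOSS = 50           # assumed cost of thrown-out bananas if you end at $0

def expected_value(gamble):
    """gamble: list of (probability, payoff) pairs."""
    return sum(p * x for p, x in gamble)

def value_after_insurance(gamble):
    # Probability of the bad $0 outcome that must be insured against.
    p_zero = sum(p for p, x in gamble if x == 0)
    if p_zero == 0:
        return expected_value(gamble)  # certain to avoid $0: no insurance needed
    # Premium proportional to the probability of $0, plus the fixed cost.
    premium = p_zero * SPOILAGE_LOSS
    return expected_value(gamble) - premium - FIXED_TRANSACTION_COST

certain = [(1.0, 90)]                # $90 for sure
gamble = [(0.95, 100), (0.05, 0)]    # raw expected value = $95, beats $90

print(value_after_insurance(certain))  # 90.0
print(value_after_insurance(gamble))   # 95.0 - 2.5 - 5 = 87.5, certainty wins
```

So even though the insurance premium scales with the probability of $0, the fixed cost means any gamble has to beat the certain option by a discrete margin, which reproduces the certainty-preferring heuristic.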
The sufficiency stagnation point is a good one, especially given that it suggests that the people becoming religious on the margin are likely to be the individuals best placed, out of the population not currently committed to strong social institutions, to start better institutions than religions.
Potentially a crux is that the ideas which really broad social institutions can be built around may mostly require certain types of really strong emotions, like tribalism and faith; the crux being whether 'mostly' means 90%, 99%, or 99.999%.
I think it's unlikely that there'd currently be a crowding-out effect on the margin (although I expect you would get one at some point, if you're attracting progressively less sociable people), as you say because it builds know-how, but also because it builds social capital and maybe breaks the negative feedback loop of loneliness.
My second claim is that religion is much, much better as a community-organising force than any institution other than unions. I think this is because it can attract a very high percentage of a population, it persists through generations, there aren't the same types of barriers you get with groups organised around a specific interest, and it doesn't skew middle class (often, at least).
I think more people being religious is good on the margin, and I basically don't think that religion is a significant barrier to the spread and advance of good ideas and practices.
I think your mistake is that your crux doesn't include the broader social benefits that religion brings. Religion is maybe the only, or at least one of very few, forces that can bring large numbers of different people from within a community together on a regular basis, and in Europe, and probably the US, this has enormous benefits. What research on happiness and well-being tells us is that,...
Yeah, that sounds very plausible. It also seems plausible that we get regulation about transparency, and in all the cases where the benefit from interpretability has something to do with people interacting, you get the results being released at least semi-publicly. Industrial espionage also seems a worry. The USSR was hugely successful in infiltrating the Manhattan Project and continued to successfully steal US tech throughout the Cold War.
Also worth noting that more information about how good one's own model is also increases AI risk in the paper's model, although they model it as a discrete shift from no information to full information, so it's unclear how well that model applies.