
Yeah, that sounds very plausible. It also seems plausible that we get regulation about transparency, and in all the cases where the benefit from interpretability has something to do with how people interact, the results end up being released at least semi-publicly. Industrial espionage also seems a worry: the USSR was hugely successful in infiltrating the Manhattan Project and continued to successfully steal US tech throughout the Cold War.

Also worth noting that more information about how good one's own model is also increases AI risk in the paper's model, although they model it as a discrete shift from no information to full information, so it's unclear how well that result carries over.

I think there are quite a lot of worlds where understanding the black box better is bad.

If alignment is really, really hard, we should expect to fail, in which case the more obvious it is that we've failed the better, because the benefits of safety aren't fully externalised. This probably doesn't hold in worlds where we get from not-very-good AI to AGI very rapidly.

Potentially counterintuitive things happen when information gets more public. In this paper, https://www.fhi.ox.ac.uk/wp-content/uploads/Racing-to-the-precipice-a-model-of-artificial-intelligence-development.pdf, increasing information has weird non-linear effects on the amount spent on safety. One of the pieces of intuition behind that is that having more information about your competitors can cause you to either speed up or slow down, depending on where they in fact are in relation to you.
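To make that intuition concrete, here's a toy Monte Carlo sketch. This is my own construction, not the paper's actual model: a team that knows its rival's capability buys as much safety as it can afford while still winning, so leaders slow down and laggards race flat out. The cost parameter MU, the capability distribution, and the "uninformed" compromise strategy are all illustrative assumptions.

```python
# Toy sketch (my construction, not the paper's model): how knowing a
# rival's capability changes how much safety the eventual winner buys.
import random

MU = 0.5          # performance cost of one unit of safety (assumed)
N_TRIALS = 100_000

def informed_safety(own_c, rival_c):
    """Max safety that still wins, treating the rival as racing flat out."""
    slack = (own_c - rival_c) / MU   # how much safety we can 'spend'
    return max(0.0, min(1.0, slack))

def winner_safety(regime):
    a, b = random.random(), random.random()   # capabilities ~ U(0, 1)
    if regime == "informed":
        sa, sb = informed_safety(a, b), informed_safety(b, a)
    else:
        sa = sb = 0.5   # no information: assume a fixed compromise level
    # Winner is the team with the higher safety-adjusted capability;
    # risk in the paper's spirit scales with (1 - winner's safety).
    return sa if a - MU * sa >= b - MU * sb else sb

for regime in ("informed", "uninformed"):
    avg = sum(winner_safety(regime) for _ in range(N_TRIALS)) / N_TRIALS
    print(regime, "avg winner safety:", round(avg, 3))
```

In this particular toy setup, information happens to raise the winner's average safety; in the paper's richer model the direction of the effect depends on where the teams sit relative to each other, which is the non-linearity mentioned above.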

Also, it seems like risk preferences are important here. If people are risk averse, then having less information about the expected outcomes of their models makes them less likely to deploy them, all else equal.
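A minimal sketch of that point, assuming a concave (risk-averse) utility and treating "less information" as more variance around the same expected payoff; the log utility and all the numbers are illustrative:

```python
# Jensen's inequality in miniature: for a concave utility, adding
# mean-preserving noise to the payoff of deploying lowers expected
# utility, so a risk-averse lab with less information is less keen.
import math
import random

def utility(x):
    return math.log(1 + x)   # concave => risk-averse (assumed form)

def expected_utility(mean_payoff, noise_sd, n=100_000):
    total = 0.0
    for _ in range(n):
        # Clamp at 0 to keep log defined; negligible here given the mean.
        payoff = max(0.0, random.gauss(mean_payoff, noise_sd))
        total += utility(payoff)
    return total / n

# Same mean payoff, increasing uncertainty about the outcome.
for sd in (0.0, 1.0, 3.0):
    print("sd =", sd, "E[u] ~", round(expected_utility(10.0, sd), 4))
```

Expected utility falls as the noise grows even though the mean payoff is unchanged, so the certainty-equivalent value of deploying shrinks with uncertainty.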

I think I'm most excited about 15, 16 and 6b, because of a general worldview that 1) alignment is likely to be really hard and it seems like we'll need assistance from the best aligned systems to solve the problem, and 2) ~all the risk comes from RL agents. Getting really, really good microscope AI looks really good from this perspective, and potentially we need a co-ordinated movement towards microscope AI and away from RL models, in which case building a really compelling case for why AGI is dangerous looks really important.

He convinced Keynes that Bayesianism was correct! 

I prefer hypocrisy to cruelty. 

More generally, I think this just misses the point of the drowning child argument. The argument is not that you have this set of preferences and therefore you save the child; the argument is that luxury items are not of equal moral worth to the life of a child. This can be made consistent with taking off your suit first if you think the delay has a sufficiently small probability of leading to the death of the child and you think the death of a child and the expensive suit are comparable.
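Spelling out that consistency condition as a toy expected-value comparison (my framing, with purely illustrative numbers): taking the suit off first is defensible exactly when the extra risk imposed by the delay, weighted by the value placed on the child's life, is smaller than the value of the suit.

```python
# Toy consistency check for the drowning child case; every number
# below is an illustrative assumption, not a claim about real values.
V_LIFE = 10_000_000       # value placed on the child's life ($, assumed)
V_SUIT = 3_000            # replacement cost of the suit ($, assumed)
extra_death_prob = 1e-4   # added risk from a few seconds' delay (assumed)

expected_cost_of_delay = extra_death_prob * V_LIFE   # $1,000 here
print("take suit off first?", expected_cost_of_delay < V_SUIT)  # True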

The claim that most EAs want to look like they're contributing to AI safety rather than having a deep commitment is just deeply at odds with my personal experience. The EAs I meet are in general the most committed people to solving problems that I've ever met. I might try to come up with a more systematic argument for this, but my gut is that that claim is crazy.

This may be random or something, but in my experience there is a higher probability that rationality people aren't committed to solving problems and want to use rationality to improve their lives personally. But the outside view is that this shouldn't be surprising, given that the core of the rationality movement is trying to have true beliefs.

Great article; small (but maybe significant) nitpick. I don't think there's great evidence that innovation-by-bureaucracy is bad, and I actually think it's pretty good. The sheer quantity of innovation produced by the two world wars and the space program is spectacular, as is the record of DARPA post-WW2. Even the Soviet Union could do good innovation by bureaucracy. At the company level, it's true that lots of big innovations come from small new companies, but at the same time Bell Labs, Intel and Sony were monsters. I actually think that the retreat of government is a big part of the problem.

Ok, so we have different epistemic systems; I obviously put much higher value on academic social science, where I think the methodology is good, than you do.

It's a plausible claim that people with lower impulse control have less enjoyment of experience, but it's not one that I've seen made before, and my intuition says the effect size is probably small. Fundamentally, show me the evidence, although presumably it can't be from academic social science.

Finally, it's important to distinguish between empirical and moral claims. So, I think prisons are bad but can in principle be morally justified. I think mafia bosses should go to prison because I'm not a patient-centered deontologist who rejects the legitimacy of the state. Similarly, I think there are lots of plausible moral systems in which restricting children's sovereignty can be fine or even morally required.

On the empirical matter, I don't know, we just disagree. But I'm 19, I was in school until I was 18, and it was fine. My brother's 17; I asked him, and he said school's fine. I went to a normal school, he went to a normal school. My girlfriend's also 19, and she liked school. Maybe we're all massive outliers, but when you ask people who've been to prison what they think of prison, they say it's terrible, and when you ask people who've been to school what they think of school, they mostly say it's fine.

I think it's a good insight, and one I basically agree with, that children, before they're 16 or so in the UK at least, are treated in a way which robs them of their sovereignty, and all else equal this is bad for both deontic and consequentialist reasons. I basically agree that the vast majority of content taught to 11-16 year olds neither increases their productivity in the workforce, nor do they enjoy studying it, nor does it make them better people. It is also true that when people are homeschooled or unschooled they do fine (I've read about 4 papers on outcomes of homeschooling and unschooling, so I won't make a stronger empirical claim than 'fine'), and that the biggest difficulty unschoolers in particular have is other people's reactions.

However, I think you're dramatically overstating your case. I think schooling for 3-10 year olds is incredibly valuable. There's excellent empirical evidence that early years education is good for a whole range of outcomes and is fantastic for the children of low-income parents. In general, the actual content of the curriculum, i.e. literacy and numeracy, is very useful - functionally illiterate people have difficult lives. I think there's also the effect of a general improvement in cognitive capacity, and the teaching of the ability to make abstract, logical deductions. We see the effect of this when we ask individuals in communities in which there is no primary schooling to think abstractly. The most famous of these studies is of Russian peasants in 1907, but this has been replicated, for instance with modern hunter-gatherer tribes. I also think the socialisation aspect is important, as is the teaching of non-cognitive skills - in Pinker's history of violence he provides evidence of the impact of schooling in reducing violence by improving impulse control, based on impulse control being a transferable, trainable characteristic.

I also just reject your claim that schooling is that unpleasant. Often it's the place where kids spend the most time interacting with their friends; it gives people achievable goals which they get clear, regular feedback on, and that's pretty motivating in general; and people who are bored often just fuck about at the back of the class, which isn't ideal but also doesn't seem that bad. In general, lots of the time in school is made up of chatting to people you either don't mind, quite like, or forge some of the deepest and most valuable friendships you ever have with.

Potentially a way in which this heuristic makes sense in the real world is if the utility of $0 were negative. If I were a banana seller and I sold nothing, then when I got my next shipment of bananas I wouldn't have enough space in my warehouse and I'd have to throw out some bananas. In this case, I have to take out an insurance policy against $0 in all cases except certainty. This would hold even if insurance costs were proportional to the probability of $0, provided there were fixed transaction costs to buying insurance.
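Here's a toy sketch of that story with made-up numbers (the penalty, fee, and payoffs are all illustrative assumptions): the fixed transaction fee creates a discrete jump in value at certainty, so a certain smaller payoff can beat a near-certain larger one.

```python
# Banana-seller sketch: any chance of $0 forces buying insurance against
# the warehouse penalty, and a fixed fee on that insurance makes a
# certain payoff better than a slightly risky one with higher raw EV.
PENALTY = 5_000   # cost of an overflowing warehouse at $0 (assumed)
FIXED_FEE = 15    # fixed transaction cost of any policy (assumed)

def value(p_win, payoff):
    """Expected value of a (p_win: payoff, else $0) gamble for the seller."""
    if p_win == 1.0:
        return payoff                   # certainty: no insurance needed
    premium = (1 - p_win) * PENALTY     # premium proportional to risk
    return p_win * payoff - premium - FIXED_FEE

print(value(1.0000, 100))   # 100.0  certainty: no fee, no premium
print(value(0.9999, 100))   # 84.49  near-certainty still pays the fee
print(value(1.0000, 90))    # 90.0   so a certain $90 beats 99.99% of $100
```

The premium term vanishes smoothly as the win probability approaches 1, but the fixed fee doesn't, which is exactly why certainty gets special treatment in this setup.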
