Obviously I meant some kind of approximation of consensus or acceptability derived from much greater substantiation. There is no equivalent to Climate Change or ZFC in the field of AI in terms of acceptability and standardisation. Matthew Barnett made my point better in the above comments.
Yes, most policy has no degree of consensus. Most policy is also not asking to shut down the entire world's major industries. So there must be a high bar. A lot of policy incidentally ends up being malformed and hurting people, so it sounds like you're just making the case for more "consensus" and not less.
The bar is very low for me: if MIRI wants to demand that the entire world shut down an entire industry, they must be an active research institution producing papers their scientific peers find agreeable.
AI is not particularly unique even relative to most technologies. Our work on chemistry in the 1600s-1900s far outpaced our true understanding of it, to the point where we only had a good model of the atom in the 20th century. And I don't think anyone will deny the potential dangers of chemistry. Other technologies followed a similar trajectory.
We don't have to ag...
I am not convinced MIRI has given enough evidence to support the idea that unregulated AI will kill everyone and their children. Most of their projects are either secret or old papers. The only papers which have been produced after 2019 are random irrelevant math papers. Most of the rest of their papers are not even technical in nature and contain a lot of unverified claims. They have not even produced one paper since the breakthrough in LLM technology in 2022. Even among the papers which do indicate risk, there is no consensus among scientific peers...
just some actual consensus among established researchers to sift mathematical facts from conjecture.
"Scientific consensus" is a much much higher bar than peer review. Almost no topic of relevance has a scientific consensus (for example, there exists basically no trustworthy scientific for urban planning decisions, or the effects of minimum wage law, or pandemic prevention strategies, or cyber security risks, or intelligence enhancement). Many scientific peers think there is an extinction risk.
I think demanding scientific consensus is an unreasonably high bar that would approximately never be met in almost any policy discussion.
I am not convinced MIRI has given enough evidence to support the idea that unregulated AI will kill everyone and their children.
The way you're expressing this feels like an unnecessarily strong bar.
I think advocacy for an AI pause already seems pretty sensible to me if we accept the following premises:
A question for all: If you are wrong and in 4/13/40 years most of this fails to come true, will you blame it on your own models being wrong, or shift the goalposts towards the success of the AI safety movement / government crackdowns on AI development? If the latter, how will you be able to prove that AGI definitely would have come had the government and industry not slowed down development?
To add more substance to this comment: I felt Ege came out looking the most salient here. In general, making predictions about the future should be backed by heavy un...
Thank you for raising this explicitly. I think probably lots of people's timelines are based partially on vibes-to-do-with-what-positions-sound-humble/cautious, and this isn't totally unreasonable so deserves serious explicit consideration.
I think it'll be pretty obvious whether my models were wrong or whether the government cracked down. E.g. how much compute is spent on the largest training run in 2030? If it's only on the same OOM as it is today, then it must have been government crackdown. If instead it's several OOMs more, and moreover the train...
If your goal is to get to your house, there is only one thing that will satisfy the goal: being at your house. There is a limited set of optimal solutions that will get you there. If your goal is to move as far away from your house as possible, there are infinite ways to satisfy the goal and many more solutions at your disposal.
Natural selection is a "move away" strategy, it only seeks to avoid death, not go towards anything in particular, making the possible class of problems it can solve much more open ended. Gradient Descent is a "move towards" strategy...
Gradient descent by default would just like do, not quite the same thing, it's going to do a weirder thing, because natural selection has a much narrower information bottleneck. In one sense, you could say that natural selection was at an advantage, because it finds simpler solutions.
This is silly because it's actually the exact opposite. Gradient descent is incredibly narrow. Natural selection is the polar opposite of that kind of optimisation: an organism or even computer can come up with a complex solution to any and every problem given enough time to e...
If AI behaves identically to me but our internals are different, does that mean I can learn everything about myself from studying it? If so, the input->output pipeline is the only thing that matters, and we can disregard internal mechanisms. Black boxes are all you need to learn everything about the universe, and observing how the output changes for every input is enough to replicate the functions and behaviours of any object in the world. Does this sound correct? If not, then clearly it is important to point out that the algorithm is doing Y and not X.
AIs that are superhuman at just about any task we can (or simply bother to) define a benchmark for
This is just a false claim. Seriously, where is the evidence for this? We have AIs that are superhuman at any task we can define a benchmark for? That's not even true in the digital world, let alone in the world of mechatronic AIs. Once again I will be saving this post and coming back to it in 5 years to point out that we are not all dead. This is getting ridiculous at this point.
If the author believes what they've written, then they clearly think it would be more dangerous to ignore this than to be wrong about it, so I can't really argue that they shouldn't be person number 1. It's a comfortable moral position to force yourself into, though: "If I'm wrong, at least we avoided total annihilation, so in a way I still feel good about myself."
I see this particular kind of prediction as a form of ethical posturing and can't in good conscience let people make such predictions without some kind of accountability. People have been paid ...
I have saved this post on the Internet Archive[1].
If in 5-15 years the prediction does not come true, I would like it to be saved as evidence of one of the many serious claims that world-ending AI will be with us on very short timelines. I think the author has given more than enough detail on what they mean by AGI and what it might look like, so it should be obvious whether or not the prediction comes true. In other words, no rationalising past this or taking it back. If this is what the author truly believes, t...
There are three kinds of people. Those who in the past made predictions which turned out to be false, those who didn't make predictions, and those who in the past made predictions which turned out to be true. Obviously the third kind is the best & should be trusted the most. But what about the first and second kinds?
I get the impression from your comment that you think the second kind is better than the first kind; that the first kind should be avoided and the second kind taken seriously (provided they are making plausible arguments etc.) If so, I disa...
How do you suppose the AGI is going to be able to wrap the Sun in a Dyson sphere using only the resources available on Earth? Do you have evidence that there are enough resources on asteroids or nearby planets for mining them to be economically viable? At the current rate, mining an asteroid costs billions while its value is nothing. Even then, we don't know if they'll have enough of the exact kind of materials necessary to make a Dyson sphere around an object which has 12,000x the surface area of Earth. You could have von Neumann replicators do the minin...
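For what it's worth, the 12,000x figure is roughly right as far as it goes; a quick sanity check with standard radii (this only compares the Sun's own surface to Earth's, not a shell at orbital distance, which would be vastly larger):

```python
R_SUN_KM = 6.96e5     # mean solar radius, km
R_EARTH_KM = 6.371e3  # mean Earth radius, km

# Sphere surface area scales with radius squared, so the ratio of the
# Sun's surface area to Earth's is simply (R_sun / R_earth)^2.
ratio = (R_SUN_KM / R_EARTH_KM) ** 2
print(f"{ratio:,.0f}x")   # ~11,900x, i.e. roughly the quoted 12,000x
```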