What do they have against AI? It seems like the impact on regular people has been pretty minimal. Also, if GPT-4-level technology was allowed to fully mature and diffuse to a wide audience without increasing in base capability, it seems like the impact on everyone would be hugely beneficial.
In an impure sample, you would see high residual resistance below Tc.
Don't the authors claim to have measured 0 resistivity (modulo measurement noise)?
In the MIRI dialogues from 2021/2022, I thought you said you would update to a 40% chance of AGI by 2040 if AI got an IMO gold medal by 2025? Did I misunderstand, or have you shifted your thinking (and if so, how)?
What do you think are the strongest arguments in that list, and why are they weaker than a vague "oh maybe we'll figure it out"?
It seems like something has to be going wrong if the model output assigns higher odds to TAI already being here (~12%) than to TAI being developed between now and 2027 (~11%)? Relatedly, I'm confused by the disclaimer that "we are not updating on the fact that TAI has obviously not yet arrived" -- shouldn't that fact be baked into the distributions for each parameter (particularly the number of FLOPs needed to reach TAI)?
Well... Eliezer does think we're doomed, so this doesn't necessarily contradict his worldview.
Minor curiosity: What was the context behind Asimov predicting in 1990 that permanent space cities would be built within 10 years? It seems like a much wilder leap than any of his other predictions.
I would be very curious to hear thoughts from the people who voted "disagree" on this post.
Maybe you could measure how effectively people pass, e.g., a multiple-choice version of an Intellectual Turing Test (on how well they can emulate the viewpoint of people concerned about AI safety) after hearing the proposed explanations.
[Edit: To be explicit, this would help further John's goals (as I understand them) because it tests whether the AI safety viewpoint is being communicated in such a way that people can understand and operate the underlying mental models. This is better than testing how persuasive the arguments are, because it's a) more in line with general principles of epistemic virtue and b) more likely to persuade people iff the specific mental models underlying AI safety concern are correct.
One potential issue would be people bouncing off the arguments early and never getting around to building their own mental models, so maybe you could test for succinct, high-level arguments that successfully persuade target audiences to take a deeper dive into the specifics? That seems like a much less concerning persuasion target to optimize, since the worst case is people being wrongly persuaded to "waste" time thinking about the same stuff the LW community has been thinking about for the last ~20 years.]
If robbers had a lot of cultural cachet, and there were widely disseminated arguments implying that robbers need to rob people, I think there would be a lot of value in a piece narrowly arguing that robbers don't need to rob people, regardless of your views on their thought processes.