How did the date 2035 get attached to the Forethought prediction? They are predicting a century of progress in a decade once we hit transformative AI as far as I can tell, not a century of progress in the decade from the date of prediction.
Empires are more like the opposite of nationalism than an example of it, even if the metropoles of empires tend to be nationalist. Nationalism is the view that particular "peoples", defined ethnically or just by citizenship, should be sovereign and proud of it; empire is the idea that one country can rule over many peoples. This is kind of a nitpick, as having a stable, coherent national identity maybe did help the industrial revolution start in Britain; I don't know this history well enough to say. But in any case, the British Empire was hardly obviously net positive: it did huge damage to India in the 18th century, for example (amongst many awful human rights abuses), at a time when India was very developed by 18th-century standards. And it's not clear the Empire was necessary for the industrial revolution to happen. Raw materials could have been bought rather than stolen, for example, and Smith thought slavery was less efficient than free labour.
(Cross-posted from EA Forum): I think you could have strengthened your argument here further by talking about how even in Dario's op-ed opposing the ban on state-level regulation of AI, he specifically says that regulation should be "narrowly focused on transparency and not overly prescriptive or burdensome". That seems to indicate opposition to virtually any regulations that would actually directly require doing anything at all to make models themselves safer. It's demanding that regulations be more minimal than even the watered-down version of SB 1047 that Anthropic publicly claimed to support.
Cross-posted from the EA forum, and sorry if anyone has already mentioned this, BUT:
Is the point at which models hit a given length of time on the x-axis of the graph meant to represent the point where models can do all tasks of that length that a normal knowledge worker could perform on a computer? The vast majority of knowledge-worker tasks of that length? At least one task of that length? Some particular important subset of tasks of that length?
The following is a list of live agendas in technical AI safety, updating our post from last year. It is "shallow" in the sense that 1) we are not specialists in almost any of it and 2) we only spent about an hour on each entry. We also only use public information, so we are bound to be off by some additional factor.
The point is to help anyone look up some of what is happening, or that thing you vaguely remember reading about; to help new researchers orient and know (some of) their options and the standing critiques; to help policy people know who to talk to for the actual information; and...
One way to understand this is that Dario was simply lying when he said he thinks AGI is close and carries non-negligible X-risk, and that he actually thinks we don't need regulation yet because it is either far away or the risk is negligible. There have always been people who have claimed that labs simply hype X-risk concerns as a weird kind of marketing strategy. I am somewhat dubious of this claim, but Anthropic's behaviour here would be well-explained by it being true.
People will sometimes invest if they think the expected return is high, even if they also think there is a non-trivial chance that the investment will go to zero. During the FTX collapse many people claimed that this is a common attitude amongst venture capitalists, although maybe Google and Amazon are more risk averse?
It's pretty telling that you think there's no chance that anyone who doesn't like your arguments is acting in good faith. I say that as someone who actually agrees that we should (probably; population ethics is hard!) reject total utilitarianism on the grounds that bringing someone into existence is just obviously less important than preventing a death, and that this means longtermists are calling for important resources to be misallocated. (That is true of any false view about how EA resources should be spent, though!) But I find your general tone of 'people have reasons to be biased against me, so therefore nobody can possibly disagree with me in good faith or non-fanatically' extraordinarily off-putting, and I think its most likely effect is to cause a backfire where people in the middle move towards the simple total utilitarian view.