I haven't thought deeply about this specific case, but I think you should consider this like any other ablation study--like, what happens if you replace the SAE with a linear probe?
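For concreteness, here's a minimal sketch of what that ablation might look like, assuming you have a matrix of model activations with binary concept labels; the `sae_features` hook and the accuracy metric are stand-ins for whatever the original setup actually uses.

```python
# Minimal sketch of an "SAE vs. linear probe" ablation, assuming X is a
# (n_samples, d_model) array of model activations, y is a binary concept
# label, and sae_features(X) is a stand-in for however you extract the SAE's
# feature activations on the same inputs.
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

def compare_probe_to_sae(X, y, sae_features):
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    # Baseline: a plain linear probe trained directly on the raw activations.
    probe = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    probe_acc = accuracy_score(y_test, probe.predict(X_test))

    # Same classifier, but trained on the SAE feature activations, so any gap
    # reflects what the SAE representation adds (or loses) over the baseline.
    F_train, F_test = sae_features(X_train), sae_features(X_test)
    sae_clf = LogisticRegression(max_iter=1000).fit(F_train, y_train)
    sae_acc = accuracy_score(y_test, sae_clf.predict(F_test))

    return {"linear_probe_acc": probe_acc, "sae_feature_acc": sae_acc}
```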
And then a lot of the post seems to make really quite bad arguments against forecasting AI timelines and other technologies, doing so with... I really don't know, a rejection of Bayesianism? A random invocation of an asymmetric burden of proof?
I think the position Ben (the author) has on timelines is really not that different from Eliezer's; consider pieces like this one, which is not just about the perils of biological anchors.
I think the piece spends less time than I would like on what to do in a position of uncertainty--like, if the core problem is that we are approaching a cliff of uncertain distance, how should we proceed?--but I think it's not particularly asymmetric.
[And--there's something I like about realism in plans? If people are putting heroic efforts into a plan that Will Not Work, I am on the side of the person on the sidelines trying to save them their effort, or direct them towards a plan that has a chance of working. If the core uncertainty is whether or not we can get human intelligence advancement in 25 years--I'm on your side of thinking it's plausible--then it seems worth diverting what attention we can from other things towards making that happen, and being loud about doing that.]
Instead, the U.S. government will do what it has done every time it’s been convinced of the importance of a powerful new technology in the past hundred years: it will drive research and development for military purposes.
I think this is my biggest disagreement with the piece. I think this is the belief I most wish 10-years-ago-us didn't have, so that we would try something else, which might have worked better than what we got.
Or--when shopping the message around to Silicon Valley types, thinking more about the ways that Silicon Valley is the child of the US military-industrial complex, and will overestimate its ability to control what it creates (or lack the desire to!). Like, I think many more 'smart nerds' than military-types believe that human replacement is good.
The article seems to assume that the primary motivation for wanting to slow down AI is to buy time for institutional progress, which seems incorrect as an interpretation of the motivation. Most people I hear talk about buying time are talking about buying time for technical progress in alignment.
I think you need both? That is--I think you need both technical progress in alignment, and agreements and surveillance and enforcement such that people don't accidentally (or deliberately) create rogue AIs that cause lots of problems.
I think historically many people imagined "we'll make a generally intelligent system and ask it to figure out a way to defend the Earth" in a way that seems less plausible to me now. It seems more like we need to have systems in place already playing defense, which ramp up faster than the systems playing offense.
My understanding is that the Lightcone Offices and Lighthaven have 1) overlapping but distinct audiences, with Lightcone Offices being more 'EA' in a way that seemed bad, and 2) distinct use cases, where Lighthaven is more of a conference venue with a bit of coworking whereas Lightcone Offices was basically just coworking.
By contrast, today’s AIs are really nice and ethical. They’re humble, open-minded, cooperative, kind. Yes, they care about some things that could give them instrumental reasons to seek power (eg being helpful, human welfare), but their values are great
They also aren't facing the same incentive landscape humans are. You talk later about humans having evolved to be selfish; not only is the story for humans far more complicated (why do humans often offer an even split in the ultimatum game?), but also humans talk a nicer game than they act (see construal level theory, or social-desirability bias). Once you start looking at AI agents who have the same affordances and incentives that humans have, I think you'll see a lot of the same behaviors.
(There are structural differences here between humans and AIs. As an analogy, consider the difference between large corporations and individual human actors. Giant corporate chain restaurants often have better customer service than individual proprietors because they have more reputation on the line, and so are willing to pay more to not have things blow up on them. One might imagine that AIs trained by large corporations will similarly face larger reputational costs for misbehavior and so behave better than individual humans would. I think the overall picture is unclear and nuanced and doesn't clearly point to AI superiority.)
though there’s a big question mark over how much we’ll unintentionally reward selfish superhuman AI behaviour during training
Is it a big question mark? It currently seems quite unlikely to me that we will have oversight systems able to actually detect and punish superhuman selfishness on the part of the AI.
I think it's hard to evaluate the counterfactual where I made a blog earlier, but I think I always found the built-in audience of LessWrong significantly motivating, and never made my own blog in part because I could just post everything here. (There's some stuff that ends up on my Tumblr or w/e instead of LW, even after ShortForm, but almost all of the nonfiction ended up here.)
I think being a Catholic with no connection to living leaders makes more sense than being an EA who doesn't have a leader they trust and respect, because Catholicism has a longer tradition.
As an additional comment, few organizations have splintered more publicly than Catholicism; it seems sort of surreal to me to not check whether or not you ended up on the right side of the splintering. [This is probably more about theological questions than it is about leadership, but as you say, the leadership is relevant!]
It looks like you only have pieces with 2 connections and 6 connections, which works for maximal density. But I think you need some slack space to create pieces without the six axial lines. I think you should also include the tiles with 4 connections (and maybe even the 0-connection tile!) and the other 2-connection tiles; it increases the number of tiles by quite a bit, but I think it will let you make complete knots.