MalcolmMcLeod


Comments


w/r/t the superstimulus part, I am once again urging everyone to read Infinite Jest. The MacGuffin is a movie so good anyone who sees it watches it on loop until they die of dehydration. (And this ofc represents American-pleasure-at-large.)

Amazon's best-seller standings. I wouldn't make too much of this; their categorization is wonky. (I also have no clue what the lookback window is, what they make of preorders, etc.)
#5 in "Technology"

#4,537 in all books

#11 in engineering

#14 in semantics and AI (how is this so much lower than "Technology"?)

In short: showing up! It could be grabbing someone's eye right now. Still drowned out by Yuval Noah Harari, Ethan Mollick, Ray Kurzweil, et al.

I look forward to hearing about the European "freeze."

Upsetting. Your proposals and tactics are super compelling. 

It seems that (because of the long-term relational nature of Washington), donations toward CAIP are much more valuable in the world where CAIP lasts a long time. This makes me unwilling to donate alone, but much more willing to join a "compact" in which, for instance, the money isn't disbursed unless a certain total $ amount is raised. Or more willing to donate if something else happened to make CAIP survive. 
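The "compact" described above is essentially an assurance contract: pledges are disbursed only if the total clears a threshold, and refunded otherwise. A minimal sketch of that all-or-nothing logic (names and dollar amounts are hypothetical, purely for illustration):

```python
# Sketch of an assurance-contract "compact": funds are disbursed
# only if total pledges meet a threshold; otherwise all are refunded.
# Names and numbers below are hypothetical.

def settle(pledges, threshold):
    """Return (disbursed_amount, refunds) under all-or-nothing rules."""
    total = sum(pledges.values())
    if total >= threshold:
        return total, {}            # threshold met: funds go to the org
    return 0, dict(pledges)         # threshold missed: refund everyone

pledges = {"alice": 5_000, "bob": 12_000}
amount, refunds = settle(pledges, threshold=50_000)
assert amount == 0 and refunds == pledges  # below threshold, so all refunded
```

The point of the structure is that no individual donor bears the risk of funding an organization that folds anyway; each pledge only "fires" in the world where enough others pledge too.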

This seems similarly valuable to Lightcone. If the community comes through, I want to be a part of it. But something beyond this post seems required. 

One of my favorite genres of LW posts is "puts a name and a framework to my deeply felt intuition." This is a sterling example. Have you read Infinite Jest? It's substantially about how tech & America grease our rightward slide on this continuum. 

Here's an example of the sort you asked for. We'll go with Von Neumann himself, who famously advocated for nuking the USSR before they had a chance to develop nukes. From his 1956 obituary in LIFE:

After the Axis had been destroyed, Von Neumann urged that the U.S. immediately build even more powerful atomic weapons and use them before the Soviets could develop nuclear weapons of their own. It was not an emotional crusade. Von Neumann, like others, had coldly reasoned that the world had grown too small to permit nations to conduct their affairs independently of one another. He held that world government was inevitable – and the sooner the better. But he also believed it could never be established while Soviet Communism dominated half of the globe. A famous Von Neumann observation at the time: “With the Russians it is not a question of whether but when.” A hard-boiled strategist, he was one of the few scientists to advocate preventive war, and in 1950 he was remarking, “If you say why not bomb them tomorrow, I say why not today? If you say today at 5 o’clock, I say why not 1 o’clock?”

Sure, it's not literally billions, and sure, there's an ultimate pro-human aim, but this is distinctly of the flavor "I am a genius and have reasoned my way to seeing that for my goals to be achieved, millions must die, nothing personal."

(I don't think this should really be a crux, though.)

I expected your comment to be hyperbolic, but no. I mean sheesh:

In the decade that I have been working on AI, I’ve watched it grow from a tiny academic field to arguably the most important economic and geopolitical issue in the world.  In all that time, perhaps the most important lesson I’ve learned is this: the progress of the underlying technology is inexorable, driven by forces too powerful to stop, but the way in which it happens—the order in which things are built, the applications we choose, and the details of how it is rolled out to society—are eminently possible to change, and it’s possible to have great positive impact by doing so.  We can’t stop the bus, but we can steer it.  In the past I’ve written about the importance of deploying AI in a way that is positive for the world and of ensuring that democracies build and wield the technology before autocracies do. 

(Emphasis mine.) What rhetorical cleverness. This translates as: "I have expertise and foresightedness; here's your Overton window." Then he goes gears-level (ish) for a whole essay, reinscribing in the minds of Serious People the lethal assumptions laid out here: "We can't slow down; if you knew what I knew you'd see the 'forces' that make this obvious, and besides do you want the commies to win?"

I'm not just doing polemic. I think the rhetorical strategy "dismissing pause and cooperation out of hand instead of arguing against them" tells us something. I'm not sure what, alas. I do think that labs' arguments to the governments work best if they've already set the terms of the debate. It helps Dario's efforts if "pause/cooperate" is something "all the serious people know" is not worth paying attention to. 

I 80% think he also believes that pausing and cooperation are bad ideas (despite his obvious cognizance of the time-crunch). But I doubt he dismisses them so out of hand privately.

I thank y'all for rapidly replicating and extending this eval. This is the most important eval extant. Units are truly comparable, and it's directly connected to the questions of "coding for ML/AI research" and "long-horizon agency" that seem cruxy for short timelines. I did not expect @Daniel Kokotajlo to be right about the superexponentiality so quickly. 
 

My long-timeline probability mass is increasingly dependent on "this doesn't generalize past formally verifiable domains + formally verifiable domains are insufficient to substantially automate AI algorithmic progress" or "somehow this progress doesn't extend to the arbitrarily messy and novel real world." But it ain't looking good.
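For concreteness on the "superexponentiality" claim: exponential growth has a constant doubling time, while superexponential growth means the doubling time itself shrinks. A minimal sketch with synthetic illustrative numbers (not real benchmark data):

```python
import math

def doubling_times(times, values):
    """For each consecutive pair of points, estimate the doubling time,
    assuming locally exponential growth between the two points."""
    out = []
    for t0, t1, v0, v1 in zip(times, times[1:], values, values[1:]):
        rate = math.log(v1 / v0) / (t1 - t0)  # instantaneous growth rate
        out.append(math.log(2) / rate)        # time to double at that rate
    return out

# Synthetic task-horizon series: each step multiplies by a larger factor.
times = [0, 1, 2, 3, 4]        # years (illustrative)
horizon = [1, 2, 5, 20, 200]   # task horizon in minutes (illustrative)

dts = doubling_times(times, horizon)
# Strictly shrinking doubling times => superexponential growth.
assert all(a > b for a, b in zip(dts, dts[1:]))
```

If the fitted doubling times on the real eval data are shrinking rather than flat, that's the signature the comment above is pointing at.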

Righteous work. It would be impactful to take versions of this to mainstream media outlets: TIME and Vox have obviously proven receptive, but Bloomberg and the FT aren't deaf to the issues either. Anyone, really. The more distribution the better. Grateful to y'all for saying this out loud and clearly. To folks unfamiliar with the difficulty of alignment, you make nonobvious yet clarifying points.
