All of akarlin's Comments + Replies

akarlin

Frontier LLM performance on offline IQ tests is improving at perhaps 1 S.D. per year, and might have recently become even faster. These tests are a good measure of human general intelligence. One more such jump and there will be PhD-tier assistants for $20/month. At that point, I expect any lingering problems with invoking autonomy to be quickly fixed as human AI research acquires a vast multiplier through these assistants, and a few months later AI research becomes fully automated.

Thane Ruthenis
"Human general intelligence." I think it's abundantly clear that the cognitive features that are coupled in humans are not necessarily coupled in LLMs. Analogy: in humans, the ability to play chess is coupled with general intelligence: we can expect grandmasters to be quite smart. Does that imply Stockfish is a general-purpose hypergenius?
akarlin

The scenario I am most concerned about is a strongly multipolar Malthusian one. There is some chance (maybe even a fair one) that a singleton ASI decides, or an oligopoly of ASIs rigorously coordinates, to preserve the biosphere - including humans - at an adequate or superlative level of comfort or fulfillment, or to help them ascend themselves, due to ethical considerations, for research purposes, or for simulation/karma type considerations.

In a multipolar scenario of gazillions of AIs at Malthusian subsistence levels, none of that matters in the default scenar... (read more)

It's not at all insane IMO. If AGI is "dangerous" x timelines are "short" x anthropic reasoning is valid...

... Then WW3 will probably happen "soon" (2020s).

https://twitter.com/powerfultakes/status/1713451023610634348

I'll develop this into a post soonish.

Nathan Helm-Burger
I'm hopeful that the politicians of the various nations who might initiate this conflict can see how badly that would turn out for them personally, and thus find sufficient excuses to avoid rushing into that scenario. Not certain by any means, but hopeful. There certainly will need to be some tense negotiations, at the least.
akarlin

It's ultimately a question of probabilities, isn't it? If the risk is ~1%, we mostly all agree Yudkowsky's proposals are deranged. If 50%+, we all become Butlerian Jihadists.

My point is that I and people like me need to be convinced it's closer to 50% than to 1%, or, failing that, we at least need to be "bribed" in a really big way.

I'm somewhat more pessimistic than you on civilizational prospects without AI. As you point out, bioethicists and various ideologues have some chance of tabooing technological eugenics. (I don't understand your point about assortative ... (read more)

lc

"It's ultimately a question of probabilities, isn't it? If the risk is ~1%, we mostly all agree Yudkowsky's proposals are deranged. If 50%+, we all become Butlerian Jihadists."

Uhh... No, we don't? 1% of 8 billion people is 80 million people, and AI risk involves more at stake if you loop in the whole "no more new children" thing. I'm not saying that "it's a small chance of a very bad thing happening so we should work on it anyways" is a good argument, but if we're taking as a premise that the chance of failure is 1%, that'd be sufficient to justify sev... (read more)

akarlin

I disagree with AI doomers, not in the sense that I consider it a non-issue, but in that my assessment of the risk of ruin is something like 1%, not 10%, let alone the 50%+ that Yudkowsky et al. believe. Moreover, restrictive AI regimes threaten to produce a lot of bad outcomes, possibly including the devolution of AI control into a cult (we have a close analogue in post-1950s public opinion towards civilian applications of nuclear power and explosions, which robbed us of Orion Drives amongst other things), what may well be a delay in life extension timeli... (read more)

Juan Panadero

I totally get where you're coming from, and if I thought the chance of doom was 1% I'd say "full speed ahead!"

As it is, at fifty-three years old, I'm one of the corpses I'm prepared to throw on the pile to stop AI. 

The "bribe" I require is several OOMs more money invested into radical life extension research

Hell yes. That's been needed rather urgently for a while now. 

Foyle
Over what time window does your assessed risk apply? E.g. 100 years, 1,000? Does the danger increase or decrease with time?

I have a deep concern that most people have a mindset warped by human pro-social instincts/biases. Evolution has long rewarded humans for altruism, trust, and cooperation; women in particular have faced evolutionary pressures to be open and welcoming to strangers to aid in surviving conflict and other social mishaps, men somewhat the opposite [see e.g. "Our Kind", a mass-market anthropological survey of human culture and psychology]. Which of course colors how we view things deeply.

But to my view, evolution strongly favours Vernor Vinge's "aggressively hegemonizing" AI swarms ["A Fire Upon the Deep"]. If AIs have agency, freedom to pick their own goals, and the ability to self-replicate or grow, then those that choose rapid expansion as a side effect of any pretext 'win' in evolutionary terms. This seems basically inevitable to me over the long term. Perhaps we can get some insurance by learning to live in space. But at a basic level it seems to me that there is a very high probability that AI wipes out humans over the longer term, based on this very simple evolutionary argument, even if initial alignment is good.
Rufus Pollock
A 1% probability of "ruin", i.e. total extinction (which you cite as your assessment), would still be more than enough to warrant a complete pause for a lengthy period of time.

There seems to be a basic misunderstanding of expected utility calculations here, where people are equating the weighting on an outcome with a simple probability x cost of the outcome, e.g. if there is a 1% chance of all 8 billion people dying, the "cost" of that is not 80 million lives (as someone further down this thread computes). Normally the way you'd think about this (if you want to do math on stuff like this) is to ask what you'd pay to avoid that outcome, using expected utility. This weights outcomes over the entire probability distribution by their (marginal) utility. In this case, marginal utility goes to infinity if we go extinct (unless you are in the camp: let the robots take over!), and hence even small risks of it would warrant us doing everything possible to avoid it. This is essentially precautionary principle territory.
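A minimal numeric sketch of the contrast being drawn here, assuming the 1% risk figure and 8 billion population from the thread; the future-lives numbers are purely illustrative assumptions, not claims made by any commenter:

```python
# Sketch: naive "probability x body count" vs. an expected-value framing
# that also counts the future lives extinction would foreclose.
# All future-population figures below are illustrative assumptions.

p_ruin = 0.01                  # the 1% risk figure under discussion
current_population = 8e9       # ~8 billion people alive today

# Naive computation quoted in the thread: 1% of 8 billion.
naive_expected_deaths = p_ruin * current_population
print(f"Naive expected deaths: {naive_expected_deaths:,.0f}")  # 80,000,000

# Extinction also removes everyone who would otherwise have been born.
# Assume (purely for illustration) 10 billion people per generation
# over another 10,000 generations.
assumed_future_lives = 1e10 * 1e4
expected_loss_with_future = p_ruin * (current_population + assumed_future_lives)
print(f"Expected loss counting future lives: {expected_loss_with_future:,.0f}")

# Pollock's argument goes further still: if the marginal utility lost at
# extinction is treated as unbounded, the expected-utility calculation is
# dominated by that branch for any nonzero p_ruin, which is why he lands on
# precautionary-principle reasoning rather than a finite body-count figure.
```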
lc

Couple of points:

  • If we screw this up, there are over eight billion people on the planet, and countless future humans who might either then die or never get a chance to be born. Even if you literally don't care about future people, the lives of everybody currently on the planet are a serious consideration and should guide the calculus. Just because those dying now are more salient to us does not mean that we're doing the right thing by shoving these systems out the door.
  • If embryo selection just doesn't happen, or gets outlawed when someone does launch the
... (read more)