All of havequick's Comments + Replies

I'm curious, though, if you have any hopes for the situation regarding the nebulous motivations of some AGI researchers, especially as AI and its risks have recently started becoming "mainstream". Do you expect to see changes, and to see their views challenged? My question is loaded, but it seems you are already invested in its answer.

I think there's a case to be made for AGI/ASI development and deployment as a "hostis humani generis" act; and others have made the case as well. I am confused (and let's be honest, increasingly aghast) as to why AI doomers rarely ... (read more)

The risk that it simply ends up being owned by the few who create it, thus leading to a total concentration of the productive power of humanity, isn't immaterial; in fact, it looks like the default outcome.

Yes, this is why I've been frustrated (and honestly aghast, given timelines) at the popular focus on AI doom and paperclips rather than the fact that this is the default (if not nigh-unavoidable) outcome of AGI/ASI, even if "alignment" gets solved. Comparisons with industrialization and other technological developments are specious because none of them had the potential to do anything close to this.

3 dr_s
I think the doom narrative is still worth bringing up because this is what these people are risking for all of us in the pursuit of essentially conquering the world and/or personal immortality. That's the level of insane supervillainy that this whole situation actually translates to. Just because they don't think they'll fail doesn't mean they're not likely to.

I'm also disappointed that the political left is dropping the ball so hard on opposing AI, turning to either contradictory "it's really stupid, just a stochastic parrot, and also threatens our jobs somehow" statements, or focusing on details of its behaviour. There's probably something deeper to say about capitalists openly making a bid to turn labour itself into capital.

Wouldn't an important invention such as the machine gun, or obviously fission weapons, fit your argument pattern? You could make a reasonable case that, like a world with technological unemployment, a world where humans are cheap to slaughter is overall worse; and that if you could have coordinated with the world powers of the time to agree to an "automatic weapons moratorium", it would have resulted in a better world.

The problem is Kaiser Wilhelm and other historical leaders are going to say "suuurrrreee", agree to the deal, and you already know the nasty surprise any pow

... (read more)
2 [anonymous]
I agree technological unemployment is a huge potential problem, though like always the actual problem is aging. I think what people miss is that they treat the tasks to be done as a fixed pool: you don't need more than one vehicle per person (or less), or one dwelling, or n hours per year of medical care, or food, etc. And they neglect that AGI clearly cannot be trusted to do many things regardless of capabilities; there would need to be a fleet of human overseers armed with advanced tools. It's just, what do you do for a 50-year-old truck driver? Expecting them to retrain to be an O'Neill colony construction supervisor doesn't make sense unless you can treat their aging and restore neural plasticity, which is itself an immense megaproject not being done. I bet aging research would go a lot faster if we had the functional equivalent of a billion people working on it, and all billion were informed of everyone else's research outcomes.

Where I was going with the analogy was much simpler: you don't get a choice. In the immediate term, agreeing not to build machine guns and honoring it means you face a rat-tat-tat when it matters most. Similarly for fission weapons: obviously your enemy is going to build a nuclear arsenal and try to vaporize all your key cities in a surprise attack. The issues you mention happen long term. In the short term, you can use AGI to automate many key tasks and become vastly more economically and militarily powerful.

I very much agree with you here and in your "AGI deployment as an act of aggression" post; the overwhelming majority of humans do not want AGI/ASI and its straightforward consequences (total human technological unemployment and concomitant abyssal social/economic disempowerment), regardless of whatever paradisaical promises are made to them (promises for which there is no recourse if they are not kept: economically useless humans can't go on strike, etc.).

The value (this is synonymous with "scarcity") of human intelligence and labor output has been a foundation of e... (read more)

I have read your comments on the EA forum and the points do resonate with me. 

As a layman, I do have a personal distrust of the (what I'd call) anti-human ideologies driving the actors you refer to, and agree that a majority of people do as well. It is hard to feel much joy at going extinct and being replaced by synthetic beings, probably in a way most would characterize as dumb (Clippy being the extreme case).

I also believe that fundamentally changing the human subjective experience (radical bioengineering, or uploading to an extent) in order to erase the abilit... (read more)

1 [anonymous]
First, let me say I appreciate you expressing your viewpoint, and it does strike an emotional chord with me. With that said:

Wouldn't an important invention such as the machine gun, or obviously fission weapons, fit your argument pattern? You could make a reasonable case that, like a world with technological unemployment, a world where humans are cheap to slaughter is overall worse; and that if you could have coordinated with the world powers of the time to agree to an "automatic weapons moratorium", it would have resulted in a better world.

The problem is that Kaiser Wilhelm and other historical leaders are going to say "suuurrrreee", agree to the deal, and you already know the nasty surprise any power honoring such a deal will face on the battlefield. (Or Stalin would have said "sureee" to such a deal on fission weapons, and we can assume would immediately renege and test the devices in secret, only announcing their existence with a preemptive first strike on the enemies of the USSR.) What's different now? Is there a property of AGI/ASI that makes such international agreements more feasible?

To add one piece of information that may not be well known: I work on inference accelerator ASICs, and they are significantly simpler than GPUs. A large amount of Nvidia's stack isn't actually necessary if pure AI perf/training is your goal. So the only real bottleneck for monitoring AI accelerators is that the highest-end wafer processing equipment currently comes exclusively from ASML, creating a monitorable supply chain for now. All bets are off if major superpowers build their own domestic equivalents, which they would be strongly incentivized to do in worlds where we know AGI is possible and have built working examples.
3 dr_s
Yup. These precise points were also the main argument of my other post on a post-AGI world, the benevolence of the butcher.

Also, due to the AI discourse I've actually ended up learning more about the original Luddites and, lo and behold, they actually weren't the fanatical, reactionary, anti-technology ignorant peasants that popular history mainly portrays them as. They were mostly workers who were angry about the way the machines were being used: not to make labour easier and safer, but to squeeze more profit out of less skilled workers to make lower quality products, which in the end left almost everyone involved worse off except for the ones who owned the factories. That's, I think, something we can relate to even now, and I'd say it is even more important in the case of AGI. The risk that it simply ends up being owned by the few who create it, thus leading to a total concentration of the productive power of humanity, isn't immaterial; in fact, it looks like the default outcome.

AGI is potentially far more useful and powerful than nuclear weapons ever were, and also provides a possible route to breaking the global stalemate with nuclear arms.

If this is true -- or perceived to be true among nuclear strategy planners and those with the authority to issue a lawful launch order -- it might create disturbingly (or delightfully, if you see this as a way to prevent the creation of AGI altogether) strong first-strike incentives for nuclear powers which don't have AGI, don't want to see their nuclear deterrent turned to dust, and don't want to be put under the sword of an adversary's AGI.

2 [anonymous]
The current economic "board" has every power with enough GDP to potentially build AGI/ASI protected by its own nuclear weapons or mutual defense treaties. So the party considering a first strike has "national death and the loss of all major cities" and "under the sword of the adversary" as possible outcomes, as well as the ever-hopeful "maybe the adversary won't actually attack, but will get what they want via international treaties". Put this way, it looks more favorable not to push the button; let me know how your analysis differs.
3 dr_s
My idea too; I actually did mention that in a post: https://www.lesswrong.com/posts/otArJmyzWgfCxNZMt/agi-deployment-as-an-act-of-aggression

Re "they should also pop out in a sane person's mind within like 15 seconds of actually hearing that statement" I agree with that in the abstract; few people will say that a state of high physiological alertness/vigilance is Actually A Good Idea to cultivate for threats/risks not usefully countered by the effects of high physiological alertness.

Being able to reason about that in the abstract doesn't necessarily transfer to actually stopping doing it. Like, personally, I feel like being told something along the lines of "you're working yourself up into a co... (read more)

1 Noosphere89
I want to mention here that the war example is a case where there genuinely is an adversarial scenario, or adversarial game. But applying an adversarial frame is usually not the correct thing to do, and importantly, given that the most perverse scenarios usually can't be dealt with without exotic physics, for computational complexity reasons, you usually shouldn't focus on adversarial scenarios. Kaj Sotala is very, very correct in this post.
3 Kaj_Sotala
Thank you for sharing that, I'm happy to hear it. :)