It is a broad definition, yes, for the purpose of discussing the potential for the tools in question to be used against humans.
My point is this: we should focus first on limiting the most potent vectors of attack, those which involve conventional 'weapons'. Less potent vectors (those not commonly considered weapons), such as a 'stock trading algorithm', are of lower priority, since they offer more opportunities for detection and mitigation.
An algorithm that amasses wealth should eventually set off red flags (maybe banks need to improve th...
Entities compete in various ways, yes. Competition is an attack on another entity's chances of survival. Let's define a weapon as any tool which could be used to mount an attack. Of course, every tool could be used as a weapon, in some sense. It's a question of how much risk our tools pose to us, if they were to be used against us.
These memes have been magnified by the words of politicians and media. We need our leaders to discuss things more reasonably.
That said, restricting social media could also make sense. A requirement for in-person verification and limitation to a single account per site could be helpful.
More stringent (in-person) verification of bank account ownership could mitigate this risk.
Anyway, the chance of discovery for any covert operation is proportional to the size of the operation and the time that it takes to execute. The more we pre-limit the tools available to a rogue machine to cause immediate harm, the more likely we are to catch it in the act.
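As a rough illustration of that proportionality (a toy model of my own, assuming each component of the operation has a small, independent chance $p$ of being noticed per unit of time), the probability of detection over an operation of size $s$ running for duration $t$ is roughly

$$P(\text{detected}) \approx 1 - (1 - p)^{s\,t} \approx p\,s\,t \quad \text{for small } p,$$

which grows with both the size of the operation and the time it takes to execute.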
Which kinds of power do you refer to? Most kinds of power require human cooperation. The danger that an AI tricks us into destroying ourselves is small (though a false detection of incoming nuclear weapons could do it). We need much more cooperation between world leaders, a much more positive dialogue between them.
Yes, we need to solve the harder alignment problems as well. I suggested limiting intelligent weapons as the first step, because these are the most obviously misanthropic AI being developed, and the clearest vector of attack for any rogue AI. Why don't we focus on that first, before we move on to the more subtle vectors?
The end of the post you linked said, basically, "we need a plan". Do you have a better one?
Abstraction means assigning a symbol to reference a set of other symbols. It saves time and memory: time by allowing retrieval of data based on a set of rules, memory by shrinking the size of the reference.
For example, take the words 'natural' and 'artificial': we sort things under one of these labels based on whether or not they were made by a human. A 'natural' thing could be 'physical' or 'biological'. An 'artificial' thing could be 'theory' or 'implementation'. If I don't need to distinguish between physical and biological things, instead of referring ...
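To make the time/memory point concrete, here is a minimal sketch (hypothetical names and data, not from the original example) of a label acting as a single symbol that references a whole set of other symbols:

```python
# A label ('natural' / 'artificial') is one short symbol that stands in
# for the set of all things satisfying a rule.

# Concrete things, each tagged with whether a human made it (hypothetical data).
things = {
    "river":   {"made_by_human": False, "kind": "physical"},
    "finch":   {"made_by_human": False, "kind": "biological"},
    "theorem": {"made_by_human": True,  "kind": "theory"},
    "bridge":  {"made_by_human": True,  "kind": "implementation"},
}

def label(thing: dict) -> str:
    # The sorting rule: made by a human -> 'artificial', otherwise 'natural'.
    return "artificial" if thing["made_by_human"] else "natural"

# Memory: store the one-word reference instead of the full set of members.
# Time: the rule lets us expand the reference back into its members on demand.
natural_things = [name for name, t in things.items() if label(t) == "natural"]
print(natural_things)  # ['river', 'finch'] -- one symbol now references both
```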
We should sort reasoning into the inductive and deductive types: inductive provides a working model, deductive provides a more consistent (less contradictory) model. Deductive conclusions are guaranteed to be true, as long as their premises are true. Inductive conclusions are held with a degree of confidence, and depend on how well the variables in the study were isolated. For the empire example in the original post, there are many variables other than computing power that affect the rise and fall of empires. Computing power is only one of many technologie...
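As a small illustration of the two kinds of conclusion (a sketch with hypothetical data, not drawn from the original post): deduction preserves certainty from its premises, while induction only yields a confidence that is weakened by uncontrolled variables.

```python
# Deduction: "every A is a B" + "x is an A"  =>  "x is a B", guaranteed
# as long as the premises themselves are true (hypothetical sets).
A = {"rome", "qin", "maurya"}            # empires with some trait X
B = {"rome", "qin", "maurya", "inca"}    # empires that declined
premises_hold = A <= B and "rome" in A   # both premises are true here
x_is_in_B = "rome" in B                  # the conclusion cannot fail if the premises hold

# Induction: a frequency over limited, imperfectly isolated observations
# gives a working model, held with a degree of confidence rather than certainty.
observations = [True, True, False, True, True]   # hypothetical: "declined after gaining computing power"
confidence = sum(observations) / len(observations)

print(premises_hold, x_is_in_B, confidence)  # True True 0.8
```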
Yes, the linked post makes a lot of sense: wet labs should be heavily regulated.
Most of the disagreement here is based on two premises:
A: Other vectors (wet labs, etc.) present a greater threat. Maybe, though intelligent weapons are the most clearly misanthropic variant of AI.
B: AI will become so powerful, so quickly, that limiting its vectors of attack will not be enough.
If B is true, the only solution is a general ban on AI research. However, this would need to be a coordinated effort across the globe. There is far more support for halting intelligent weapons development than for a general ban. A general ban could come as a subsequent agreement.
Superintelligence is inherently dangerous, yes. The rapid increase in capabilities is inherently destabilizing, yes. However, practically speaking, we humans can handle and learn from failure, provided it is not catastrophic. An unexpected superintelligence would be catastrophic. However, it will be hard to convince people to abandon currently benign AI models on the principle that they could spontaneously create a superintelligence. A more feasible approach would start with the most dangerous and misanthropic manifestations of AI: those that are specialized to kill humans.
Best to slow down the development of AI in sensitive fields until we have a clearer understanding of its capabilities.
"Advocacy pushes you down a path of simplifying ideas rather than clearly articulating what's true, and pushing for consensus for the sake of coordination regardless of whether you've actually found the right thing to coordinate on."
Human mercenaries causing a societal collapse? That would mean a large number of individuals who are willing to take orders from a machine to actively harm their communities. Very unlikely.
I'm wondering how you can hold that position given all the recent social disorder we've seen all over the world where social media driven outrage cycles have been a significant accelerating factor. People are absolutely willing to "take orders from a machine" (i.e. participate in collective action based on memes from social media) in order to "harm their communities" (i.e. cause violence and property destruction).
Yes, in the long term we will need a complete alignment strategy, such as permanent integration with our brains. However, before that happens, it would be prudent to limit the potential for a misaligned AI to cause permanent damage.
And, yes, we are in need of a more concrete plan and commitment from the people involved in the tech, especially with regards to lethal AI.
I weak-downvoted this: in general I think it is informative for people to just state their opinion, but in this case the opinion had very little to do with the content of the post and was not argued for. The linked post also did not engage with any of the existing arguments around TAI risk.
(Not that I disagree with "limiting the spread of autonomous weapons is going to lead to fewer human deaths in expectation", but I don't think it is the best strategy to limit such kinds of impact.)
Given the unpredictable emergent behavior in researchers' AI models, we will likely see emergent AI behavior with real-world consequences. We can limit these consequences by limiting the potential vectors of malignant behavior, the primary one being autonomous lethal weapons. See my post and the underlying comments for further details:
https://www.lesswrong.com/posts/b2d3yBzzik4hajGni/limit-intelligent-weapons
Existential danger is very much related to weapons. Of course, AI could pose an existential threat without access to weapons. However, weapons provide the most dangerous vector of attack for a rogue, confused, or otherwise misanthropic AI. We should focus more on this immediate and concrete risk before the more abstract theories of alignment.
Yes, sometimes we need to prevent humans from causing harm. For sub-national cases, current technology is sufficient for this. On the scale of nations, we should agree to concrete limits on the intelligence of weapons, and have faith in our fellow humans to follow these limits. Our governments have made progress on this issue, though there is more to be made.
For example:
https://www.csis.org/analysis/one-key-challenge-diplomacy-ai-chinas-military-does-not-want-talk
"With such loud public support in prominent Chinese venues, one might think that the U.S. mili...
Suppose an AI was building autonomous weapons in secret. This would involve some of the most advanced technology currently available. It would need to construct a sophisticated factory in a secret location, or else hide it within a shell company. The first would be very unlikely; the second is more plausible, though still unlikely. Better regulation and examination of weapons manufacturers could help mitigate this problem.
Items of response:
Social, economic, or environmental changes happen relatively slowly, on the scale of months or years, compared to potent weapons, which can destroy whole cities in a single day. Therefore, conventional weapons would be a much more immediate danger if corrupted by an AI. The other problems are important to solve, yes, but first humanity must survive its more deadly creations. The field of cybersecurity will continue to evolve in the coming decades. Hopefully world militaries can keep up, so that no rogue intelligence gains control of these weapons.