Lucas Pfeifer

Social, economic, or environmental changes happen relatively slowly, on the scale of months or years, compared to potent weapons, which can destroy whole cities in a single day. Therefore, conventional weapons would be a much more immediate danger if corrupted by an AI. The other problems are important to solve, yes, but first humanity must survive its more deadly creations. The field of cybersecurity will continue to evolve in the coming decades. Hopefully world militaries can keep up, so that no rogue intelligence gains control of these weapons.

It is a broad definition, yes, for the purpose of discussing the potential for the tools in question to be used against humans.

My point is this: we should focus first on limiting the most potent vectors of attack: those which involve conventional 'weapons'. Less potent vectors (those not commonly considered weapons), such as a 'stock trading algorithm', are of lower priority, since they offer more opportunities for detection and mitigation.

An algorithm that amasses wealth should eventually set off red flags (maybe banks need to improve their audits and identification requirements). Additionally, wealth is only useful when spent on a specific purpose, and those purposes could be countered by a government, provided it possesses sufficient 'weapons' to eliminate the offending machines.

If this algorithm takes actions so subtle that they cannot be detected in time to prevent catastrophe, then we are doomed. However, it is also likely that the algorithm will have weaknesses that allow it to be detected.

Entities compete in various ways, yes. Competition is an attack on another entity's chances of survival. Let's define a weapon as any tool which could be used to mount an attack. Of course, every tool could be used as a weapon in some sense. It's a question of how much risk our tools would pose to us if they were used against us.

These memes have been magnified by the words of politicians and media. We need our leaders to discuss things more reasonably. 

That said, restricting social media could also make sense. A requirement for in-person verification and limitation to a single account per site could be helpful.

More stringent (in-person) verification of bank account ownership could mitigate this risk.

Anyway, the chance of discovery for any covert operation grows with the size of the operation and the time it takes to execute. The more we pre-limit the tools available to a rogue machine for causing immediate harm, the more likely we are to catch it in the act.
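As a rough illustration of that claim (a toy model of my own, not anything established): if each unit of an operation's size has some small, independent chance of being noticed per unit of time, then the overall chance of detection compounds with both size and duration.

```python
# Toy model (my own illustration): assume each unit of "operation size" has an
# independent chance p of being noticed per unit of time. Detection then
# compounds with both the size and the duration of the operation.
def detection_probability(p_per_unit, size, duration):
    """Chance that a covert operation is detected at least once."""
    exposures = size * duration                   # total detection opportunities
    return 1.0 - (1.0 - p_per_unit) ** exposures

print(detection_probability(0.01, size=1,  duration=10))   # ~0.10
print(detection_probability(0.01, size=10, duration=10))   # ~0.63
```

Under these assumptions, a bigger or slower operation gives defenders many more chances to notice it, which is the intuition behind pre-limiting the tools available for immediate harm.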

Which kinds of power do you refer to? Most kinds of power require human cooperation. The danger that an AI tricks us into destroying ourselves is small (though a false detection of a nuclear launch could do it). We need much more cooperation between world leaders, and a much more positive dialogue between them.

Abstraction means assigning a symbol to reference a set of other symbols. It saves time and memory: time by allowing retrieval of data based on a set of rules, memory by shrinking the size of the reference. 

For example, take the words 'natural' and 'artificial': we sort things under one of these labels based on whether or not they were made by a human. A 'natural' thing could be 'physical' or 'biological'. An 'artificial' thing could be 'theory' or 'implementation'. If I don't need to distinguish between physical and biological things, then instead of referring to them directly, I can use the more abstract reference of 'natural' things, saving space and time in my statement.

The challenge with natural language abstraction is agreeing on definitions. Many would define the terms in the above example differently. The more we can agree on definitions of terms, the better we can reason about their subsets.
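As a minimal sketch of this idea (the labels are just the illustrative ones from the example above, not a standard taxonomy): an abstract symbol is simply a short reference that expands to a set of more concrete symbols.

```python
# A minimal sketch of the abstraction described above. The labels are the
# illustrative ones from the example, not a standard taxonomy.
categories = {
    "natural":    {"physical", "biological"},
    "artificial": {"theory", "implementation"},
}

def expand(label):
    """Resolve an abstract label to the concrete symbols it stands for."""
    return categories.get(label, {label})

# One short reference ("natural") retrieves the whole set of members,
# instead of spelling each member out every time.
print(expand("natural"))  # {'physical', 'biological'}
```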

In a logically valid system of abstraction, any symbol can be related to every other symbol: either by a common parent reference, or by using one to refer to the other.
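A small sketch of that claim, reusing the same illustrative labels plus a hypothetical root symbol 'thing': any two symbols in such a hierarchy can be related by walking up their parent references until a shared one is found.

```python
# A sketch of the claim above: in a hierarchy of symbols, any two symbols are
# related either directly or through a common parent. The labels reuse the
# earlier example, plus a hypothetical root symbol "thing".
parent = {
    "physical": "natural", "biological": "natural",
    "theory": "artificial", "implementation": "artificial",
    "natural": "thing", "artificial": "thing",
}

def ancestors(symbol):
    """The chain of references from a symbol up to the root."""
    chain = [symbol]
    while chain[-1] in parent:
        chain.append(parent[chain[-1]])
    return chain

def common_parent(a, b):
    """First shared reference on the two ancestor chains."""
    seen = set(ancestors(a))
    return next(s for s in ancestors(b) if s in seen)

print(common_parent("physical", "theory"))      # 'thing'
print(common_parent("physical", "biological"))  # 'natural'
```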

We should sort reasoning into the inductive and deductive types: inductive provides a working model, deductive provides a more consistent (less contradictory) model. Deductive conclusions are guaranteed to be true, as long as their premises are true. Inductive conclusions are held with a degree of confidence, and depend on how well the variables in the study were isolated. For the empire example in the original post, there are many variables other than computing power that affect the rise and fall of empires. Computing power is only one of many technologies, and besides technology, there is finance, military, culture, food, health, education, natural disaster, religion, etc. Adding to the uncertainty is the small sample size, relative to the number of variables.

However, we can more easily isolate the effect of computing power on census taking, as mentioned, just as we can draw a more confident connection between the printing press and literacy rates. Everything has its scale. Relate big to big, medium to medium, small to small. Build up a structure of microscopic relations to find macroscopic patterns.

Yes, the linked post makes a lot of sense: wet labs should be heavily regulated.

Most of the disagreement here is based on two premises:

A: Other vectors (wet labs, etc.) present a greater threat. Maybe, though intelligent weapons are the most clearly misanthropic variant of AI.

B: AI will become so powerful, so quickly, that limiting its vectors of attack will not be enough.

If B is true, the only solution is a general ban on AI research. However, this would need to be a coordinated effort across the globe. There is far more support for halting intelligent weapons development than for a general ban. A general ban could come as a subsequent agreement.
