Mark_Friedenbach comments on AlphaGo versus Lee Sedol - Less Wrong

Post author: gjm 09 March 2016 12:22PM




Comment author: Kaj_Sotala 10 March 2016 07:01:12PM 5 points [-]

They're working on figuring out what we want the AGI to do, not building one. (I believe Nate has stated this in previous LW comments.)

Comment author: [deleted] 10 March 2016 07:11:27PM -1 points [-]

Which unfortunately presumes that an AGI would be tasked with doing something and given free rein to do so, a truly naïve and unlikely outcome.

Comment author: Kaj_Sotala 10 March 2016 07:41:34PM 3 points [-]

How does it presume that?

Comment author: [deleted] 10 March 2016 09:42:40PM -2 points [-]

They're working on figuring out what we want the AGI to do

Aka friendliness research. But why does that matter? If the machine has no real effectors and lots of human oversight, then why should there even be concern over friendliness? It wouldn't matter in that context. If you tell a machine to do something and it finds an evil-stupid way of doing it, human intervention prevents any harm.

Why is it a concern at all whether we can assure ahead of time that the actions recommended by a machine are human-friendly, unless the machine is able to take those actions independently, without human intervention? Just don't do that and it stops being a concern.

Comment author: Kaj_Sotala 11 March 2016 11:23:04AM *  3 points [-]

Humanity is having trouble coordinating and enforcing even global restrictions on greenhouse gases. Try ensuring that nobody does anything risky or short-sighted with a technology that has no clear-cut threshold between a "safe" and a "dangerous" level of capability, and which can be beneficial in pretty much any competitive and financially lucrative domain.

Restricting the AI's capabilities may work for a short while, assuming that only a small group of pioneers manages to develop the initial AIs and they're responsible with their use of the technology - but as Bruce Schneier says, today's top-secret programs become tomorrow's PhD theses and the next day's common applications. If we want to survive in the long term, we need to figure out how to make the free-acting AIs safe, too - otherwise it's just a ticking time bomb until someone accidentally or intentionally releases theirs.

Comment author: TheAncientGeek 11 March 2016 02:42:55PM *  2 points [-]

Humanity has done more than zero and less than the optimum about things like climate change. Importantly, the situation is below the imminent existential threat level.

If you are going to complain that alternative proposals face coordination problems, you need to show that yours don't, or you are committing the fallacy of the dangling comparison. If people aren't going to refrain from building dangerously powerful superintelligences, assuming that is possible, why would they have the sense to fit MIRI's safety features, assuming they are possible? If the law can make people fit safety features, why can't it prevent them building dangerous AIs in the first place?

no clear-cut threshold between a "safe" and a "dangerous" level of capability

I would suggest the combination of generality and agency as that threshold. And what problem domain requires both?

Comment author: Kaj_Sotala 11 March 2016 06:31:27PM *  3 points [-]

If you allow for autonomously acting AIs, then you could have Friendly autonomous AIs tracking down and stopping Unfriendly / unauthorized AIs.

This of course depends on people developing the Friendly AIs first, but ideally it'd be enough for only the first people to get the design right, rather than depending on everyone being responsible.

Importantly, the situation is below the imminent existential threat level.

It's unclear whether AI risk will become obviously imminent, either. Goertzel & Pitt 2012 argue in section 3 of their paper that this is unlikely.

I would suggest a combination of generality and agency. And what problem domain requires both?

Business (which by nature covers just about every domain in which you can make a profit, which is to say just about every domain relevant for human lives), warfare, military intelligence, governance... (see also my response to Mark)

Comment author: TheAncientGeek 13 March 2016 09:04:21PM *  1 point [-]

If you allow for autonomously acting AIs, then you could have Friendly autonomous AIs tracking down and stopping Unfriendly / unauthorized AIs.

You could, but if you don't have autonomously acting agents, you don't need Gort AIs. Building an agentive superintelligence that is powerful enough to take down any other, as MIRI conceives it, is a very risky proposition, since you need to get the value system exactly right. So it's better not to be in a place where you have to do that.

This of course depends on people developing the Friendly AIs first, but ideally it'd be enough for only the first people to get the design right, rather than depending on everyone being responsible.

The first people have to be able, as well as willing, to get everything right. Safety through restraint is easier and more reliable -- you can omit a feature more reliably than you can add one.

Business (which by nature covers just about every domain in which you can make a profit, which is to say just about every domain relevant for human lives), warfare, military intelligence, governance...

These organizations have a need for widespread intelligence gathering, and for agentive AI, but that doesn't mean they need both in the same package. The military don't need their entire intelligence database in every drone, and don't want drones that change their mind about who the bad guys are in mid-flight. Businesses don't want HFT applications that decide capitalism is a bad thing.

We want agents to act on our behalf, which means we want agents that are predictable and controllable to the required extent. Early HFT had problems which led to the addition of limits and controls. Control and predictability are close to safety. There is no drive to power that is also a drive away from safety, because uncontrolled power is of no use.

Based on the behaviour of organisations, there seems to be a natural division between high-level, unpredictable decision-information systems and lower-level, faster-acting agentive systems. In other words, they voluntarily do some of what would be required for an incremental safety programme.

Comment author: Kaj_Sotala 14 March 2016 09:28:44AM 0 points [-]

I agree that it would be better not to have autonomously acting AIs, but not having any autonomously acting AIs would require a way to prevent anyone deploying them, and so far I haven't seen a proposal for that which seems even remotely feasible.

And if we can't stop them from being deployed, then deploying Friendly AIs first looks like the scenario that's more likely to work - which still isn't to say very likely, but at least it seems to have a chance of working even in principle. I don't see even an in-principle way for "just don't deploy autonomous AIs" to work.

Comment author: TheAncientGeek 15 March 2016 09:58:44AM *  0 points [-]

When you say autonomous AIs, do you mean AIs that are autonomous and superintelligent?

Do you think they could be deployed by basement hackers, or only by large organisations?

Do you think organisations like militaries or businesses have a motivation to deploy them?

Do you agree that there are dangers to an FAI project that goes wrong?

Do you have a plan B to cope with an FAI that goes rogue?

Do you think that having an AI potentially running the world is an attractive idea to a lot of people?

Comment author: Lumifer 14 March 2016 03:06:56PM 0 points [-]

then you could have Friendly autonomous AIs tracking down and stopping Unfriendly / unauthorized AIs.

Somehow that reminds me of Sentinels from X-Men: Days of Future Past.

Comment author: [deleted] 11 March 2016 04:00:10PM -1 points [-]

I think you very much misunderstand my suggestion. I'm saying that there is no reason to presume AI will be given the keys to the kingdom from day one, not advocating for some sort of regulatory regime.

Comment author: Kaj_Sotala 11 March 2016 06:28:08PM 3 points [-]

So what do you see as the mechanism that will prevent anyone from handing the AI those keys, given the tremendous economic pressure towards doing exactly that?

As we discussed in Responses to AGI Risk:

As with a boxed AGI, there are many factors that would tempt the owners of an Oracle AI to transform it to an autonomously acting agent. Such an AGI would be far more effective in furthering its goals, but also far more dangerous.

Current narrow-AI technology includes HFT algorithms, which make trading decisions within fractions of a second, far too fast to keep humans in the loop. HFT seeks to make a very short-term profit, but even traders looking for a longer-term investment benefit from being faster than their competitors. Market prices are also very effective at incorporating various sources of knowledge [135]. As a consequence, a trading algorithmʼs performance might be improved both by making it faster and by making it more capable of integrating various sources of knowledge. Most advances toward general AGI will likely be quickly taken advantage of in the financial markets, with little opportunity for a human to vet all the decisions. Oracle AIs are unlikely to remain as pure oracles for long.

Similarly, Wallach [283] discusses the topic of autonomous robotic weaponry and notes that the US military is seeking to eventually transition to a state where the human operators of robot weapons are ‘on the loop’ rather than ‘in the loop’. In other words, whereas a human was previously required to explicitly give the order before a robot was allowed to initiate possibly lethal activity, in the future humans are meant to merely supervise the robotʼs actions and interfere if something goes wrong.

Human Rights Watch [90] reports on a number of military systems which are becoming increasingly autonomous, with the human oversight for automatic weapons defense systems—designed to detect and shoot down incoming missiles and rockets—already being limited to accepting or overriding the computerʼs plan of action in a matter of seconds. Although these systems are better described as automatic, carrying out pre-programmed sequences of actions in a structured environment, than autonomous, they are a good demonstration of a situation where rapid decisions are needed and the extent of human oversight is limited. A number of militaries are considering the future use of more autonomous weapons.

In general, any broad domain involving high stakes, adversarial decision making and a need to act rapidly is likely to become increasingly dominated by autonomous systems. The extent to which the systems will need general intelligence will depend on the domain, but domains such as corporate management, fraud detection and warfare could plausibly make use of all the intelligence they can get. If oneʼs opponents in the domain are also using increasingly autonomous AI/AGI, there will be an arms race where one might have little choice but to give increasing amounts of control to AI/AGI systems.

Miller [189] also points out that if a person was close to death, due to natural causes, being on the losing side of a war, or any other reason, they might turn even a potentially dangerous AGI system free. This would be a rational course of action as long as they primarily valued their own survival and thought that even a small chance of the AGI saving their life was better than a near-certain death.

Some AGI designers might also choose to create less constrained and more free-acting AGIs for aesthetic or moral reasons, preferring advanced minds to have more freedom.

Comment author: TheAncientGeek 11 March 2016 01:49:02PM *  1 point [-]

I suspect that this dates back to a time when MIRI believed the answer to AI safety was to build an agentive, maximal superintelligence, align its values with ours, and put it in charge of all the other AIs.

The first idea has been effectively shelved, since MIRI has produced about zero lines of code, but the idea that AI safety is value alignment continues with considerable momentum. And value alignment only makes sense if you are building an agentive AI (and have given up on corrigibility).