
Comment author: capybaralet 05 January 2017 04:56:37AM 3 points [-]

Looking at what they've produced to date, I don't really expect MIRI and CHCAI to produce very similar work. I expect Russell's group to be more focused on value learning and corrigibility vs. reliable agent designs (MIRI).

Comment author: capybaralet 26 September 2016 10:48:41PM *  1 point [-]

Does anyone have any insight into how VoI plays with Bayesian reasoning?

At a glance, it looks like the VoI is usually not considered from a Bayesian viewpoint, as it is here. For instance, wikipedia says:

""" A special case is when the decision-maker is risk neutral where VoC can be simply computed as; VoC = "value of decision situation with perfect information" - "value of current decision situation" """

From the perspective of avoiding wireheading, an agent should be incentivized to gain information even when this information decreases its (subjective) "value of decision situation". For example, consider a Bernoulli 2-armed bandit:

If the agent's prior over each arm's reward probability is uniform over [0,1], its current subjective value is .5 (playing arm1). After many observations, it learns (with high confidence) that arm1 has an expected reward of .1 and arm2 has an expected reward of .2. It should be glad to know this (so it can switch to the optimal policy of playing arm2), BUT the subjective value of the decision situation is now lower than when it was ignorant, because .2 < .5.
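A minimal sketch of that example, with Beta posteriors standing in for the agent's beliefs (the number of pulls, the seed, and the bookkeeping are my own arbitrary choices, just to make it concrete):

```python
import numpy as np

# Two-armed Bernoulli bandit: learning the true arm values lowers the agent's
# *subjective* value of its decision situation, yet the information is still
# worth having.

rng = np.random.default_rng(0)
true_p = {"arm1": 0.1, "arm2": 0.2}          # unknown to the agent

# Uniform prior over each arm's success probability = Beta(1, 1).
posterior = {arm: [1, 1] for arm in true_p}  # [alpha, beta] per arm

def subjective_value(post):
    """Expected reward of the best arm under the agent's current beliefs."""
    return max(a / (a + b) for a, b in post.values())

print("subjective value before observing:", subjective_value(posterior))  # 0.5

# Observe each arm many times and update the Beta posteriors.
for arm, p in true_p.items():
    pulls = rng.random(10_000) < p
    posterior[arm][0] += int(pulls.sum())
    posterior[arm][1] += int((~pulls).sum())

print("subjective value after observing: ", subjective_value(posterior))  # ~0.2
```

The subjective value drops from .5 to ~.2, but the agent now actually plays the better arm; an agent that only cared about keeping its subjective value high would have preferred to stay ignorant, which is exactly the wireheading-flavoured failure.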

Problems with learning values from observation

0 capybaralet 21 September 2016 12:40AM

I dunno if this has been discussed elsewhere (pointers welcome).

Observational data doesn't allow one to distinguish correlation and causation.
This is a problem for an agent attempting to learn values without being allowed to make interventions.

For example, suppose that happiness is just a linear function of how much Utopamine is in a person's brain.
If a person smiles only when their Utopamine concentration is above 3 ppm, then a value-learner which observes both someone's Utopamine level and facial expression, and tries to predict their reported happiness from these features, will notice that smiling is correlated with higher reported happiness and thus erroneously believe that smiling is partially responsible for the happiness.
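A toy sketch of how this goes wrong for a purely observational learner. The numbers, and in particular the measurement noise I add to the Utopamine readings so that the smile bit carries some independent predictive signal, are my own assumptions; they are one way to make the confound visible to a regressor:

```python
import numpy as np

# Happiness is caused only by Utopamine; smiling is a downstream indicator.
# A regressor fit on observational data still gives smiling real weight.

rng = np.random.default_rng(0)
n = 100_000
utopamine = rng.uniform(0, 6, size=n)             # ppm, the true cause
smiling = (utopamine > 3).astype(float)           # smile iff above 3 ppm
happiness = utopamine + 0.1 * rng.normal(size=n)  # caused by Utopamine alone

# The learner only sees a noisy Utopamine measurement plus the smile bit.
observed_utopamine = utopamine + 0.5 * rng.normal(size=n)
X = np.column_stack([observed_utopamine, smiling, np.ones(n)])
coef, *_ = np.linalg.lstsq(X, happiness, rcond=None)

print("weight on Utopamine:", coef[0])
print("weight on smiling:  ", coef[1])  # nonzero: smiling 'absorbs' credit
# Smiling looks partially responsible for happiness. Only an intervention
# (force a smile without raising Utopamine) would separate the two.
```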

------------------
An implication:
I have a picture of value learning where the AI learns via observation (since we don't want to give an unaligned AI access to actuators!).
But this makes it seem important to consider how to make an unaligned AI safe enough to perform value-learning-relevant interventions.

Comment author: WhySpace 28 August 2016 03:44:40AM 1 point [-]

I actually brought up a similar question in the open thread, but it didn't really go very far. May or may not be worth reading, but it's still not clear to me whether such a thing is even practical. It's likely that all substantially easier AIs are too far from FAI to still be a net good.

I've come a little closer to answering my questions by stumbling on this Future of Humanity Institute video on "Reduced Impact AI". Apparently that's the technical term for it. I haven't had a chance to look for papers on the subject, but perhaps some exist. No hits on Google Scholar, but a quick search shows a couple of mentions on LW and MIRI's website.

Comment author: capybaralet 30 August 2016 12:22:45AM 0 points [-]

It seems like most people think that reduced impact is as hard as value learning.

I think that's not quite true; it depends on details of the AI's design.

I don't agree that "it's likely that all substantially easier AIs are too far from FAI to still be a net good", but I suspect the disagreement comes from different notions of "AI" (as many disagreements do).

Taking a broad definition of AI, I think there are many techniques (like supervised learning) that are probably pretty safe and can do a lot of narrow AI tasks (and can maybe even be composed into systems capable of general intelligence). For instance, I think the kinds of systems being built today are a net good (but might not be if given more data and compute, especially those based on reinforcement learning).

Comment author: moridinamael 29 August 2016 01:17:46PM *  1 point [-]

Is it even possible to have a perfectly aligned AI?

If you teach an AI to model the function f(x) = sin(x), it will only be "aligned" with your goal of computing sin(x) to the point of computational accuracy. You either accept some arithmetic cutoff or the AI turns the universe to computronium in order to better approximate Pi.
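As a toy illustration of that cutoff point (my own sketch, with an arbitrarily chosen tolerance; nothing here is from the comment):

```python
import math

def approx_sin(x: float, tol: float = 1e-12) -> float:
    """Taylor series for sin(x), truncated once terms fall below `tol`.

    The explicit tolerance is the 'arithmetic cutoff': the model is only
    aligned with the goal 'compute sin(x)' up to this accuracy.
    """
    term, total, n = x, 0.0, 0
    while abs(term) > tol:
        total += term
        n += 1
        term *= -x * x / ((2 * n) * (2 * n + 1))
    return total

x = 1.0
print(approx_sin(x), math.sin(x))  # agree only up to the chosen cutoff
```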

If you try to teach an AI something like handwritten digit classification, it'll come across examples that even a human wouldn't be able to identify accurately. There is no "truth" to whether a given image is a 6 or a very badly drawn 5, other than the intent of the person who wrote it. The AI's map can't really be absolutely correct because the notion of correctness is not unambiguously defined in the territory. Is it a 5 because the person who wrote it intended it to be a 5? What if 75% of humans say it's a 6?
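One way to make the "75% of humans say it's a 6" case operational (my own illustration, not a claim about how any particular system handles it) is to score the model against the distribution of human labels rather than a single "true" class:

```python
import numpy as np

# The target itself encodes the ambiguity: a hypothetical annotator split
# rather than a ground-truth digit.
human_votes = {"5": 0.25, "6": 0.75}
target = np.array([human_votes["5"], human_votes["6"]])

model_probs = np.array([0.40, 0.60])  # model's predicted P(5), P(6)
cross_entropy = -(target * np.log(model_probs)).sum()
print(cross_entropy)
# The model is judged by how well it matches human disagreement,
# not by recovering a nonexistent true label.
```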

Since there will always be both computational imprecision and epistemological uncertainty from the territory, the best you can ever do is probably an approximate solution that captures what is important to the degree of confidence we ultimately decide is sufficient.

Comment author: capybaralet 30 August 2016 12:16:03AM 0 points [-]

I edited to clarify what I mean by "approximate value learning".

Risks from Approximate Value Learning

1 capybaralet 27 August 2016 07:34PM

Solving the value learning problem is (IMO) the key technical challenge for AI safety.
How good or bad is an approximate solution?

EDIT for clarity:
By "approximate value learning" I mean something which does a good (but suboptimal from the perspective of safety) job of learning values.  So it may do a good enough job of learning values to behave well most of the time, and be useful for solving tasks, but it still has a non-trivial chance of developing dangerous instrumental goals, and is hence an Xrisk.

Considerations:

1. How would developing good approximate value learning algorithms affect AI research/deployment?
It would enable more AI applications.  For instance, many robotics tasks, such as "smooth grasping motion", are difficult to manually specify a utility function for.  This could have positive or negative effects:

Positive:
* It could encourage more mainstream AI researchers to work on value-learning.

Negative:
* It could encourage more mainstream AI developers to use reinforcement learning to solve tasks for which "good-enough" utility functions can be learned.
Consider a value-learning algorithm which is "good-enough" to learn how to perform complicated, ill-specified tasks (e.g. folding a towel).  But it's still not quite perfect, and so every second, there is a 1/100,000,000 chance that it decides to take over the world. A robot using this algorithm would likely pass a year-long series of safety tests and seem like a viable product, but would be expected to decide to take over the world in ~3 years (see the back-of-the-envelope calculation below).
Without good-enough value learning, these tasks might just not be solved, or might be solved with safer approaches involving more engineering and less performance, e.g. using a collection of supervised learning modules and hand-crafted interfaces/heuristics.
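A quick check of those numbers, treating each second as an independent 1-in-10^8 trial (which is just the assumption stated in the example):

```python
# Back-of-the-envelope arithmetic for the towel-folding example above.
P_FAIL_PER_SECOND = 1e-8
SECONDS_PER_YEAR = 365.25 * 24 * 3600   # ~3.16e7

p_survive_one_year = (1 - P_FAIL_PER_SECOND) ** SECONDS_PER_YEAR
expected_years_to_failure = 1 / P_FAIL_PER_SECOND / SECONDS_PER_YEAR

print(f"chance of passing a year of testing: {p_survive_one_year:.2f}")    # ~0.73
print(f"expected time to catastrophe: {expected_years_to_failure:.1f} yr") # ~3.2
# So the robot more likely than not looks fine after a year of tests, yet is
# expected to fail catastrophically within roughly three years of deployment.
```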

2. What would a partially aligned AI do? 
An AI programmed with an approximately correct value function might fail 
* dramatically (see, e.g. Eliezer, on AIs "tiling the solar system with tiny smiley faces.")
or
* relatively benignly (see, e.g. my example of an AI that doesn't understand gustatory pleasure)

Perhaps a more significant example of benign partial-alignment would be an AI that has not learned all human values, but is corrigible and handles its uncertainty about its utility in a desirable way.

Comment author: root 31 July 2016 08:29:02PM *  0 points [-]

open-source prisoner's dilemma

I believe the GNU GPL was made to address this.

It seems like we are moving in this direction, with things like Ethereum that enable smart contracts.

Does anyone have proof that Ethereum is secure? There's also the issue of giving whoever runs Ethereum complete authority over those 'smart contracts', and that could easily turn into 'pay me to make the contract even smarter'.

Technology should enable us to enforce more real-world precommitments, since we'll be able to more easily monitor and make public our private data.

People are going to adapt. And I see no reason why anybody would share particularly private stuff with everyone.

And then there's the part where things look so awesome they can easily become bad: I can imagine someone being blackmailed into one of those contracts. And plenty of other, 'welcome to the void' kind of stuff. Where's Voldie when you need him?

Comment author: capybaralet 23 August 2016 05:54:56PM 0 points [-]

People will be incentivized to share private things if robust public precommitments become available, because we all stand to benefit from more information. Because of human nature, we might settle on some agreement where some information is private, or differentially private, and/or where private information is only accessed via secure computation to determine things relevant to the public interest.

Comment author: ChristianKl 01 August 2016 10:54:09AM 0 points [-]

We already have legal contracts to do this. If I make a website and sell a product, I want people to cooperate. They can make a contract with me and then I am precommitted to deliver them the product they paid for.

Comment author: capybaralet 23 August 2016 05:52:42PM 0 points [-]

Contracts are limited in what they can include, and require a government to enforce them.

Comment author: Lumifer 01 August 2016 04:09:52PM 2 points [-]

How do you distinguish precommitments from simple contracts?

If you are standing in the market selling apples for a dollar a pound, have you precommitted to anything?

Generally speaking, precommitments are expensive because you pay with optionality, the ability to make a choice later. There must be a good reason to precommit, something other than "wouldn't it be generally useful".

Comment author: capybaralet 23 August 2016 05:51:45PM 0 points [-]

Precommitments are more general, since they don't require more than one party, but they are very similar.

Currently, contracts are usually enforced by the government, and there are limits to what can be included in a contract, and the legality of the contract can be disputed.

Binding precommitments would be useful for enabling cooperation in inefficient games: http://lesswrong.com/lw/nv3/inefficient_games/

Inefficient Games

14 capybaralet 23 August 2016 05:47PM

There are several well-known games in which the Pareto optima and Nash equilibria are disjoint sets.
The most famous is probably the prisoner's dilemma.  Races to the bottom or tragedies of the commons typically have this feature as well.

I proposed calling these inefficient games.  More generally, games where the sets of Pareto optima and Nash equilibria are distinct (but not disjoint), such as a stag hunt, could be called potentially inefficient games.
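As a concrete illustration (the payoff numbers below are standard textbook choices of mine, not from the post), here is a small script that finds the pure-strategy Nash equilibria and the Pareto optima of a prisoner's dilemma and a stag hunt:

```python
import itertools
import numpy as np

def pure_nash(payoffs):
    """Pure-strategy Nash equilibria of a 2-player bimatrix game."""
    n_rows, n_cols, _ = payoffs.shape
    eq = []
    for r, c in itertools.product(range(n_rows), range(n_cols)):
        best_row = payoffs[r, c, 0] >= payoffs[:, c, 0].max()
        best_col = payoffs[r, c, 1] >= payoffs[r, :, 1].max()
        if best_row and best_col:
            eq.append((r, c))
    return eq

def pareto_optima(payoffs):
    """Outcomes that are not Pareto-dominated by any other outcome."""
    cells = list(itertools.product(range(payoffs.shape[0]), range(payoffs.shape[1])))
    opt = []
    for r, c in cells:
        dominated = any(
            (payoffs[r2, c2] >= payoffs[r, c]).all()
            and (payoffs[r2, c2] > payoffs[r, c]).any()
            for r2, c2 in cells
        )
        if not dominated:
            opt.append((r, c))
    return opt

# Prisoner's dilemma: strategies (Cooperate, Defect); entries (row payoff, col payoff).
pd = np.array([[(3, 3), (0, 5)],
               [(5, 0), (1, 1)]])
# Stag hunt: strategies (Stag, Hare).
stag = np.array([[(4, 4), (0, 3)],
                 [(3, 0), (3, 3)]])

for name, game in [("prisoner's dilemma", pd), ("stag hunt", stag)]:
    print(name, "Nash:", pure_nash(game), "Pareto:", pareto_optima(game))
# PD: Nash = [(1, 1)] (Defect, Defect), Pareto = [(0, 0), (0, 1), (1, 0)] -- disjoint.
# Stag hunt: Nash = [(0, 0), (1, 1)], Pareto = [(0, 0)] -- distinct but overlapping.
```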

It seems worthwhile to study (potentially) inefficient games as a class and see what can be discovered about them, but I don't know of any such work (pointers welcome!).
