Manfred comments on What can go wrong with the following protocol for AI containment? - Less Wrong

0 Post author: ZoltanBerrigomo 11 January 2016 11:03PM



Comment author: Manfred 12 January 2016 01:42:17AM 1 point

Even if the huge computing or algorithmic advances needed fell out of the sky tomorrow, this scheme still doesn't seem to solve the problems we really want it to solve, because it does not allow the agents to learn anything interesting about our world.

Comment author: Slider 12 January 2016 10:17:13AM 0 points

If I try to calculate 6 divided by 3 on a calculator and it answers "you need to exercise more," have I been served better because it answered a better question?

Comment author: Manfred 12 January 2016 10:56:59AM 0 points

Pretend that instead of "exercise more," the calculator gave you advice that was actually valuable. Then yes. Just because you expect it to be a calculator doesn't mean it can't serve you better by doing something more valuable.

Comment author: ZoltanBerrigomo 12 January 2016 05:59:07AM * 0 points
  1. When talking about dealing with (or not interacting with) real AIs, one is always talking about a future world with significant technological advances relative to our world today.

  2. If we can formulate something as a question about math, physics, chemistry, or biology, then we can potentially attack it with this scheme. These are definitely problems we really want to solve.

  3. It's true that if we allow AIs more knowledge of and more access to our world, they could potentially help us more -- but of course the number of things that can go wrong increases as well. Perhaps a compromise that sacrifices some of the potential benefit while decreasing the possibilities for things to go wrong is better.