TheAncientGeek comments on The Brain as a Universal Learning Machine - Less Wrong

82 Post author: jacob_cannell 24 June 2015 09:45PM


Comment author: jacob_cannell 25 June 2015 08:26:19PM 1 point [-]

We have some initial ideas for computable versions of curiosity and controlism (there is no good word in English for the desire/drive to be in control).

Autonomy? Arguably that's Greek...

I like it.

I do not believe the demand for or potential of oracle AI is remotely comparable to that of agentive AI. People will want agents to do their bidding, create wealth for them, help them live better, etc.

(Replying to my own text above). On consideration this is wrong - Google is more or less an oracle AI, and there is high demand for that. The demand for agenty AI is probably much greater, but there is still a role/demand for oracle AI and a lot of other stuff in between.

So there are good reasons for thinking that installing subsets of human value would be both easier and safer.

Totally. I think this also goes hand in hand with understanding more about human values - how they evolved, how they are encoded, what is learned or not, etc.

Altruism, in particular, is not needed for a limited agentive AI. Such AIs would perform specialised tasks, leaving it to humans to stitch the results into something that fulfils their values. We don't want a Google car that takes us where it guesses we want to go.

Of course - there are many niches for more specialized or limited agentive AI, and these designs probably don't need altruism. That's important more for the complex general agents, which would control/manage the specialists, narrow AIs, other software, etc.

Comment author: TheAncientGeek 26 June 2015 08:21:52AM 3 points [-]

That's important more for the complex general agents, which would control/manage the specialists, narrow AIs, other software, etc.

That seems to be reintroducing God AI. I think people would want to keep humans in the loop. That's both a prediction and a means of AI safety.