eli_sennesh comments on MIRI's Approach - Less Wrong

34 Post author: So8res 30 July 2015 08:03PM


Comment author: [deleted] 31 July 2015 04:13:21AM *  2 points [-]

My perhaps predictable reply is that this safety could be demonstrated experimentally - for example by demonstrating altruism/benevolence as you scale up the AGI in terms of size/population, speed, and knowledge/intelligence.

There's a big difference between the hopelessly empirical school of machine learning, in which things are shown in experiments and then accepted as true, and real empirical science, in which we show things in small-scale experiments to build theories of how the systems in question behave in the large scale.

You can't actually get away without any theorizing, on the basis of "Oh well, it seems to work. Ship it." That's actually bad engineering, although it's more commonly accepted in engineering than in science. In a real science, you look for the laws that underlie your experimental results, or at least causally robust trends.

If the brain is efficient, then successful AGI is highly likely to take the form of artificial brains.

If the brain is efficient, and it is, then you shouldn't try to cargo-cult copy the brain, any more than we cargo-culted feathery wings to make airplanes. You experiment, you theorize, you find out why it's efficient, and then you strip that of its evolutionarily coincidental trappings and make an engine based on a clear theory of which natural forces govern the phenomenon in question -- here, thought.

Comment author: jacob_cannell 31 July 2015 06:55:46AM 3 points [-]

If the brain is efficient, and it is, then you shouldn't try to cargo-cult copy the brain, any more than we cargo-culted feathery wings to make airplanes.

The Wright brothers copied both wings (for lift) and wing warping (for 3D control) from birds. Only the forward propulsion was different.

make an engine based on a clear theory of which natural forces govern the phenomenon in question -- here, thought.

We already have that - it's called a computer. AGI is much more specific and anthropocentric because it is relative to our specific society/culture/economy. It requires predicting and modelling human minds - and the structure of efficient software that can predict a human mind is itself a human mind.

Comment author: capybaralet 19 September 2015 07:31:53PM 1 point [-]

"the structure of efficient software that can predict a human mind is itself a human mind." - I doubt that. Why do you think this is the case? I think there are already many examples where simple statistical models (e.g. linear regression) can do a better job of predicting some things about a human than an expert human can.
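The point about simple statistical models can be made concrete. Below is a minimal sketch (with synthetic, illustrative data, not any real dataset) of ordinary least-squares regression predicting a human outcome from a couple of observable features, no model of a mind involved:

```python
import numpy as np

# Hypothetical illustration: predict a human outcome (say, a performance
# score) from two observable features using plain least squares.
# The data are synthetic; the weights below are made up for the example.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2))                      # two features per person
true_w = np.array([1.5, -0.7])                     # assumed "true" effect sizes
y = X @ true_w + rng.normal(scale=0.1, size=100)   # noisy observed outcome

# Fit by ordinary least squares, with an intercept column appended.
A = np.column_stack([X, np.ones(len(X))])
w, *_ = np.linalg.lstsq(A, y, rcond=None)

# Measure fit with R^2: fraction of outcome variance the model explains.
pred = A @ w
r2 = 1 - np.sum((y - pred) ** 2) / np.sum((y - y.mean()) ** 2)
print(r2 > 0.9)
```

Nothing here resembles a human mind structurally, yet on outcomes with a roughly linear signal this kind of model routinely matches or beats expert clinical judgment.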

Also, although I don't think there is "one true definition" of AGI, I think there is a meaningful one which is not particularly anthropocentric, see Chapter 1 of Shane Legg's thesis: http://www.vetta.org/documents/Machine_Super_Intelligence.pdf.

"Intelligence measures an agent’s ability to achieve goals in a wide range of environments."

So, arguably that should include environments with humans in them. But to succeed, an AI would not necessarily have to predict or model human minds; it could instead, e.g. kill all humans, and/or create safeguards that would prevent its own destruction by any existing technology.
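For reference, Legg's thesis formalizes the quoted definition as the "universal intelligence" measure, roughly:

\[
\Upsilon(\pi) \;=\; \sum_{\mu \in E} 2^{-K(\mu)} \, V_\mu^\pi
\]

where \(\pi\) is the agent, \(E\) is the set of computable environments, \(K(\mu)\) is the Kolmogorov complexity of environment \(\mu\), and \(V_\mu^\pi\) is the agent's expected total reward in \(\mu\). Nothing in the definition singles out humans; environments containing humans simply get whatever weight their complexity assigns them.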

Comment author: [deleted] 01 August 2015 02:42:11AM 0 points [-]

We already have that - it's called a computer.

What? No.

Comment author: jacob_cannell 01 August 2015 05:52:41AM *  0 points [-]

A computer is a bicycle for the mind. Logic is purified thought, computers are logic engines. General intelligence can be implemented by a computer, but it is much more anthrospecific.

Comment author: [deleted] 03 August 2015 03:41:20AM 3 points [-]

Logic is purified thought

With respect, no, it's just thought with all the interesting bits cut away to leave something so stripped-down it's completely deterministic.

computers are logic engines

Sorta-kinda. They're also arithmetic engines, floating-point engines, recording engines. They can be made into probability engines, which is the beginning of how you implement intelligence on a computer.
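A minimal sketch of what "made into a probability engine" means: a single Bayesian update, the basic operation behind probabilistic approaches to machine intelligence. The numbers are illustrative, not from any real problem:

```python
def bayes_update(prior, likelihood, evidence_prob):
    """Posterior P(H|E) = P(E|H) * P(H) / P(E), by Bayes' rule."""
    return likelihood * prior / evidence_prob

# Illustrative numbers: hypothesis H with prior P(H) = 0.3; an observation E
# with P(E|H) = 0.8 and marginal probability P(E) = 0.5.
posterior = bayes_update(prior=0.3, likelihood=0.8, evidence_prob=0.5)
print(posterior)  # 0.48
```

The hardware only ever does arithmetic; probability is a layer of interpretation imposed on that arithmetic, which is exactly the sense in which a computer can be "made into" a probability engine rather than being one natively.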