timtyler comments on Hedging our Bets: The Case for Pursuing Whole Brain Emulation to Safeguard Humanity's Future - Less Wrong

Post author: inklesspen 01 March 2010 02:32AM




Comment author: timtyler 03 March 2010 09:56:13PM 0 points

FWIW, I'm thinking of intelligence this way:

"Intelligence measures an agent's ability to achieve goals in a wide range of environments."

Nothing to do with humans, really.
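That quoted definition is Legg and Hutter's universal intelligence measure. Formally (a sketch in their notation, from memory), the score of an agent (policy) π is its expected performance averaged over all computable environments, weighted by simplicity:

```latex
\Upsilon(\pi) \;=\; \sum_{\mu \in E} 2^{-K(\mu)} \, V_{\mu}^{\pi}
```

Here E is the set of computable environments, K(μ) is the Kolmogorov complexity of environment μ, and V_μ^π is the expected total reward π earns in μ. Simpler environments (shorter programs) get exponentially more weight, which is the "sorta-short description" caveat raised below.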

Comment author: SilasBarta 03 March 2010 10:03:40PM 0 points

Then why should I care about intelligence by that definition? I want something that performs well in environments humans will want it to perform well in. That's a tiny, tiny fraction of the set of all computable environments.

Comment author: timtyler 03 March 2010 10:28:26PM -1 points

A universal intelligent agent should also perform very well in many real-world environments. That is part of the beauty of the idea of universal intelligence. A powerful universal intelligence could reasonably be expected to invent nanotechnology and fusion, cure cancer, and generally solve many of the world's problems.

Comment author: SilasBarta 03 March 2010 10:31:59PM 1 point

Oracles for uncomputable problems tend to be like that...

Comment author: SilasBarta 03 March 2010 10:35:16PM 0 points

Also, my point is that, yes, something impossibly good could do that. And that would be good. But performing well across all computable universes (with a sorta-short description, etc.) has costs, and one cost is optimality in this universe.

Since we have to choose, I want it optimal for this universe, for purposes we deem good.
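The trade-off being argued here can be made concrete with a toy sketch (all numbers hypothetical): a universal-style score is a weighted average over environments, so an agent can top that average while being strictly worse than a specialist in any one environment we actually care about.

```python
# Toy illustration of the generality-vs-optimality trade-off.
# Environment "A" stands in for our universe; the prior weights play the
# role of the 2^-K(mu) complexity weighting (hypothetical values).
weights = {"A": 0.75, "B": 0.25}

# Expected value each policy achieves in each environment (hypothetical).
policies = {
    "specialist": {"A": 1.0, "B": 0.0},   # tuned for A, useless elsewhere
    "generalist": {"A": 0.8, "B": 0.9},   # decent everywhere
}

def universal_score(values, weights):
    """Weighted-average performance across all environments."""
    return sum(weights[env] * v for env, v in values.items())

scores = {name: universal_score(v, weights) for name, v in policies.items()}

best_universal = max(scores, key=scores.get)                    # highest average
best_in_A = max(policies, key=lambda n: policies[n]["A"])       # best in A alone

print(best_universal, best_in_A)  # -> generalist specialist
```

The generalist wins on the universal measure (0.825 vs. 0.75) yet loses in environment A alone (0.8 vs. 1.0), which is the cost SilasBarta is pointing at.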

Comment author: timtyler 03 March 2010 10:47:32PM 0 points

A general agent is often sub-optimal on particular problems. However, it should be able to pick them up pretty quickly. Plus, it is a general agent, with all kinds of uses.

A lot of people are interested in building generally intelligent agents. We ourselves are highly general agents - i.e. you can pay us to solve an enormous range of different problems.

Generality of intelligence does not imply lack of adaptedness to some particular environment. Rather, it means the agent can potentially handle a broad range of problems. Specialized agents, on the other hand, fail completely on problems outside their domain.