
SilasBarta comments on Hedging our Bets: The Case for Pursuing Whole Brain Emulation to Safeguard Humanity's Future - Less Wrong

Post author: inklesspen, 01 March 2010 02:32AM (11 points)



You are viewing a single comment's thread.

Comment author: SilasBarta 03 March 2010 09:45:48PM * 0 points

Okay, let me explain it this way: when people refer to intelligence, a large part of what they have in mind is the knowledge that we (tacitly) have about a specific environment. Therefore, our bodies are highly informative about a large part (though certainly not the entirety!) of what is meant by intelligence.

In contrast, the only commonality with birds that is desired in the goal "powered human flight" is ... the flight thing. Birds have a solution, but they do not define the solution.

In both cases, I agree, the solution afforded by the biological system (bird or human) is not strictly necessary for the goal (flight or intelligence). And I agree that once certain insights are achieved (the workings of aerodynamic lift or the tacit knowledge humans have [such as the assumptions used in interpreting retinal images]), they can be implemented differently from how the biological system does it.

However, for a robot to match the utility of a human (e.g., a butler), it must know things specific to humans (like what words mean in a particular social context), not just intelligence-related things in general, like how to infer causal maps from raw data.

Comment author: timtyler 03 March 2010 09:56:13PM 0 points

FWIW, I'm thinking of intelligence this way:

"Intelligence measures an agent's ability to achieve goals in a wide range of environments."

Nothing to do with humans, really.
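(For reference, this is Legg and Hutter's definition. A rough sketch of the formal measure behind it, in standard notation rather than anything quoted from the comment: an agent \(\pi\)'s universal intelligence is

\[ \Upsilon(\pi) = \sum_{\mu \in E} 2^{-K(\mu)} \, V_\mu^\pi \]

where \(E\) is the set of computable environments, \(K(\mu)\) is the Kolmogorov complexity of environment \(\mu\), and \(V_\mu^\pi\) is the agent's expected total reward in \(\mu\). Simpler environments get exponentially more weight, but every computable environment contributes.)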

Comment author: SilasBarta 03 March 2010 10:03:40PM 0 points

Then why should I care about intelligence by that definition? I want something that performs well in environments humans will want it to perform well in. That's a tiny, tiny fraction of the set of all computable environments.

Comment author: timtyler 03 March 2010 10:28:26PM -1 points

A universal intelligent agent should also perform very well in many real-world environments. That is part of the beauty of the idea of universal intelligence. A powerful universal intelligence could reasonably be expected to invent nanotechnology and fusion, cure cancer, and generally solve many of the world's problems.

Comment author: SilasBarta 03 March 2010 10:31:59PM 1 point

Oracles for uncomputable problems tend to be like that...

Comment author: SilasBarta 03 March 2010 10:35:16PM 0 points

Also, my point is that, yes, something impossibly good could do that. And that would be good. But performing well across all computable universes (with a sorta-short description, etc.) has costs, and one cost is optimality in this universe.

Since we have to choose, I want it optimal for this universe, for purposes we deem good.

Comment author: timtyler 03 March 2010 10:47:32PM * 0 points

A general agent is often sub-optimal on particular problems. However, it should be able to pick them up pretty quickly. Plus, it is a general agent, with all kinds of uses.

A lot of people are interested in building generally intelligent agents. We ourselves are highly general agents - i.e. you can pay us to solve an enormous range of different problems.

Generality of intelligence does not imply lack-of-adaptedness to some particular environment. It means, rather, that the agent can potentially handle a broad range of problems. Specialized agents - on the other hand - fail completely on problems outside their domain.