whpearson comments on Concrete vs Contextual values - Less Wrong

-4 Post author: whpearson 02 June 2009 09:47AM


Comments (32)


Comment author: whpearson 02 June 2009 01:42:05PM 0 points [-]

I'm trying to argue more than that, does my edit to the post make it clearer?

Comment author: JulianMorrison 02 June 2009 03:07:03PM -1 points [-]

A bit. You're arguing that one intelligent system, comprising smarts and knowledge, can be differently effective depending on its context. Smarts might be less effective if opposed by something smarter. Knowledge might be less effective if it's mistaken or incomplete.

So far, not controversial.

What you haven't managed to do is dent the recursive self improvement hypothesis. That is, you haven't shown that "all things aren't equal" between an AI and its improved descendant self.

Comment author: whpearson 02 June 2009 03:59:00PM 0 points [-]

To me it seems obvious, looking at the history of the Earth, that the world changes, and what is effective at one point is not necessarily effective in the future.

Is it up to me to show that "all things aren't equal", or is it up to you to show that "all things are equal"? Whose opinion should be the default position that needs to be refuted?

I think I have given sufficient real-world examples to at least make further thought into this matter worthwhile. Probably we should both try to argue the other's side or something.

Comment author: orthonormal 02 June 2009 05:24:08PM 3 points [-]

Well, some things change, but the examples we have of general intelligence are all cross-domain enough to handle such change. Human beings are more intelligent than chimps; no plausible change in the environment that leaves both humans and chimps alive will result in chimps developing more optimization power than humans. The scientific community in the modern world does a better job of focusing human intelligence on problem-solving than does a hunter-gatherer religion; no change in the environment that leaves our scientists alive will allow our technology to be surpassed by the combined forces of animist tribes from the African jungles.

Comment author: timtyler 02 June 2009 06:59:15PM 1 point [-]

Repeated asteroid strikes that kill all multicellular creatures would be an example of an environmental change that prevented (or at least delayed) an intelligence explosion.

In a benign environment, nature appears to favour collecting computing elements together. The enormous modern data centres are the most recent example from a long history of intelligence deployments.

Comment author: JulianMorrison 02 June 2009 08:56:46PM -1 points [-]

"Equal" is the default - the rules are simpler. Exceptions need explanations.

Comment author: whpearson 02 June 2009 10:04:24PM *  0 points [-]

I think we might be getting too terse. I have explained some cases where the effectiveness of a collection of atoms at performing goals takes a different value depending on the environment. We need to explain those, so our function

intelligence = func(atoms a, environment e)

can't just be the simpler

intelligence = func(atoms a)

We sometimes need the environment in there, and we need to explain when it belongs and when it doesn't. What would justify making the equal case the default is if, over the space of all environments, the environment made no difference more often than not.
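The argument can be illustrated with a toy model (all names and numbers here are hypothetical, chosen only for illustration): if effectiveness is a function of both the agent and the environment, the ranking of two agents can flip between environments, which is exactly why `func(atoms)` alone would not suffice.

```python
# Toy sketch with made-up numbers: effectiveness depends on both the
# agent and the environment, so the ordering of agents by effectiveness
# can differ across environments.

def effectiveness(agent, environment):
    """Score an agent's goal-achieving power in a given environment."""
    brain_cost = {"big_brain": 5, "small_brain": 1}[agent]
    food = {"rich": 10, "scarce": 3}[environment]
    # A bigger brain only helps if the environment can feed it.
    if food < brain_cost:
        return 0  # starves before it can use its smarts
    bonus = 6 if agent == "big_brain" else 0
    return food - brain_cost + bonus

# In a rich environment the big brain wins...
assert effectiveness("big_brain", "rich") > effectiveness("small_brain", "rich")
# ...but in a scarce one the ordering flips.
assert effectiveness("big_brain", "scarce") < effectiveness("small_brain", "scarce")
```

This is just a sketch of the claim in the thread, not a model of real intelligence; the point is only that the two-argument function can reverse an ordering that the one-argument function would fix.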

Comment author: orthonormal 02 June 2009 11:08:27PM 1 point [-]

What would justify making the equal case the default is if, over the space of all environments, the environment made no difference more often than not.

The environments we encounter are very homogeneous compared to the space of possibilities, enough so that it generally won't flip the ordering of (sufficiently different) minds by intelligence/optimization power. There's no plausible (pre-Singularity) environment in which chimps will suddenly have the technological advantage over humans, though they tie us in the case of global extinction.

Comment author: whpearson 03 June 2009 07:44:54AM 1 point [-]

Why pick chimps particularly? If there are any environments where humans don't survive and things with less brain power do (e.g. bacteria, beetles), then it indicates that it is not always good to have a big brain.

Comment author: JulianMorrison 03 June 2009 01:06:08AM 0 points [-]

Intelligence in the abstract consumes experience (a much lower-level concept than either atoms or environment) and attempts to compute "understanding" - a predictive model of the underlying rules. Even very high intelligence wouldn't necessarily make a perfect model, given misleading input.

BUT

Intelligence is still a strictly more-is-stronger thing in a predictable universe. Which is what I read you as meaning by "all things being equal". Even if there is a theoretical limit on intelligence, nothing that exists comes remotely close. Even if there are confounding inputs, more intelligence will compensate better. Even if there are adverse circumstances, more intelligence will be better at predicting ahead of time and laying plans. Surprised human: lion gets lunch. Forewarned human: lion becomes a rug.

Comment author: whpearson 03 June 2009 07:53:17AM *  0 points [-]

Intelligence is still a strictly more-is-stronger thing in a predictable universe.

Edit: By definition it is, but we have to be careful about what we say is obviously more intelligent. An animal with a larger, more complex brain might be said to be less intelligent than another if it can't get enough food to feed that brain, because it will not be around to use it and steer the future.

This is why all animals' brains aren't being expanded by evolution.

Comment author: JulianMorrison 03 June 2009 09:06:54AM 0 points [-]

Evolution makes trade-offs for resources. No good having a better brain you can't afford to fuel.

"Predictability" as I used the word means laws of physics that can be inferred from experience. (Versus no laws, or no usable evidence.) Other intelligences don't make the universe unpredictable.

Comment author: whpearson 03 June 2009 09:37:52AM 0 points [-]

"Predictability" as I used the word means laws of physics that can be inferred from experience. (Versus no laws, or no usable evidence.)

In order to make predictions about the world, it is not enough to know just the laws of physics; you also have to know the current state.

It is easier to infer the state of some non-intelligences than that of intelligences.