Comment author: RolfAndreassen 10 November 2012 07:58:03PM 0 points

Two objections to this: Firstly, you have to extrapolate from the chimp-to-human range into the superintelligence range, and the gradient may not be the same in the two. Second, it seems to me that the more intelligent humans are, the more "the other humans in my tribe" becomes the dominant part of your environment; this leads to increased returns to intelligence, and consequently you do get an increasing optimisation pressure.

Comment author: bsterrett 10 November 2012 09:12:07PM 1 point

To your first objection, I agree that "the gradient may not be the same in the two," when you are talking about chimp-to-human growth and human-to-superintelligence growth. But Eliezer's stated reason mostly applies to the areas near human intelligence, as I said. There is no consensus on how far the "steep" area extends, so I think your doubt is justified.

Your second objection also sounds reasonable to me, but I don't know enough about evolution to confidently endorse or dispute it. To me, this sounds similar to a point that Tim Tyler tries to make repeatedly in this sequence, but I haven't investigated his views thoroughly. I believe his stance is roughly this: since humans select mates using their brains, intelligence is essential for human survival, and sexual organisms want to pick fit mates, there has been a nontrivial feedback loop in which humans use their intelligence to get better at selecting intelligent mates. Do you endorse this? (I am not sure, myself.)

Comment author: RolfAndreassen 09 November 2012 05:27:36PM 3 points

Hum. Suppose that increasing the intelligence of an AI requires a series of insights, or searches through design space, or however you want to phrase it. The FOOM argument then seems to assume that each insight is of roughly equal difficulty, or at least that the difficulty does not increase as rapidly as does the intelligence. But it does not seem obvious that the jump from Arbitrary Intelligence Level 2 to 3 requires an insight of equal difficulty to the jump from 3 to 4. In fact, intuitively it seems that jumps from N to N+1 are easier than jumps from N+1 to N+2. (It is not immediately obvious to me what the human intelligence distribution implies about this. We don't even know, strictly speaking, that it's a bell curve, although it does seem to have a fat middle.)

If, to take a hypothetical example, each jump doubles in difficulty but gives a linear increase in intelligence, then the process won't FOOM at all - it'll level off at a horizontal asymptote, albeit perhaps at a level much above a genius human's. Even if the difficulty increases only linearly while granting a linear increase in intelligence, that keeps the time required for each jump constant. That doesn't rule out arbitrarily intelligent AIs, but it does mean intelligence only grows linearly with time, with no explosive take-off. (Depending on the time constant, it could even be uninteresting. If it takes the AI ten years to generate the insights to increase its IQ by one point, and it starts at 100, then we'll be waiting a while.)

Now, neither of those possibilities is especially likely. But if we take the increase in difficulty per level as x, and the increase in intelligence per level as y, and the time to the next insight as proportional to (x/y), then what reason do we have to believe that x < y? (Or, if they're roughly equal, that the constant of proportionality is small.)
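To make that trade-off concrete, here is a minimal numerical sketch in Python. The specific difficulty curves, starting values, and the assumption that the time to the next insight is proportional to its difficulty divided by the AI's current intelligence are illustrative choices of mine, not anything established:

```python
def total_time(difficulty, steps=60, start_iq=100.0, gain=1.0):
    """Total time to accumulate `steps` successive insights."""
    iq, elapsed = start_iq, 0.0
    for n in range(steps):
        elapsed += difficulty(n) / iq   # harder insights take longer; a smarter AI works faster
        iq += gain                      # each insight adds a fixed increment of intelligence
    return elapsed

# Difficulty doubles per insight while intelligence rises linearly:
# time per insight explodes, so the trajectory flattens toward a horizontal asymptote.
print(total_time(lambda n: 100.0 * 2.0 ** n))

# Difficulty and intelligence both rise linearly:
# time per insight stays roughly constant, giving steady but non-explosive growth.
print(total_time(lambda n: 100.0 + n))

# Difficulty stays constant while intelligence rises:
# each insight arrives a little faster than the last - the accelerating, FOOM-flavoured regime.
print(total_time(lambda n: 100.0))
```

Under these assumptions, only the regime where difficulty grows more slowly than intelligence produces accelerating returns; whether real insight difficulty behaves that way is exactly the open question.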

Comment author: bsterrett 09 November 2012 07:30:09PM 3 points

Eliezer's stated reason, as I understand it, is that evolution's work to increase the performance of the human brain did not suffer diminishing returns on the path from roughly chimpanzee brains to current human brains. Actually, there was probably a slightly greater-than-linear increase in human intelligence per unit of evolutionary time.

If we also assume that evolution did not have an increasing optimization pressure which could account for the nonlinear trend (which might be an assumption worth exploring; I believe Tim Tyler would deny this), then this suggests that the slope of 'intelligence per optimization pressure applied' is steep around the level of human intelligence, from the perspective of a process improving an intelligent entity. I am not sure this translates perfectly into your formulation using x's and y's, but I think it is a sufficiently illustrative answer to your question. It is not a very concrete reason to believe Eliezer's conclusion, but it is suggestive.

Comment author: drenan 05 November 2012 04:18:55AM 25 points

Took the survey.

Did anyone else find the IQ questions to be either boringly easy or impossibly hard? I don't think I answered anything that required more than 15s of thought.

Comment author: bsterrett 06 November 2012 09:07:13PM 5 points

I found that leaving a question and coming back to it was much more helpful than trying to focus on it. There were several questions that I made no progress on for a few minutes, but I could immediately solve them upon returning to them.

Comment author: faul_sname 05 November 2012 01:52:50AM 3 points

Keeping it selected and ctrl-c/cmd-c-ing it works.

Comment author: bsterrett 06 November 2012 09:01:40PM 1 point

To clarify: this involves selecting the hyperlink text with your mouse, but not releasing your mouse button, and then copying the text while it is still selected.

"Keeping it selected" is the default behavior of the browser which does not seem to be working.

Comment author: bsterrett 06 November 2012 02:56:17PM 22 points

I took the survey! Karma, please!

Never done an IQ test before. I thought it was fun! Now I want to take one of the legitimate ones.

Comment author: Eugine_Nier 02 November 2012 11:11:33PM -1 points

In general, I don't think it is a sufficient test for the meaningfulness of a property to say "I can imagine a universe which has/lacks this property, unlike our universe, therefore it is meaningful."

Um, mathematics.

Comment author: bsterrett 03 November 2012 08:07:31PM 1 point

I can't imagine a universe without mathematics, yet I think mathematics is meaningful. Doesn't this mean the test is not sufficient to determine the meaningfulness of a property?

Is there some established thinking on alternate universes without mathematics? My failure to imagine such universes is hardly conclusive.

Comment author: Will_Sawin 02 November 2012 07:22:06PM 1 point

I did not intend to explain how I arrived at this conclusion. I'm just stating my answer to the question.

Do you think the statement "If X is just, then do X" is wrong?

Comment author: bsterrett 03 November 2012 05:40:11PM 0 points

As army1987 notes, it is an instruction and not a statement. Considering that, I think "if X is just, then do X" is a good imperative to live by, assuming some good definition of justice. I don't think I would describe it as "wrong" or "correct" at this point.

Comment author: Will_Sawin 01 November 2012 05:22:39PM 4 points

The statement that the world is just is a lie. There exist possible worlds that are just - for instance, these worlds would not have children kidnapped and forced to kill - and ours is not one of them.

Thus, justice is a meaningful concept. Justice is a concept defined in terms of the world (pinned-down causal links) and also in terms of irreducibly normative statements. Normative statements do not refer to "the world". They are useful because we can logically deduce imperatives from them. "If X is just, then do X" is correct, that is:

Do the right thing.

Comment author: bsterrett 01 November 2012 09:32:18PM 2 points

I am not entirely sure how you arrived at the conclusion that justice is a meaningful concept. I am also unclear on how you know the statement "If X is just, then do X" is correct. Could you elaborate further?

In general, I don't think it is a sufficient test for the meaningfulness of a property to say "I can imagine a universe which has/lacks this property, unlike our universe, therefore it is meaningful."

Comment author: bsterrett 31 October 2012 05:45:54PM 2 points

I'll do 10.

What is the error-checking process? Will we fix any mistakes in our verdicts via an LW discussion after they have been gathered?

Comment author: bsterrett 26 October 2012 02:55:37PM 2 points

I recently read the wiki article on criticality accidents, and it seems relevant here. "A criticality accident, sometimes referred to as an excursion or a power excursion, is the unintentional assembly of a critical mass of a given fissile material, such as enriched uranium or plutonium, in an unprotected environment."

Assuming Eliezer's analysis is correct, we cannot afford even one of these in the domain of self-improving AI. Thankfully, it's harder to accidentally create a self-improving AI than it is to drop a brick in the wrong place at the wrong time.
