Daniel I. Lewis, as I said, lists can have structure even when that structure is not chosen by a person.
"Let's say, for the sake of argument, that you get sorted lists (forwards or backwards) more often than chance, and the rest of the time you get a random permutation."
Let's not say that, because it creates an artificial situation. No one would select randomly if we could assume that, yet random selection is done. In reality, lists that go badly when you select from the middle are more common than they would be among random permutations, so random beats middle.
If you put the right kind of constraints on the input, it's easy to find a nonrandom algorithm that beats random. But those same constraints can change the answer. In your case, part of the answer was the constraint that you added.
I was hoping for an answer to the real-world situation.
GreedyAlgorithm, yes, that's mostly why it's done. I'd add that it applies even when the source of the ordering is not a person. Measurement data can also show the kinds of patterns you'd get from a simple, fixed rule.
But I'd like to see it analyzed Eliezer's way.
How does the randomness tie in to acquired knowledge, and what is the superior non-random method making better use of that knowledge?
Using the true median as the pivot isn't it, because finding it generally takes longer to produce the same result.
How would you categorize the practice of randomly selecting the pivot element in a quicksort?
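For concreteness, here's a minimal sketch of what I mean (my own Python, not anything from the thread): a quicksort that picks its pivot at random, so no pre-existing order in the input is reliably bad for it.

```python
import random

def quicksort(items):
    """Quicksort with a randomly chosen pivot (illustrative sketch only)."""
    if len(items) <= 1:
        return items
    pivot = random.choice(items)
    # Partition around the pivot; elements equal to it are kept together.
    less = [x for x in items if x < pivot]
    equal = [x for x in items if x == pivot]
    greater = [x for x in items if x > pivot]
    return quicksort(less) + equal + quicksort(greater)
```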
Brian, if this definition is more useful, then why isn't that license to take over the term?
Carey, I didn't say it was a more useful definition. I said that Eliezer may feel that the thing being referred to is more useful. I feel that money is more useful than mud, but I don't call my money "mud."
More specifically, how can there be any argument on the basis of some canonical definition, when the consensus seems to be that we really don't know the answer yet?
I'm not arguing based on a canonical definition. I agree that we don't have a precise definition of intelligence, but we do have a very rough consensus on particular examples. That consensus rejects rocks, trees, and apple pies as not intelligent. It also seems to be rejecting paperclip maximizers and happy-face tilers.
It seems akin to arguing that aerodynamics isn't an appropriate basis for the definition of 'flight', just because a preconceived notion of flight includes the motion of the planets as well as that of the birds, even though the mechanisms turn out to be very different.
I've never heard anyone say a planet was flying, except maybe poetically. Replace "planets" with "balloons" and it'll get much closer to what I'm thinking.
To describe the universe well, you will have to distinguish these signatures from each other, and have separate names for "human intelligence", "evolution", "proteins", and "protons", because even if these things are related they are not at all the same.
Speaking of separate names, I think you shouldn't call this "steering the future" stuff "intelligence." It sounds very useful, but almost no one except you is referring to it when they say "intelligence." There's some overlap, and you may feel that what you are referring to is more useful than what they are referring to, but that doesn't give you license to take over the word.
I know you've written a bunch of articles justifying your definition. I read them. I also read the articles moaning that no one understands what you mean when you say "intelligence." I think it's because they use that word to mean something else. So maybe you should just use a different word.
In fact, I think you should look at generalized optimization as a mode of analysis, rather than a (non-fundamental) property. Say, "Let's analyze this in terms of optimization (rather than conserved quantities, economic cost/benefit, etc.)" not, "Let's measure its intelligence."
In one of your earlier posts, people were saying that your definitions are too broad and therefore useless. I agree with them about the broadness, but I think this is still a useful concept if it is taken to be one of many ways of looking at almost anything.
Cat Dancer, I think by "no alternative," he means the case of two girls.
Of course the mathematician could say something like "none are boys," but the point is whether or not the two-girls case gets special treatment. If you ask "is at least one a boy?" then "no" means two girls and "yes" means anything else.
If the mathematician is just volunteering information, it's not divided up that way. When she says "at least one is a boy," she might be turning down a chance to say "at least one is a girl," and that changes things.
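A quick simulation makes the difference concrete. The code and the "volunteering" model here are my own assumptions, not anything the problem itself specifies: conditioning on a "yes" answer to "is at least one a boy?" gives about 1/3 for two boys, while a mathematician who volunteers a fact about a randomly chosen child gives about 1/2.

```python
import random

trials = 100_000
answered_yes = [0, 0]   # [two boys, total] when asked "is at least one a boy?" and told yes
volunteered = [0, 0]    # [two boys, total] when a fact about a randomly chosen child is volunteered

for _ in range(trials):
    kids = [random.choice("BG"), random.choice("BG")]

    # Case 1: we ask "is at least one a boy?" and the answer is yes.
    if "B" in kids:
        answered_yes[1] += 1
        if kids == ["B", "B"]:
            answered_yes[0] += 1

    # Case 2 (assumed model): the parent mentions the sex of one child chosen at random,
    # so "at least one is a boy" is only said when that randomly chosen child is a boy.
    if random.choice(kids) == "B":
        volunteered[1] += 1
        if kids == ["B", "B"]:
            volunteered[0] += 1

print("P(two boys | answered yes):    ", answered_yes[0] / answered_yes[1])  # ~1/3
print("P(two boys | volunteered 'boy'):", volunteered[0] / volunteered[1])   # ~1/2
```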
At least, I think that's what he's saying. Most of probability seems as awkward to me as frequentism seems to Eliezer.
"How many times does a coin have to come up heads before you believe the coin is fixed?"
I think your LHC question is closer to, "How many times does a coin have to come up heads before you believe a tails would destroy the world?" Which, in my opinion, makes no sense.
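For the coin version, at least, the update is easy to write down. Here is a rough Bayesian sketch with a made-up prior, just to show how quickly heads-only evidence piles up; the point is that I see nothing analogous for "a tails would destroy the world."

```python
# Rough Bayesian update for "the coin is two-headed" vs. "the coin is fair".
# The prior is an arbitrary number I picked for illustration, not a claim
# about the LHC case.
PRIOR_FIXED = 1e-6

def posterior_fixed_after_heads(n_heads, prior=PRIOR_FIXED):
    # P(n heads | two-headed) = 1, P(n heads | fair) = 0.5 ** n_heads
    p_fixed = prior * 1.0
    p_fair = (1 - prior) * 0.5 ** n_heads
    return p_fixed / (p_fixed + p_fair)

for n in (10, 20, 30, 40):
    print(n, posterior_fixed_after_heads(n))
```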
I've never been on a transhumanist mailing list, but I would have said, "Being able to figure out what's right isn't the same as actually doing it. You can't just increase the one and assume it takes care of the other. Many people do things they know (or could figure out) are wrong."
It's the type of objection you'd have seen in the op-ed pages if you announced your project on CNN. I guess that makes me another stupid man saying the sun is shining. At first, I was surprised that it wasn't on the list of objections you encountered. But I guess it makes sense that transhumanists wouldn't hold up humans as a bad example.
When I read these stories you tell about your past thoughts, I'm struck by how different your experiences with ideas were from mine. Things you found obvious seem subtle to me. Things you discovered with a feeling of revelation seem pedestrian. Things you dismissed wholesale and now borrow a smidgen of seem like they've always been a significant part of my life.
Take, for example, the subject of this post: technological risks. I never really thought of "technology" as a single thing, to be judged good or bad as a whole, until after I had heard a great deal about particular cases, some good and some bad.
When I did encounter that question, it seemed clear that it was good because the sum total of our technology had greatly improved the life of the average person. It also seemed clear that this did not make every specific technology good.
I don't know about total extinction, but there was a period ending around the time I was born (I think we're about the same age) when people thought that they, their families, and their friends could very well be killed in a nuclear war. I remember someone telling me that he started saving for retirement when the Berlin Wall fell.
With that in mind, I wonder about the influence of our experiences with ideas. If two people agree that technology is good overall but specific technologies can be bad, will they tend to apply that idea differently if one was taught it as a child and the other discovered it in a flash of insight as an adult? That might be one reason I tend to agree with the principles you lay out but not the conclusions you reach.
GreedyAlgorithm,
That's actually the scenario I had in mind, and I think it's the most common. Usually, when someone does a sort, they do it with a general-purpose library function or utility.
I think most of those are actually implemented as a merge sort variant, which avoids quicksort's bad worst case, but I'm not clear on how that ties in to the use of information gained during the running of the program.
What I'm getting at is that the motivation for selecting randomly, and any advantage from switching to merge sort, don't seem to directly match any of the examples already given.
In his explanation and examples, Eliezer pointed to information gained while the algorithm is running. Choosing the best type of selection in a quicksort is based on foreknowledge of the data, with random selection seeming best when you have the least foreknowledge.
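To put that concretely (this is my own illustration, not Eliezer's): if you know the inputs are usually close to sorted, a deterministic rule like taking the middle element exploits that knowledge; if you know nothing about the inputs, the random choice is the one that doesn't bet on any particular structure.

```python
import random

def pivot_for_known_nearly_sorted(items):
    # Foreknowledge: inputs are usually close to sorted, so the middle
    # element is usually close to the true median.
    return items[len(items) // 2]

def pivot_for_unknown_input(items):
    # No foreknowledge: a random pivot makes no particular input
    # reliably pathological.
    return random.choice(items)
```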
Likewise, the difference between quicksort and other sorts that may be faster doesn't have an obvious connection to the type of information that would help you choose between different pivot-selection strategies in a quicksort.
I'm not looking for a defense of nonrandom methods. I'm looking for an analysis of random selection in quicksort in terms of the principles that Eliezer is using to support his conclusion.