All of Brian_Jaress2's Comments + Replies

GreedyAlgorithm,

"If your source of inputs is narrower than 'whatever people anywhere using my ubergeneral sort utility will input' then you may be able to do better."

That's actually the scenario I had in mind, and I think it's the most common. Usually, when someone does a sort, they do it with a general-purpose library function or utility.

I think most of those are actually implemented as a merge sort, which is usually faster than quicksort, but I'm not clear on how that ties in to the use of information gained during the running of the prog...
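
For concreteness, here is a minimal sketch of the merge sort pattern those library routines are generally built on (Python's built-in sort, for instance, is Timsort, a merge sort hybrid); this is an illustration, not any particular library's implementation:

```python
def merge_sort(items):
    """Top-down merge sort: O(n log n) comparisons on every input,
    with no data-dependent worst case to guard against."""
    if len(items) <= 1:
        return list(items)
    mid = len(items) // 2
    left = merge_sort(items[:mid])
    right = merge_sort(items[mid:])
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])   # one side is exhausted; append the rest
    merged.extend(right[j:])
    return merged

print(merge_sort([3, 1, 4, 1, 5, 9, 2, 6]))  # [1, 1, 2, 3, 4, 5, 6, 9]
```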

Daniel I. Lewis, as I said, lists can have structure even when that structure is not chosen by a person.

"Let's say, for the sake of argument, that you get sorted lists (forwards or backwards) more often than chance, and the rest of the time you get a random permutation."

Let's not say that, because it creates an artificial situation. No one would select randomly if we could assume that, yet random selection is done. In reality, lists that are bad for picking the pivot from the middle are more common than they would be by chance, so random beats middle.

If ...
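
A sketch of the effect being described, assuming a Lomuto-partition quicksort with a pluggable pivot rule. The first-element rule is used here because its bad input (an already-sorted list) is easy to construct; any fixed rule, middle included, has some input family that forces the same quadratic blow-up, while a random pivot has no such family:

```python
import random

def quicksort_comparisons(arr, choose_pivot):
    """Sort a copy of arr with Lomuto-partition quicksort, counting
    comparisons; the pivot rule is pluggable."""
    a = list(arr)
    comparisons = 0
    stack = [(0, len(a) - 1)]  # explicit stack avoids recursion limits
    while stack:
        lo, hi = stack.pop()
        if lo >= hi:
            continue
        p = choose_pivot(lo, hi)
        a[p], a[hi] = a[hi], a[p]          # move pivot to the end
        pivot, store = a[hi], lo
        for i in range(lo, hi):
            comparisons += 1
            if a[i] < pivot:
                a[i], a[store] = a[store], a[i]
                store += 1
        a[store], a[hi] = a[hi], a[store]  # pivot into its final place
        stack.append((lo, store - 1))
        stack.append((store + 1, hi))
    return comparisons

sorted_input = list(range(2000))
first = lambda lo, hi: lo                     # fixed rule
rand = lambda lo, hi: random.randint(lo, hi)  # random rule

print(quicksort_comparisons(sorted_input, first))  # 1,999,000 (quadratic)
print(quicksort_comparisons(sorted_input, rand))   # ~30,000 (n log n, varies)
```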

GreedyAlgorithm, yes that's mostly why it's done. I'd add that it applies even when the source of the ordering is not a person. Measurement data can also follow the type of patterns you'd get by following a simple, fixed rule.

But I'd like to see it analyzed Eliezer's way.

How does the randomness tie in to acquired knowledge, and what is the superior non-random method making better use of that knowledge?

Using the median isn't it, because that generally takes longer to produce the same result.

How would you categorize the practice of randomly selecting the pivot element in a quicksort?

Brian, if this definition is more useful, then why isn't that license to take over the term?

Carey, I didn't say it was a more useful definition. I said that Eliezer may feel that the thing being referred to is more useful. I feel that money is more useful than mud, but I don't call my money "mud."

More specifically, how can there be any argument on the basis of some canonical definition, while the consensus seems that we really don't know the answer yet?

I'm not arguing based on a canonical definition. I agree that we don't have a preci...

Kenny
It does not seem to me that 'intelligence' excludes ("rejects") paperclip maximizers and happy-face tilers. Does it seem to some that the limited goals of some (hypothetical) beings necessarily prevent them from being intelligent? Is this a failure of imagination, of seriously considering something that is smarter than humans but much more extremely focused in terms of its goals?
"To describe the universe well, you will have to distinguish these signatures from each other, and have separate names for 'human intelligence', 'evolution', 'proteins', and 'protons', because even if these things are related they are not at all the same."

Speaking of separate names, I think you shouldn't call this "steering the future" stuff "intelligence." It sounds very useful, but almost no one except you is referring to it when they say "intelligence." There's some overlap, and y...

Cat Dancer, I think by "no alternative," he means the case of two girls.

Of course the mathematician could say something like "none are boys," but the point is whether or not the two-girls case gets special treatment. If you ask "is at least one a boy?" then "no" means two girls and "yes" means anything else.

If the mathematician is just volunteering information, it's not divided up that way. When she says "at least one is a boy," she might be turning down a chance to say "at least one is a girl," and that changes things.

At least, I think that's what he's saying. Most of probability seems as awkward to me as frequentism seems to Eliezer.
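
A quick simulation of that division, under the assumption that a volunteering mathematician states the sex of a randomly chosen child (which is one way to model "she might have said girl instead"):

```python
import random

N = 200_000
yes_count = yes_two_boys = 0  # case 1: you asked "is at least one a boy?"
vol_count = vol_two_boys = 0  # case 2: she volunteered "at least one is a boy"

for _ in range(N):
    kids = (random.choice("BG"), random.choice("BG"))
    if "B" in kids:                 # she answers "yes" to your question
        yes_count += 1
        yes_two_boys += kids == ("B", "B")
    if random.choice(kids) == "B":  # she volunteers a random child's sex
        vol_count += 1
        vol_two_boys += kids == ("B", "B")

print(yes_two_boys / yes_count)  # ~1/3: only two girls got a "no"
print(vol_two_boys / vol_count)  # ~1/2: mixed pairs split their statements
```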

"How many times does a coin have to come up heads before you believe the coin is fixed?"

I think your LHC question is closer to, "How many times does a coin have to come up heads before you believe a tails would destroy the world?" Which, in my opinion, makes no sense.
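
For the quoted coin question, a back-of-the-envelope Bayes calculation; the one-in-a-million prior and reading "fixed" as "always lands heads" are both assumptions made up for illustration:

```python
from math import ceil, log2

prior_odds = 1e-6  # assumed prior odds that the coin is fixed
target = 0.99      # posterior belief we want to reach

# Each head multiplies the odds by P(H | fixed) / P(H | fair) = 1 / 0.5 = 2.
target_odds = target / (1 - target)
heads_needed = ceil(log2(target_odds / prior_odds))
print(heads_needed)  # 27 straight heads push the posterior past 99%
```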

I've never been on a transhumanist mailing list, but I would have said, "Being able to figure out what's right isn't the same as actually doing it. You can't just increase the one and assume it takes care of the other. Many people do things they know (or could figure out) are wrong."

It's the type of objection you'd have seen in the op-ed pages if you announced your project on CNN. I guess that makes me another stupid man saying the sun is shining. At first, I was surprised that it wasn't on the list of objections you encountered. But I guess it makes sense that transhumanists wouldn't hold up humans as a bad example.

When I read these stories you tell about your past thoughts, I'm struck by how different your experiences with ideas were. Things you found obvious seem subtle to me. Things you discovered with a feeling of revelation seem pedestrian. Things you dismissed wholesale and now borrow a smidgen of seem like they've always been a significant part of my life.

Take, for example, the subject of this post: technological risks. I never really thought of "technology" as a single thing, to be judged good or bad as a whole, until after I had heard a great d...

Eliezer, I'm starting to think you're obsessed with Caledonian.

It's pretty astonishing that you would censor him and then accuse him of misrepresenting you. Where are all these false claims by Caledonian about your past statements? I haven't seen them.

For what it's worth, the censored version of Caledonian's comment didn't persuade me.

Larry D'Anna: Thanks, I think I understand the Deduction Theorem now.

Okay, I still don't see why we had to pinpoint the flaw in your proof by pointing to a step in someone else's valid proof.

Larry D'Anna identified the step you were looking for, but he did it by trying to transform the proof of Löb's Theorem into a different proof that said what you were pretending it said.

I think, properly speaking, the flaw is pinpointed by saying that you were misusing the theorems, not that the mean old theorem had a step that wouldn't convert into what you wanted it to be.

I've been looking more at the textual proof you linked (the cart...

I think the error is that you didn't prove it was unprovable -- all provably unprovable statements are also provable, but unprovable statements aren't necessarily true.

In other words, I think what you get from the Deduction Theorem (which I've never seen before, so I may have it wrong) is Provable((Provable(C) -> C) -> C). I think if you want to "reach inside" that outer Provable and negate the provability of C, you have to introduce Provable(not Provable(C)).
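
For reference, the standard statement and the distinction being drawn, written with $\Box$ for "provable" (my rendering, not the post's):

```latex
% Löb's Theorem, writing $\Box C$ for $\mathrm{Provable}(C)$:
\[
\text{if } \vdash \Box C \to C \text{ then } \vdash C
\qquad\text{(internalized: } \vdash \Box(\Box C \to C) \to \Box C\text{)}
\]
% The claim above: the Deduction Theorem yields only
\[
\Box\bigl((\Box C \to C) \to C\bigr),
\]
% i.e. the implication is itself provable; it does not yield the bare
% $(\Box C \to C) \to C$, so the outer $\Box$ cannot simply be reached
% inside to negate the provability of $C$.
```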

Eliezer, please don't ban Caledonian.

He's not disrupting anything, and doesn't seem to be trying to.

He may describe your ideas in ways that you think are incorrect, but so what? You spend a lot of time describing ideas that you disagree with, and I'll bet the people who actually hold them often disagree with your description.

Caledonian almost always disagrees with you, but treats you no differently than other commenters treat each other. He certainly treats you better than you treat some of your targets. For example, I've never seen him write a little di...

Unlike most of the others who've commented so far, I actually would have a very different outlook on life if you did that to me.

But I'm not sure how much it would change my behavior. A lot of the things you listed -- what to eat, what to wear, when to get up -- are already not based on right and wrong, at least for me. I do believe in right and wrong, but I don't make them the basis of everything I do.

For the more extreme things, I think a lot of it is instinct and habit. If I saw a child on the train tracks, I'd probably pull them off no matter what you...

"On the other hand, it is really hard for me to visualize the proposition that there is no kind of mind substantially stronger than a human one. I have trouble believing that the human brain, which just barely suffices to run a technological civilization that can build a computer, is also the theoretical upper limit of effective intelligence."

I don't think visualization is a very good test of whether a proposition is true. I can visualize an asteroidal chocolate cake much more easily than an entire cake-free asteroid belt.

But what about other ways for your Singularity to be impossible?

That was interesting, but I think you misunderstand time as badly as you expect us to misunderstand non-time.

In regular time, the past no longer exists -- so there's no issue of whether it is changing or not -- and when we talk about the future changing, we're really referring to what is likely to happen in a future that doesn't exist yet.

A person living in a block universe could mistakenly think they have time by only perceiving the present. On the other hand, a person living in a timed universe could mistakenly think they live in a block by writing down their memories and expectations in a little diagram.

Eliezer, why doesn't the difficulty of creating this AGI count as a reason to think it won't happen soon?

You've said it's extremely, incredibly difficult. Don't the chances of it happening soon go down the harder it is?

Did Hofstadter explain the remark?

Maybe he felt that the difference between Einstein and a village idiot was larger than between a village idiot and a chimp. Chimps can be pretty clever.

Or, maybe he thought that the right end of the scale, where the line suddenly becomes dotted, should be the location of the rightmost point that represents something real. It's very conventional to switch from a solid to a dotted line to represent a switch from confirmed data to projections.

But I don't buy the idea of intelligence as a scalar value.

On average, if you eliminate twice as many hypotheses as I do from the same data, how much more data than you do I need to achieve the same results? Does it depend on how close we are to the theoretical maximum?

gwern

Well, think about it. If I can eliminate 1/2 the remaining hypotheses, and you just 1/4, then we're dealing with exponential processes here.

Let's suppose we get 1 bit a day. If we start with 4 hypotheses, then on day 1 I have 2 left, and you have 3; day 2, I have 1 left, and you have 2; on day 3, I blow up your planet just as you finally figure out the right hypothesis. If there are a million hypotheses, then I'll be able to solve it in something like 20 days, and you 49. If there are a billion, then 30 vs. 73; if a trillion, 40 vs. 97...

Yeah, we'll both solve the problem, but the difference can be significant.
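
The arithmetic behind those figures, as a sketch (each "day" is one observation that eliminates a fixed fraction of the surviving hypotheses):

```python
from math import ceil, log

def days_to_isolate(n_hypotheses, fraction_eliminated):
    """Days needed to shrink n_hypotheses down to a single survivor,
    if each day's data removes a fixed fraction of those remaining."""
    survive = 1 - fraction_eliminated
    return ceil(log(n_hypotheses) / -log(survive))

for n in (10**6, 10**9, 10**12):
    print(n, days_to_isolate(n, 1/2), days_to_isolate(n, 1/4))
# 1000000 20 49
# 1000000000 30 73
# 1000000000000 40 97
```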

@billswift: Emotion might drive every human action (or not). That's beside the point. If an emotion drives you into a dead end, there's something wrong with that emotion.

My point was that if someone tells you the truth and you don't believe them, it's not fair to say they've led you astray. Eliezer said he didn't "emotionally believe" a truth he was told, even though he knew it was true. I'm not sure what that means, but it sounds like a problem with Eliezer, involving his emotions, not a problem with what he was told.

When they taught me about the scientific method in high school, the last step was "go back to the beginning and repeat." There was also a lot about theories replacing other theories and then being replaced later, new technologies leading to new measurements, and new ideas leading to big debates.

I don't remember if they explicitly said, "You can do science right and still get the wrong answer," but it was very strongly (and logically) implied.

I don't know what you were taught, but I expect it was something similar.

All this "emotion...

neuromancer92
I understand the point you're raising, because it caught me for a while, but I think I also see the remaining downfall of science. It's not that science leads you to the wrong thing, but that it cannot lead you to the right one. You never know if your experiments actually brought you to the right conclusion - it is entirely possible to be utterly wrong, and completely scientific, for generations and centuries. Not only this, but you can be obviously wrong. We look at people trusting in spontaneous generation, or a spirit theory of disease, and mock them - rightfully. They took "reasonable" explanations of ideas, tested them as best they could, and ended up with unreasonable confidence in utterly illogical ideas. Science has no step in which you say "and is this idea logically reasonable", and that step is unattainable even if you add it.

Science offers two things - gradual improvement, and safety from being wrong with certainty. The first is a weak reward - there is no schedule to science, and by practicing it there's a reasonable chance that you'll go your entire life with major problems with your worldview. The second is hollow - you are defended from taking a wrong idea and saying "this is true" only inasmuch as science deprives you of any certainty. You are offered a qualifier to say, not a change in your ideas.

Maybe I'm doing it wrong, but when I score your many-worlds interpretation it fails your own four-part test.

  1. Anticipation vs curiosity: We already had the equations, so there's no new anticipation. At first it doesn't seem like a "curiosity stopper" because it leaves everyone curious about the Born probability thing, but that's because it doesn't say anything about that. On the parts where it does say something, it seems like a curiosity stopper.

After your posts on using complex numbers and mirrors, I was wondering, "Why complex number...
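
On the mirrors: a toy amplitude calculation in the spirit of those posts, assuming their convention that reflection multiplies an amplitude by i and transmission leaves it alone, with a 1/√2 normalization added at each half-silvered mirror:

```python
from math import sqrt

s = 1 / sqrt(2)
REFLECT, TRANSMIT = s * 1j, s * 1.0  # half-silvered mirror factors

# First half-mirror splits the photon's amplitude over two paths.
path_a = REFLECT
path_b = TRANSMIT

# Each path bounces off a full mirror (one more factor of i).
path_a *= 1j
path_b *= 1j

# Second half-mirror recombines; each detector sums two contributions.
detector_1 = path_a * TRANSMIT + path_b * REFLECT
detector_2 = path_a * REFLECT + path_b * TRANSMIT

print(abs(detector_1) ** 2)  # ~1.0 -- the amplitudes reinforce
print(abs(detector_2) ** 2)  # 0.0 -- the complex amplitudes cancel
```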

Science and Eliezer both agree that evidence is important, so let's collect some evidence on which one is more accurate.

I don't really follow a lot of what you've written on this, so maybe this isn't fair, but I'll put it out there anyway:

I have a hard time seeing much difference between you (Eliezer Yudkowsky) and the people you keep describing as wrong. They don't look beyond the surface, you look beyond it and see something that looks just like the surface (or the surface that's easiest to look at). They layer mysterious things on top of the theory to explain it, you layer mysterious things on top of physics to explain it. Their explanations all have fatal flaws, your...

Hopefully Anonymous, if you think a point should be addressed, make that point.

I say Eliezer has finally dealt with the zombie issue as it deserves.

It's a silly idea that invites convoluted discussion, which makes it look sophisticated and hard to refute.

I once saw a person from Korea discover, much to her surprise, that pennies are not red. She had been able to speak English for a while and could correctly identify a stop sign or blood as red, and she had seen plenty of pennies before discovering this.

In Korea they put the color of pennies and the color of blood in the same category and give that category a Korean name.

A1987dM
And in Hungarian they put the colour of stop signs and the colour of blood in different categories.

In arguing for the single box, Yudkowsky has made an assumption that I disagree with: at the very end, he changes the stakes and declares that your choice should still be the same.

My way of looking at it is similar to what Hendrik Boom has said. You have a choice between betting on Omega being right and betting on Omega being wrong.

A = Contents of box A

B = What may be in box B (if it isn't empty)

A is yours, in the sense that you can take it and do whatever you want with it. One thing you can do with A is pay it for a chance to win B if Omega is right. Y...
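
A minimal expected-value sketch of that bet framing; the dollar amounts and the 90% accuracy figure are assumptions for illustration only:

```python
def one_boxing_wins(a, b, p):
    """Compare the two bets: give up A to win B iff Omega is right
    (one-box), or keep A and get B only if Omega is wrong (two-box).
    p is your probability that Omega predicted you correctly."""
    ev_one_box = p * b
    ev_two_box = a + (1 - p) * b
    return ev_one_box > ev_two_box

print(one_boxing_wins(1_000, 1_000_000, 0.9))    # True: cheap bet on Omega
print(one_boxing_wins(900_000, 1_000_000, 0.9))  # False: stakes flip it
```

Raising A relative to B raises the accuracy you need before the bet is worth taking, which is why changing the stakes can change the answer.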

"There are some people who will, if you just tell them the Refrigerator Hypothesis, snort and say 'That's an untestable just-so story' and dismiss it out of hand; but if you start by telling them about the gaze-tracking experiment and then explain the evolutionary motivation, they will say, 'Huh, that might be right.'"

But do they actually think it's more likely to be true?

They didn't say it was impossible, they said it wasn't testable. Explain how to test it, and they don't say that. What's the problem?