Comment author: g 16 December 2008 10:03:28AM 1 point

Wei Dai, singleton-to-competition is perfectly possible, if the singleton decides it would like company.

In response to Failure By Analogy
Comment author: g 18 November 2008 09:50:13AM 0 points

Reasoning by analogy is at the heart of what has been called "the outside view" as opposed to "the inside view" (in the context of, e.g., trying to work out how long some task is going to take). Eliezer is on record as an advocate of the outside view. The key question, I think, is how deep the similarities you're appealing to really go. Unfortunately, that's often controversial.

(So: I agree with Robin's first comment here.)

In response to Whither OB?
Comment author: g 17 November 2008 11:10:40PM 0 points

I'd suggest:

1. Existing contributors keep posting at whatever frequency they're happy with (which hopefully would be above zero, but that's up to them).

2. Also, slowly scour the web for material that wouldn't be out of place on OB. When you find some, ask the author two or three questions. (a) May we re-post this on OB? (b) Would you like to write an article for OB? (c) [if appropriate] May we re-post some of your other existing material on OB?

3. If the posting rate drops greatly from what it is now, have more open threads. (One a week, on a regular schedule?) Be (cautiously) on the lookout for opportunities to say "Would you like to turn that into an OB post?".

I'd strongly *not* suggest:

4. Anything that would broaden the focus of OB much. (It already strays a little further from its notional core topic than would be my ideal.)

5. Voting.

6. Continuing Robin Hanson's quirk of deleting as many words from the title as is possible without rendering it completely unintelligible. (Or, sometimes, one more than that.) :-)

Those subjunctives in 1-3 of course assume that there are people willing to do that much work. I don't know whether there are, not least because I haven't seriously tried to estimate how much work it is.

Comment author: g 13 November 2008 10:53:06PM 0 points

Richard, I wasn't suggesting that there's anything wrong with your running a simulation, I just thought it was amusing in this particular context.

Comment author: g 13 November 2008 09:36:53PM 0 points

Anyone who evaluates the performance of an algorithm by testing it with random data (e.g., simulating these expert-combining algorithms with randomly-erring "experts") is ipso facto executing a randomized algorithm...
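
A minimal sketch of what that looks like, assuming a standard multiplicative-weights combiner and experts that err independently at random (the function names and parameters are my own illustration, not anything from the thread): the test harness has to draw random bits to generate the experts' errors, so the evaluation is itself a randomized procedure.

```python
import random

def weighted_majority(predictions, weights):
    """Combine binary expert predictions by weighted vote (deterministic)."""
    vote_for_1 = sum(w for p, w in zip(predictions, weights) if p == 1)
    return 1 if vote_for_1 >= sum(weights) / 2 else 0

def evaluate(n_experts=5, n_rounds=1000, error_rate=0.3, beta=0.5, seed=0):
    """Test the combiner against experts that err independently at random.

    Generating those random errors is exactly what makes this evaluation
    itself a randomized algorithm.
    """
    rng = random.Random(seed)
    weights = [1.0] * n_experts
    mistakes = 0
    for _ in range(n_rounds):
        truth = rng.randint(0, 1)
        preds = [truth if rng.random() > error_rate else 1 - truth
                 for _ in range(n_experts)]
        if weighted_majority(preds, weights) != truth:
            mistakes += 1
        # Standard multiplicative-weights update: penalize wrong experts.
        weights = [w * (beta if p != truth else 1.0)
                   for w, p in zip(weights, preds)]
    return mistakes

print(evaluate())  # the mistake count depends on the PRNG seed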

Comment author: g 13 November 2008 12:54:31AM 2 points

So, the randomized algorithm isn't *really* better than the unrandomized one, because getting a bad result from the unrandomized one only happens when your environment maliciously hands you a problem whose features match up just wrong with the non-random choices you make. All you need to do, then, is make those choices in a way that's tremendously unlikely to match up just wrong with anything the environment hands you, because your way of choosing doesn't contain the same sorts of patterns that the environment might inflict on you.

Except that the definition of "random", in practice, *is* something very like "generally lacking the sorts of patterns that the environment might inflict on you". When people implement "randomized" algorithms, they don't generally do it by introducing some quantum noise source into their system (unless there's a *real* adversary, as in cryptography), they do it with a pseudorandom number generator, which precisely *is* a deterministic thing designed to produce output that lacks the kinds of patterns we find in the environment.

So it doesn't seem to me that you've offered much argument here against "randomizing" algorithms as generally practised; that is, having them make choices in a way that we confidently expect not to match up pessimally with what the environment throws at us.

Or, less verbosely:

Indeed randomness can improve the worst-case scenario, if the worst-case environment is allowed to exploit "deterministic" moves but not "random" ones. What "random" means, in practice, is: the sort of thing that typical environments are not able to exploit. This is not cheating.
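
To make the "in practice" claim concrete, here is a minimal sketch; quicksort pivot selection is my choice of example, not one from the original post. Both pivot rules below are deterministic. The difference is that the seeded-PRNG rule's choices lack the patterns of the adversarial input, so they cannot match up pessimally with it:

```python
import random
import sys
import time

sys.setrecursionlimit(10_000)

def quicksort(xs, choose_pivot):
    """Quicksort whose cost depends entirely on the pivot rule."""
    if len(xs) <= 1:
        return xs
    pivot = choose_pivot(xs)
    left = [x for x in xs if x < pivot]
    mid = [x for x in xs if x == pivot]
    right = [x for x in xs if x > pivot]
    return quicksort(left, choose_pivot) + mid + quicksort(right, choose_pivot)

def timed(pivot_rule):
    start = time.perf_counter()
    out = quicksort(adversarial, pivot_rule)
    assert out == sorted(adversarial)
    return time.perf_counter() - start

adversarial = list(range(2000))  # already-sorted input: a patterned "environment"

# Deterministic rule whose pattern matches up pessimally with sorted input:
# every partition is maximally unbalanced, giving quadratic behavior.
print(timed(lambda xs: xs[0]))

# "Randomized" as actually practised: a seeded PRNG, i.e. a deterministic
# process designed to lack the patterns the environment might inflict.
rng = random.Random(42)
print(timed(lambda xs: rng.choice(xs)))
```

Note that seeding the generator makes even the "randomized" run exactly reproducible; what does the work is not genuine unpredictability but the absence of exploitable pattern.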

Comment author: g 02 November 2008 01:06:03AM 0 points

nazgulnarsil, just because you wouldn't *have* to call it a belief doesn't mean it wouldn't *be* one; I believe in the Atlantic Ocean even though I wouldn't usually say so in those words.

It was rather tiresome the way that Lanier answered so many things with (I paraphrase here) "ha ha, you guys are so hilariously, stupidly naive" without actually offering any justification. (Apparently because the idea that you should have justification for your beliefs, or that truth is what matters, is so terribly terribly out of date.) And his central argument, if you can call it that, seems to amount to "it's pragmatically better to reject strong AI, because I think people who have believed in it have written bad software and are likely to continue doing so". Lanier shows many signs of being a smart guy, but ugh.

In response to Aiming at the Target
Comment author: g 26 October 2008 07:49:16PM 0 points

Vladimir, if I understand both you and Eliezer correctly, you're saying that Eliezer is saying not "intelligence is reality-steering ability" but "intelligence is reality-steering ability modulo available resources". That makes good sense, but that definition is only usable in so far as you have some separate way of estimating an agent's available resources, and comparing the utility of what might be very different sets of available resources. (Compare a nascent superintelligent AI, with no ability to influence the world directly other than by communicating with people, with someone carrying a whole lot of powerful weapons. Who has the better available resources? Depends on context -- and on the intelligence of the two.) Eliezer, I think, is proposing a way of evaluating the "intelligence" of an agent about which we know very little, including (perhaps) very little about what resources it has.

Put differently: I think Eliezer's given a definition of "intelligence" that could equally be given as a definition of "power", and I suspect that in practice using it to evaluate intelligence involves applying some *other* notion of what counts as intelligence and what counts as something else. (E.g., we've already decided that how much money you have, or how many nuclear warheads you have at your command, don't count as "intelligence".)

In response to Aiming at the Target
Comment author: g 26 October 2008 07:12:39PM 1 point

How do you avoid conflating intelligence with power? (Or do you, in fact, think that the two are best regarded as different facets of the same thing?) I'd have more ability to steer reality into regions I like if I were cleverer -- but also if I were dramatically richer or better-connected.

Comment author: g 17 October 2008 08:12:01AM 8 points

PK, I thought Eliezer's post made at least one point pretty well: If you disagree with some position held by otherwise credible people, try to understand it from their perspective by presenting it as favourably as you can. His worked example of capitalism might be helpful to people who are otherwise inclined to think that unrestrained capitalism is obviously bad and that those who advocate it do so only because they want to advance their own interests at the expense of others less fortunate.

I agree that he's probably violating his own advice when he implies that capitalism amounts to treating "finance as ... an ultimate end".
