Comment author: Douglas_Knight3 12 December 2008 03:04:04AM 0 points [-]

Most of the variety of Eliezer's output is useful to some audience, but there's a serious problem of getting the right people to the right documents.

Comment author: Douglas_Knight3 11 December 2008 06:20:23AM 0 points [-]

Phil, I think that's how logic (or math) normally works. You make progress on logic problems by using logic, but understanding another's solution usually feels completely different to me, completely binary.

Also, it's hard to say that your unconscious wasn't working on it. In particular, I don't know if communicating logic to me is as binary as it feels, whether I go through a search of complete dead ends, or whether intermediate progress is made but not reported.

Comment author: Douglas_Knight3 08 December 2008 01:21:18AM 0 points [-]

Jed, serial speed limits constraining Intel makes sense, and it's about the only theory I've heard that does. But now that we have moved to parallel machines, it seems to me that this theory predicts either that Moore's law falls apart, or that parallel software makes it possible to throw lots of money at the problem and speed it up.

You don't have to choose one or the other, but it seems to me that you have to raise your error bars. There's an implausibly small window for the quality of parallel software to rise just fast enough to make Moore's law continue, if this is the key bottleneck.
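The serial-bottleneck point can be made quantitative with Amdahl's law, the standard formalization (my framing, not anything from the thread): if a fraction of the workload is inherently serial, no amount of money spent on extra processors can push speedup past a fixed ceiling.

```python
def amdahl_speedup(serial_fraction, n_processors):
    """Upper bound on speedup when a fixed fraction of the work is serial."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / n_processors)

# Even a 5% serial fraction caps speedup at 20x, however many
# processors you buy -- the "throw money at it" path saturates.
for n in (10, 100, 1000):
    print(n, round(amdahl_speedup(0.05, n), 2))
```

This is why the window is so narrow: for Moore's law to continue on parallel hardware, the serial fraction of typical software would have to shrink at just the right rate.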

Comment author: Douglas_Knight3 05 December 2008 02:28:49AM 0 points [-]

Didn't Robin say in another thread that the rule is that only stars are allowed to be bold? Can anyone find that line?

Comment author: Douglas_Knight3 02 December 2008 08:13:25AM 0 points [-]

James Andrix: In fact that sounds like the EURISKO.

Could you elaborate? My understanding is that Eurisko never gave up, but Lenat got bored of babysitting it.

Comment author: Douglas_Knight3 01 December 2008 06:34:01AM 0 points [-]

One man's modus ponens is another's modus tollens.

I don't see that the stampede is consistent with a lack of much use of buffalo. Stampedes are only inefficient if they have great variance. This might explain the conjunction of the stories of inefficient stampedes and efficient use of individual buffalo.

One theory is that farmers displace hunter-gatherers because HG have high variance yields, while farmers don't. That still requires explanation of why HG don't displace farmers in booms.
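A toy illustration of the variance argument (the numbers are entirely hypothetical, mine, not from any source): two strategies with the same mean yield but different variance have very different survival odds once there is a subsistence floor that must be cleared every year.

```python
import random

random.seed(0)

SUBSISTENCE = 50   # hypothetical survival floor per year
YEARS = 20
TRIALS = 10_000

def survival_rate(yield_fn):
    """Fraction of simulated lifetimes in which every year clears the floor."""
    ok = 0
    for _ in range(TRIALS):
        if all(yield_fn() >= SUBSISTENCE for _ in range(YEARS)):
            ok += 1
    return ok / TRIALS

farmer = lambda: random.gauss(100, 10)   # low-variance yield
hunter = lambda: random.gauss(100, 40)   # high-variance yield, same mean

print(survival_rate(farmer), survival_rate(hunter))
```

With these numbers the low-variance strategy almost never starves while the high-variance one usually does at some point, which is the displacement mechanism in busts; the unanswered question in the comment is the symmetric one, what happens in booms.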

Height data from the pre-Columbian Great Plains would give an easy check of your source's claim that they were on the margins of subsistence. But even if true, that only tells us that farmers displaced HG, which we know happens. It doesn't address the question of what HG population could exist.

In response to Singletons Rule OK
Comment author: Douglas_Knight3 01 December 2008 03:00:41AM 0 points [-]

This source looks more authoritative to me. Moreover, it contains data on what I think is the key figure: miles per person. That generally trends up, from 9k in 1994 to a peak of 10k in 2005. I don't see any abrupt change in the trend. I'm rather surprised.

Comment author: Douglas_Knight3 25 November 2008 06:41:38AM 0 points [-]

Richard Hollerith, of the evidence you mention, the steadiness seems the best to me. But, as michael vassar worries, the data is poor quality and being read by people who want to tell a particular story.

Can you point to actual calorie-counting?

In response to Surprised by Brains
Comment author: Douglas_Knight3 24 November 2008 07:03:50AM 1 point [-]

why would a non-friendly AI not use those innovations to trade, instead of war?

The comparative advantage analysis ignores the opportunity cost of not killing and seizing property. Between humans, the usual reason it's not worth killing is that it destroys human capital, usually the most valuable possession. But an AI or an emulation might be better off seizing all CPU time than trading with others.

Once the limiting resource is not the number of hours in a day, the situation is very different. Trade might still make sense, but it might not.
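The opportunity-cost point reduces to a simple payoff comparison (illustrative numbers, my own, not from the comment): the stronger party trades only when gains from trade beat what seizure nets after subtracting what the conflict destroys.

```python
def prefers_trade(gains_from_trade, victim_assets,
                  destroyed_in_conflict, conflict_cost):
    """True if trading beats killing-and-seizing for the stronger party."""
    seize_payoff = victim_assets - destroyed_in_conflict - conflict_cost
    return gains_from_trade >= seize_payoff

# Between humans: the victim's human capital is destroyed by killing,
# so seizing nets little and trade wins.
print(prefers_trade(gains_from_trade=10, victim_assets=100,
                    destroyed_in_conflict=90, conflict_cost=5))   # True

# Between AIs or emulations: seized CPU time survives the conflict
# intact, so seizure can dominate trade.
print(prefers_trade(gains_from_trade=10, victim_assets=100,
                    destroyed_in_conflict=0, conflict_cost=5))    # False
```

The comparative-advantage argument for trade only goes through when the first case holds.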

Comment author: Douglas_Knight3 13 November 2008 02:34:40PM 0 points [-]

kyb, see the discussion of quicksort on the other thread. Randomness is used to protect against worst-case behavior, but not because we're afraid of intelligent adversaries. It's because worst-case behavior for quicksort happens a lot. If we had a good description of naturally occurring lists, we could design a deterministic pivot algorithm, but we don't. We only have the observation that simple guess-the-median algorithms perform badly on real data. It's not terribly surprising that human-built lists resonate with human-designed pivot algorithms; but the opposite scenario, where the simplex method works well in practice, is not surprising either.
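The quicksort point in code (a standard sketch, not any specific algorithm from the thread): a deterministic first-element pivot degrades to roughly n²/2 comparisons on already-sorted input, which is exactly the kind of list that "happens a lot" in real data, while a random pivot makes that blowup unlikely for any fixed input.

```python
import random

def quicksort(xs, randomized=True):
    """Quicksort that also counts comparisons, to expose the worst case."""
    comparisons = 0

    def sort(lst):
        nonlocal comparisons
        if len(lst) <= 1:
            return lst
        # Deterministic first-element pivot vs. random pivot.
        pivot = random.choice(lst) if randomized else lst[0]
        comparisons += len(lst)  # one pass to partition around the pivot
        less = [x for x in lst if x < pivot]
        equal = [x for x in lst if x == pivot]
        greater = [x for x in lst if x > pivot]
        return sort(less) + equal + sort(greater)

    return sort(xs), comparisons

data = list(range(500))  # already sorted: common in practice
_, det = quicksort(data, randomized=False)
_, rnd = quicksort(data, randomized=True)
print(det, rnd)  # deterministic pivot ~n^2/2 comparisons, random ~n log n
```

No adversary is needed: sorted and nearly-sorted lists are everywhere, and the deterministic pivot resonates with them in the bad direction.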
