Wei_Dai comments on [video] Robin Hanson: Uploads Economics 101 - Less Wrong

6 Post author: mapnoterritory 05 August 2012 09:00PM


Comment author: GLaDOS 07 August 2012 05:51:06AM *  0 points [-]

In his scenario of a boss running at 21x the speed of the workers, why isn't the whole team being run at the higher speed? Does anyone understand his reasoning here?

How fast you run these employees depends on the economics of your industry. I think the idea is that coordination failure is expensive, so if running bosses faster than workers avoids such failures, it is more justifiable to speed up bosses than nearly any other kind of worker. The value of good management in a large company is much higher than the productivity boost any one low-level worker could achieve. He touches on this when he notes that it is vital that the most competent people be as high up the chain as possible.

Comment author: Wei_Dai 07 August 2012 05:52:57PM *  1 point [-]

Sorry, I think I didn't explain well enough why it doesn't make sense to me, so let me try again. In his example there are 256 workers and 64 line bosses running at 1x, and a CEO running at 21x. Why not instead have 16 workers, 4 line bosses, 1 CEO, all running at 16x, which would do the same amount of work in the same amount of time? If we assume that 21x is the maximum feasible emulation speed, it doesn't seem plausible that slowing down the workers to 1x saves enough money (compared to running them at 16x) to make up for increasing the memory requirement by 16 times.
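The comparison above can be sketched as a toy calculation. The cost parameters here are hypothetical (one unit of memory per running mind, work counted in subjective worker-hours), purely to make the memory-vs-speed tradeoff concrete:

```python
# Toy comparison of the two organizational configurations discussed above.
# Assumes each running mind needs its own copy of its state in memory,
# and that "work" is just subjective worker-hours per wall-clock hour.

def org_cost(workers, bosses, ceo_speed, worker_speed, mem_per_mind=1.0):
    """Return (work throughput, memory footprint) for one configuration.
    The CEO's speed affects oversight quality, not the counted output."""
    minds = workers + bosses + 1          # +1 for the CEO
    work = workers * worker_speed          # subjective worker-hours per hour
    memory = minds * mem_per_mind          # state for every running mind
    return work, memory

# Robin's scenario: 256 workers and 64 line bosses at 1x, CEO at 21x
work_a, mem_a = org_cost(workers=256, bosses=64, ceo_speed=21, worker_speed=1)
# Alternative: 16 workers, 4 line bosses, 1 CEO, all at 16x
work_b, mem_b = org_cost(workers=16, bosses=4, ceo_speed=16, worker_speed=16)

print(work_a, mem_a)  # 256 worker-hours/hour, 321 minds' worth of memory
print(work_b, mem_b)  # 256 worker-hours/hour, 21 minds' worth of memory
```

Both configurations produce the same 256 subjective worker-hours per hour, but Robin's keeps roughly 15 times as many minds resident in memory, which is the puzzle.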

Comment author: Xachariah 10 August 2012 12:03:56PM 0 points [-]

Theoretically you'd be running each of the 256 workers at 1,000,000x speed already. The boss goes into 21,000,000x speed, but has to pay a non-linear cost for that so you can only have one person at that speed. It would require a very particular price/speed discrimination structure to make that viable though.

The other option is that you're in a job that needs 256 different skill sets and we haven't learned how to swap out parts of people's personality yet. E.g., you're translating a book into 256 languages and each person knows only one language.

Although neither scenario strikes me as particularly likely.

Comment author: GLaDOS 08 August 2012 06:46:55AM *  0 points [-]

In his example there are 256 workers and 64 line bosses running at 1x, and a CEO running at 21x. Why not instead have 16 workers, 4 line bosses, 1 CEO, all running at 16x, which would do the same amount of work in the same amount of time?

Evidence suggests that coordination is hard.

That 16 workers running at 16x overseen by 4 line bosses and 1 CEO will suffer coordination failures that could be avoided by running 256 workers at 1x with one CEO at 21x seems plausible, and even likely, if you assume these are human-like minds.

Comment author: Wei_Dai 08 August 2012 08:25:35AM 0 points [-]

That 16 workers running at 16x overseen by 4 line bosses and 1 CEO will suffer coordination failures that could be avoided by running 256 workers at 1x with one CEO at 21x seems plausible, and even likely, if you assume these are human-like minds.

Robin's scenario is 256 workers and 64 line bosses running at 1x, plus one CEO running at 21x who directly oversees the line bosses, not the workers (you can check for yourself here). This offers no coordination advantage over having 16 workers, 4 line bosses and one CEO all running at 16x, as far as I can tell.

Comment author: GLaDOS 08 August 2012 09:32:49AM 0 points [-]

Sorry I misremembered the example, thank you for the correction.

Comment author: Nornagest 08 August 2012 08:15:44AM *  0 points [-]

I'm not sure about that. What I've seen of the management literature suggests that the complexity of coordination and oversight problems is strongly nonlinear in the number of workers overseen, while clocking faster would produce only linear improvements. It might still make sense to run the CEO at a faster clockspeed, since that role has to deal with additional coordination problems that aren't entirely under the company's control, but this line of thought suggests to me that smaller numbers of more individually productive workers would be more efficient overall than maximizing the workforce size.

Comment author: CarlShulman 08 August 2012 01:32:32AM *  0 points [-]

Robin may have been assuming abundant memory and scarce CPU time? I agree though that unless memory costs are very low this is a problem in the examples.

Comment author: Wei_Dai 08 August 2012 03:22:48AM 0 points [-]

Robin may have been assuming abundant memory and scarce CPU time?

He's not saving on CPU time (i.e., total number of instructions executed), but substituting more, slower processors for fewer, faster processors, and also using more memory. We don't see a lot of this today. For example, render farms and data centers all use essentially the fastest CPUs available. Some operations might back off a few notches from the bleeding edge to save money, but the gap isn't even close to 2x, much less 21x. My earlier "doesn't seem plausible" may be too strong, but I don't understand why Robin seems to be predicting this as the most likely scenario. If he has some specific reasons why the economics will likely work out this way, I'd very much like to see them.

Comment author: gwern 08 August 2012 04:58:04PM 1 point [-]

We don't see a lot of this today...it's not even close to 2x much less 21x.

We see plenty of this today. Every processor you use with multiple slower cores rather than a single core screaming at 4 GHz is making the slow-parallel vs fast-serial tradeoff. Processor migration and power-saving modes are other examples where the tradeoff is made dynamically. ARM processors are hugely abundant in embedded and mobile spaces, and the ARM design is an example of trading off CPU time for other things like reduced transistor count or (especially) power consumption. ARM and Atom chips are making inroads into datacenters because power consumption & cooling are becoming such issues, and we can expect parallelisation to continue for power saving.
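The power side of this tradeoff can be sketched with the usual approximation that dynamic CPU power scales roughly with the cube of clock frequency (P ~ C·V²·f, with supply voltage scaled roughly in proportion to f). The cubic exponent is a rule-of-thumb assumption, not a measured constant, and real chips deviate from it:

```python
# Sketch of why many slow cores can beat one fast core on power,
# assuming dynamic power ~ f^3 (voltage scaled with frequency) and
# perfect parallelism across cores. Both are idealizations.

def dynamic_power(freq_ghz, k=1.0):
    """Approximate dynamic power draw at a given clock frequency."""
    return k * freq_ghz ** 3

# One core at 4 GHz vs. four cores at 1 GHz: same aggregate
# instruction throughput, very different power draw.
serial = dynamic_power(4.0)          # 64 units
parallel = 4 * dynamic_power(1.0)    # 4 units
print(serial / parallel)             # 16x power savings for the parallel design
```

Under these assumptions the parallel design executes the same total instructions per second at a sixteenth of the power, which is the economic pressure behind the datacenter trends described above.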

Comment author: Wei_Dai 10 August 2012 07:30:00PM 1 point [-]

Hmm, apparently my knowledge of server hardware was a bit outdated. ARM processors being used by data centers are running at about 1.5 GHz, and it looks like extreme overclocking can push x86 processors up to 8 GHz, which gives a factor of about 5x. So probably there will be some significant difference between the fastest and slowest uploads, and 21x may not be totally implausible.

Comment author: ema 08 August 2012 07:31:40AM 0 points [-]

One benefit of running at a lower speed is that you can interact with things farther away from you while it still seems instantaneous. Although I have no idea why that would be more important for the workers than for the boss.