The following is intended as 1) a request for specific criticisms regarding whether time invested in this project is worthwhile, and 2) pending a favorable answer, a request for further involvement from qualified individuals. It is not intended as a random piece of interesting pop-sci, despite the subject matter, but as a volunteer opportunity.

Server Sky is an engineering proposal to place thousands (eventually millions) of 100-micron-thin satellites into medium Earth orbit in the near term. It is being put forth by Keith Lofstrom, the inventor of the Launch Loop.

Abstract from the 2009 paper:

It is easier to move bits than atoms or energy. Server-sats are ultralight disks of silicon that convert sunlight into computation and communications. Each is powered by a large solar cell, propelled and steered by light pressure, networked and located by microwaves, and cooled by black-body radiation. Arrays of thousands of server-sats form highly redundant computation and database servers, as well as phased array antennas to reach thousands of transceivers on the ground.

First generation server-sats are 20 centimeters across (about 8 inches), 0.1 millimeters (100 microns) thick, and weigh 7 grams. They can be mass produced with off-the-shelf semiconductor technologies. Gallium arsenide radio chips provide intra-array, inter-array, and ground communication, as well as precise location information. Server-sats are launched stacked by the thousands in solid cylinders, shrouded and vibration-isolated inside a traditional satellite bus.


Links:

  • Papers and Presentations
  • Slide Show
  • Wiki Main Page
  • Help Wanted
  • Mailing List

Some mildly negative evidence to start with: I have already had a satellite scientist tell me that this seems unlikely to work. The obstacles he stressed as significant were avoiding space debris and Kessler syndrome, radio communications difficulties (especially uplink), and the need for precise synchronization. He did not seem to have studied the proposal closely, but this at least tells us to be careful about where we set our priors.

On the other hand, it appears Keith has given these problems a lot of thought already, and solutions can probably be worked out. The thinsats would have optical thrusters (small solar sails) and would thus be able to move themselves and each other around; defective ones could be collected for disposal without mounting an expensive retrieval mission, and the thrusters would also help avoid collisions in the first place. Furthermore, the zone chosen (the m288 orbit) is relatively unused, so collisions with other satellites are unlikely. The satellites would also have powerful radar capabilities, which should make space junk easier to detect and eliminate.

For the communications problem, the idea is to use three-dimensional phased arrays of thinsats -- basically a large block of satellites working in unison to generate a specific signal, behaving as if they were a single much larger antenna. This is tricky and requires precision timing and exact distance information. The array's physical configuration will need to be randomized (or perhaps arranged according to an optimized pattern) to prevent grating lobes, an interference problem common to regularly spaced phased arrays. The thinsats would link with GPS and each other by radio on multiple bands to achieve "micron-precision thinsat location and orientation within the array".
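
To make the grating-lobe point concrete, here is a small numerical sketch (mine, not from the proposal; the 10 GHz carrier, 64 elements, and 50-wavelength spacing are purely illustrative). It compares a regularly spaced line of emitters, whose main beam repeats at every angle where the element phases realign, against the same line with randomized positions:

```python
import numpy as np

# Illustrative parameters -- none of these come from the Server Sky spec
C = 3e8                  # speed of light, m/s
FREQ = 10e9              # assumed 10 GHz carrier
LAM = C / FREQ           # wavelength, ~3 cm
K = 2 * np.pi / LAM      # wavenumber

N_ELEMENTS = 64
SPACING = 50 * LAM       # elements many wavelengths apart, as thinsats would be

rng = np.random.default_rng(0)
x_regular = np.arange(N_ELEMENTS) * SPACING
x_random = x_regular + rng.uniform(-SPACING / 2, SPACING / 2, N_ELEMENTS)

angles = np.linspace(-0.02, 0.02, 4001)   # radians off boresight

def array_factor(positions, angles):
    # Coherent sum of unit phasors from each element toward each angle
    phase = K * np.outer(np.sin(angles), positions)
    return np.abs(np.exp(1j * phase).sum(axis=1)) / len(positions)

af_reg = array_factor(x_regular, angles)
af_rnd = array_factor(x_random, angles)

# A regular grid repeats its main lobe at sin(theta) = m * LAM / SPACING;
# random jitter smears those grating lobes into a low sidelobe floor.
off_axis = np.abs(angles) > 1e-3
print("regular grid peak sidelobe:", af_reg[off_axis].max())   # ~1.0 (grating lobe)
print("jittered peak sidelobe:   ", af_rnd[off_axis].max())    # ~0.15
```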

According to the wiki, the most likely technical show-stopper (which makes sense given the fact that m288 is outside of the inner Van Allen belt) is radiation damage. Proposed fixes include periodic annealing (heating the circuit with a heating element) to repair the damage, and the use of radiation-resistant materials for circuitry.

Has anyone else here researched this idea, or have relevant knowledge? It seems like a great potential source of computing power for AI research, mind uploads, and so forth, but also for all those mundane, highly lucrative near term demands like web hosting and distributed business infrastructures.

From an altruistic standpoint, this kind of system could reduce poverty and increase equitable distribution of computing resources. It could also make solving hard scientific problems like aging and cryopreservation easier, and pave the road to solar power satellites. As it scales, it should also create demand (as well as available funding and processing power) for Launch Loop construction, or some other similarly low-cost form of space travel.

The value of information as to whether it can work therefore appears to be extremely high, something I think is crucial for a rationalist project. If it can work, the value of taking productive action (leadership, getting it funded, working out the problems, etc.) should be correspondingly high as well.


Update: Keith Lofstrom has responded on the wiki to the questions raised by the satellite scientist.

Note: Not all aspects of the project have complete descriptions yet, but there are answers to a lot of questions in the wiki.

Here is a summary list of questions raised and answers so far:

  • How does this account for Moore's Law? (kilobug)
    In his reply to the comments on Brin's post, Keith Lofstrom mentions using obsolete sats as ballast for much thinner sats that would be added to the arrays as the manufacturing process improves. Obsolete sats would not stay in use for long.
  • What about ping time limits? (kilobug)
    Ping times are going to be limited (70 ms or so), and worse than you can theoretically get with a fat pipe (42 ms), but still much better than you get with GEO (250+ ms). This is bad for high-frequency trading, but fine for (parallelizable) number crunching and most other practical purposes.
  • What kind of power consumption? Doesn't it cost more to launch than you save? (Vanvier)
    It takes roughly 2 months for a 3 gram thinsat to pay for the launch energy if it gets 4 watts, assuming 32% fuel manufacturing efficiency (a back-of-the-envelope check appears after this list). Blackbody cooling is another benefit.
  • Bits being flipped by cosmic radiation is a problem on earth, how can it be solved in space? (Vanvier)
    Flash memory is acknowledged to be the most radiation sensitive component of the satellite. The solution would involve extensive error correction software and caching on multiple satellites.
  • Periodic annealing tends to cause short circuits. Wouldn't this result in very short lifetimes? (Vanvier)
    Circuits will be manufactured as two-dimensional planes, which don't short as easily. Another significant engineering challenge: thermal properties of the glass will need to be matched with the silicon and wires (for example, slotted wiring with silicon dioxide between the gaps) to prevent circuit damage. Per Vanvier, it may be less expensive to replace silicon with other materials for this purpose.
  • What are the specific benefits of putting servers in space? (ZankerH)
    Efficient power/cooling, increased communications, overall scalability, relative lack of environmental impact.
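
As a sanity check on the launch-energy answer above: only the 3 gram mass, 4 watts, and 32% efficiency come from the wiki. The fuel energy per kilogram of payload below is an assumed round number chosen to make the arithmetic explicit, not a published figure.

```python
# Back-of-the-envelope launch-energy payback (a sketch, not the wiki's model)
THINSAT_MASS_KG = 0.003      # 3 g thinsat (from the wiki)
POWER_W = 4.0                # electrical power from sunlight (from the wiki)
FUEL_MFG_EFFICIENCY = 0.32   # electricity-to-fuel efficiency (from the wiki)
FUEL_ENERGY_PER_KG = 2.2e9   # J of rocket fuel per kg of payload (assumption)

electricity_j = THINSAT_MASS_KG * FUEL_ENERGY_PER_KG / FUEL_MFG_EFFICIENCY
payback_days = electricity_j / POWER_W / 86400
print(f"payback: {payback_days:.0f} days (~{payback_days / 30:.1f} months)")
# -> roughly 60 days, i.e. about 2 months, consistent with the wiki's claim
```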

Yet to be answered:

  • Is the amount of speculative tech too high? E.g. if future kinds of RAM are needed, costs may be higher. (Vanvier)
  • Is it easier to replace silicon with something else than find ways to make the rest of the sat match thermal expansion of silicon? (Vanvier)
  • Can we get more data on economics/business plan? (Vanvier)
  • Solar sails have been known to stick together. Is this a problem for thinsats, which are shipped stuck together? (Vanvier)
  • Do most interesting processes bottleneck on communication efficiency? (skelterpot)
  • What decreases in cost might we see with increased manufacturing yield? (skelterpot)

Insightful comments:

  • Launch energy vs energy collection (answer above is more specific, but this was a commendable quick-check).  (tgb)
  • ECC RAM is standard technology used in server computers. (JoachimShipper)
  • Fixing bit errors outside the memory (e.g. in CPU) is harder, something like Tandem Computers could be used, with added expense. (JoachimShipper)
  • Some processor-heavy computing tasks, like calculating scrypt hashes, are not very parallelizable. (skelterpot)
  • Other approaches like redundant hardware and error-checking within the CPU are possible, but they drive up the die area used. (skelterpot)

Comments:

What makes this more economical than running computation on the ground? The only thing I saw that looks like a real benefit is that the cooling is done by black-body radiation, but cooling is a mostly solved problem, right? (I expect it will take more energy to put into orbit than the solar panels will accumulate over their lifetime.)

According to the wiki, the most likely technical show-stopper (which makes sense given the fact that m288 is outside of the inner Van Allen belt) is radiation damage. Proposed fixes include periodic annealing (heating the circuit with a heating element) to repair the damage, and the use of radiation-resistant materials for circuitry.

What about memory? Bits being flipped by cosmic radiation is an issue on Earth; I imagine it must be more significant in space, and annealing won't fix that.

As well, periodic annealing eventually results in your circuit no longer being a circuit, as the wires and capacitors have diffused until there's a short. You might be able to build these with a large enough heat budget that you can get a reasonable number of reheats out of it, but the lifespan is going to be fairly short.

(I expect it will take more energy to put into orbit than the solar panels will accumulate over their lifetime.)

This struck me as an interesting estimate so here's my attempt at checking it:

Wikipedia quotes 300 W/kg solar cells. Medium Earth Orbit ranges from over 2,000 km to 35,000 km above sea level - let's pick a fairly low estimate of 6,000 km. Gravitational potential is 3.32×10^7 J for elevating a 1 kg mass to 6,000 km from sea level. So the solar panel must operate for 1.28 days to recoup the energy cost of elevating the object. This is, of course, a lower bound (assuming perfect launch mechanism, no kinetic energy of orbit, etc. etc.), but it seems unreasonable to assume that launching solar panels has no benefit given this tiny lower bound. Furthermore, the fact that solar panels are routinely launched into orbit suggests that they do have a net energy production.
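
(A sketch of that arithmetic, using standard constants; the 6,000 km altitude and 300 W/kg are the commenter's figures. With the standard Earth radius it comes out at roughly 1.2 days, matching the estimate above to within rounding.)

```python
# Reproducing the lower-bound estimate above (standard constants; the
# altitude and specific power are the commenter's assumptions)
MU = 3.986e14         # Earth's gravitational parameter, m^3/s^2
R_EARTH = 6.371e6     # Earth radius, m
ALTITUDE = 6.0e6      # 6,000 km target altitude, m
SPECIFIC_POWER = 300  # W per kg of solar cells

delta_u = MU * (1 / R_EARTH - 1 / (R_EARTH + ALTITUDE))   # J per kg lifted
print(f"potential energy: {delta_u:.2e} J/kg")            # ~3.0e7 J/kg
print(f"payback time: {delta_u / SPECIFIC_POWER / 86400:.2f} days")  # ~1.2 days
```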

What about memory? Bits being flipped by cosmic radiation is an issue on Earth; I imagine it must be more significant in space, and annealing won't fix that.

What do existing computers-in-space do? Shielding of some sort?

So the solar panel must operate for 1.28 days to recoup the energy cost of elevating the object

The object, sure - what about the rocket? (I also should have included the energy cost of making the solar panel in the first place, which tends to seriously reduce their attractiveness.)

Furthermore, the fact that solar panels are routinely launched into orbit suggests that they do have a net energy production.

Well, solar is cheap to get to space. (I know our recent Mars rover is using nuclear energy (powered by decay, not fission or fusion) rather than solar panels to reduce the impact of Mars dust, and that deep space probes use similar technology because solar irradiance decreases the further away you get.) Batteries in particular are pretty heavy - and so solar panels probably represent the most joules per kilogram in Earth orbit.

But the comparison isn't "solar in space" vs. "chemical in space", it's "solar in space" vs. "anything on earth". The idea of "let's put computers out in space, where the variable cost of running them is zero" misses that the fixed cost of putting them in space is massively high, probably to the point where it eats any benefit from zero variable cost.

That is, this technology looks cool but I don't yet see the comparative advantage.

What do existing computers-in-space do? Shielding of some sort?

Check out the wiki page on radiation hardening. I believe that the primary thing to do with cosmic rays is just noticing when they happen and fixing the flip. I think it's a mostly solved problem, but that the hardware / software is slightly more expensive because of that. (Buying RAM with ECC appears to be difficult for general consumers, but I imagine it's standard in the satellite industry.)

(powered by decay, not fission or fusion)

Isn't decay a subset of fission? (Excluding things like lone protons that don't technically have a nucleus or whatever.)

Yeah, that was sloppy of me. I meant to specify that it was spontaneous fission rather than chain reaction fission.

The term 'fission' is generally reserved for daughter species of vaguely similar mass. Decays are generally alpha (a He-4 nucleus) or beta (an electron and an antineutrino), maybe with some others mixed in.

ECC RAM is standard for servers, so it's not especially hard to get. Fixing bit errors outside the memory (e.g. in CPU) is harder; I imagine something like http://en.wikipedia.org/wiki/Tandem_Computers, essentially running two computers in parallel and checking them against one another, would work. But all of this drives the cost up, which, as you note, is already a problem.
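
For readers unfamiliar with how ECC works, here is a toy single-error-correcting Hamming(7,4) code -- the same principle as ECC RAM, though real server memory uses SECDED codes over 64-bit words rather than 4-bit nibbles:

```python
# Minimal single-error-correcting Hamming(7,4) demo (illustrative only)

def hamming74_encode(d):          # d: list of 4 data bits
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4             # parity over code positions 1,3,5,7
    p2 = d1 ^ d3 ^ d4             # parity over code positions 2,3,6,7
    p3 = d2 ^ d3 ^ d4             # parity over code positions 4,5,6,7
    return [p1, p2, d1, p3, d2, d3, d4]   # code positions 1..7

def hamming74_correct(c):         # c: list of 7 code bits, at most 1 flipped
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 + 2 * s2 + 4 * s3       # index of the flipped bit, or 0
    if syndrome:
        c[syndrome - 1] ^= 1              # repair in place
    return [c[2], c[4], c[5], c[6]]       # recovered data bits

word = [1, 0, 1, 1]
code = hamming74_encode(word)
code[4] ^= 1                              # simulate a cosmic-ray bit flip
assert hamming74_correct(code) == word
print("single bit flip corrected")
```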

There are other clever things you can do, like including redundant hardware and error-checking within the CPU, but they all drive up the die area used. Some of this stuff might be able to actually drive down cost by increasing the manufacturing yield, but in general, it will probably be more expensive.

You seem to have missed my sentence between the two that you quoted:

This is, of course, a lower bound (assuming perfect launch mechanism, no kinetic energy of orbit, etc. etc.), but it seems unreasonable to assume that launching solar panels has no benefit given this tiny lower bound.

My point was that even if the launch is only 0.1% efficient at moving solar cells into space, you're still looking at more than recouping the energy cost of the launch by putting the solar panel up. If you think the launch is much less than 0.1% efficient, I'd be interested in hearing why you think that. They might actually be that inefficient, but I would be hesitant to assume such without having a reason to do so.

Now that lsparrish has posted a link to a better discussion of the subject, my post is more or less obsolete.

But the comparison isn't "solar in space" vs. "chemical in space", it's "solar in space" vs. "anything on earth".

I agree and was not trying to say that this plan was practical - I do not believe it is. I was just pointing out that something you stated as true doesn't appear to be so from a very quick look at the numbers.

The idea of "let's put computers out in space, where the variable cost of running them is zero" misses that the fixed cost of putting them in space is massively high

Sure. That's why they would have to be very lightweight for this to work.

(I expect it will take more energy to put into orbit than the solar panels will accumulate over their lifetime.)

This is answered in the wiki: it takes roughly 2 months for a 3 gram thinsat to pay for the launch energy if it gets 4 watts, assuming 32% fuel manufacturing efficiency. The blackbody cooling is a significant reason as well. (Note: The 7 gram estimate given in the paper is slightly out of date -- the wiki describes 3 grams as the current target.)

What about memory? Bits being flipped by cosmic radiation is an issue on Earth; I imagine it must be more significant in space, and annealing won't fix that.

This is discussed as well, albeit briefly:

"The most radiation sensitive components are likely to be the flash memory. These incorporate error correction, but software error correction and frequent rewrites may be necessary to correct for radiation-induced charges. Some errors may need to be restored from caches on other thinsats partway around the orbit."

So it looks like he is thinking of a combination of redundancy and memory-repair algorithms.

As well, periodic annealing eventually results in your circuit no longer being a circuit, as the wires and capacitors have diffused until there's a short. You might be able to build these with a large enough heat budget that you can get a reasonable number of reheats out of it, but the lifespan is going to be fairly short.

This is also somewhat mentioned in the manufacturing section, where the concern is that differentials in material thermal properties could cause damage.

"The vast bulk of the material, and the largest pieces of of the thinsat, will be laminated engineering glass and metal. Since the thinsat undergoes wide temperature changes when it passes in and out of shadow, or undegoes thermal annealing, it will be more survivable if the glass can match silicon's 2.6E-6/Kelvin coefficient of thermal expansion (CTE). Metals have very high CTEs, while SiO2 has a very low CTE, so slotted metal wires with SiO2 in the gaps is one way to make a "material" that is both conductive and has the same CTE as silicon."

There is also the fact that the wires and capacitors are going to be entirely two-dimensional. My guess is that not all of the assumptions that hold for three-dimensional wires and capacitors necessarily apply in this situation.
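
One way to sanity-check the slotted-wire idea is a stiffness-weighted rule-of-mixtures estimate (a sketch: the copper and fused-silica constants are textbook values, and the simple mixing rule is my assumption, not the wiki's model):

```python
# Rule-of-mixtures estimate for the "slotted wires in SiO2" composite
# (textbook material constants; the mixing rule is a simplification)
ALPHA_SI = 2.6e-6                    # target CTE: silicon, 1/K (from the wiki quote)
ALPHA_CU, E_CU = 16.5e-6, 120e9      # copper CTE (1/K) and Young's modulus (Pa)
ALPHA_SIO2, E_SIO2 = 0.5e-6, 73e9    # fused silica CTE (1/K) and modulus (Pa)

# Solve f*E_CU*ALPHA_CU + (1-f)*E_SIO2*ALPHA_SIO2
#        = ALPHA_SI * (f*E_CU + (1-f)*E_SIO2)  for copper fraction f
f = (E_SIO2 * (ALPHA_SI - ALPHA_SIO2)) / (
    E_CU * (ALPHA_CU - ALPHA_SI) + E_SIO2 * (ALPHA_SI - ALPHA_SIO2))
print(f"copper volume fraction to match silicon: {f:.1%}")   # ~8%
```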

Thanks for the more direct links. I'm starting to update in favor of this working, but I'm still bothered by the amount of speculative tech involved. (If we're going to use new RAM coming out in a few years that'll be cheaper/faster/less error prone, our comparison needs to not be to current tech / costs, but to tech / costs after that new RAM has been integrated.)

I suspect it'll be easier to replace silicon than to get the rest of the thinsat to match the thermal expansion of silicon, but that suspicion is rooted in professor friends who do semiconductor research, not industry, so the costs there might be way higher.

This page was the only thing I could find on the economics (he mentions elsewhere he wants to keep the business plan private).

Another thing to think about: have we sent up stacked things to space like this before, and managed to disengage them from each other? I believe a number of solar sails have failed to unfold correctly, and so there might be a similar problem here. Thankfully, they don't need to be attached to each other, like solar sails do, but now it's a problem if they do get attached to each other, and I don't know which of those is a more difficult engineering problem. (The only description I saw of that on the wiki was 'peeling' them apart.)

On the surface, this sounds to me like a "Space, because space" project. What are the specific benefits to having our servers in orbit as opposed to on Earth?

Here's my short list:

  • Black-body cooling (2.7 Kelvin)
  • Efficient solar (very short night, no atmosphere)
  • Telecommunications (m288 is much faster than GEO)
  • Scalability (space is big)
  • Environment (earth has habitats we don't want to destroy, but probably will if it is the most cost-efficient way to meet consumer demand for processing power)

Look at the first few pages of "the 2009 paper".

Popular science has nothing to do with rationality.

Could you elaborate? That sounds trivially false the way I'm reading it.

Posting interesting bits of new science to LessWrong won't help any of us become more rational. Indeed, it will have a net negative effect, since it is distracting and it dilutes LessWrong's other content. While it's rational to pay attention to new science, this is an outcome of rationality and has nothing to do with rationality itself (just as it's rational to eat nice food where possible, but recommendations of good restaurants aren't appropriate to LessWrong).

Clearly it's fine to use bits of pop-sci to illustrate rationality concepts, and of course the scientific method is of great interest to us. But posting random bits of stuff to LessWrong is bad if they don't bear on rationality. No matter how interesting they are.

I think characterizing this post as pop-sci (or "random", for that matter) is highly misleading. It pattern-matches to that category of things only on a superficial level.

This is actually a request for involvement and a calculation of altruistic benefit on a matter that requires some technical knowledge to evaluate. If you were to explicitly argue that time spent on this proposal is unlikely to have a very favorable utility, that's the kind of information I'm looking for. Interesting is not the point.

Secondly, scientific and engineering challenges are a good way to improve important rationality skills. There are certain considerations that need to be met, though, for a topic to be useful in this regard:

  • It has to be something where most of the necessary information exists so that a fairly coherent Aumann agreement should be reachable. (The hazier the data, the more likely it is to split into a Green/Blue divide, or worse, unanimous groupthink.) Relatedly, it needs to be within the technical ability-to-grasp of the audience.
  • Value of information must at least appear to be high. (Arguments that it is lower than it appears can be revelatory.)
  • There needs to be some difference between mainstream opinion/the easy answer and the opinion that the rationalist is arguing for (otherwise you end up with an exercise in conformity which doesn't actually change any minds).

Finally, while you are making a point I agree with regarding random bits of pop-sci and their lack of a place on LessWrong, please note that by posting it as a reply to my post you are inferentially transferring the properties "random" and "pop-sci" to the article. These are properties that I don't ascribe to it. While I can see the pattern matching that led to the conclusion, it isn't as easy to respond to as if you had made the connection in a more explicit manner.

In any case, I do accept responsibility for the writing style issues that led to your reaction. Will attempt to fix.

This is actually a request for involvement and a calculation of altruistic benefit on a matter that requires some technical knowledge to evaluate.

You're right. I skim read your article and thought it was just about the project, rather than a request for help. I apologise. Oops.

I'm still not sure whether I approve of this kind of post either, I'd much rather that things had a direct bearing on rationality. But I won't continue to drag on with that discussion here.

This would probably be sufficiently relevant for Discussion, but strikes me as not being so relevant to main (per Oscar_Cunningham below).

I hadn't noticed that the original post was in main!

The Lofstrom Loop might be hamstrung by the problem of The Political Economy of Very Large Space Projects. http://www.jetpress.org/volume4/space.htm

Prototyping ServerSky is quite the opposite, however: it's going to become much easier. And if prototyping shows a clear-cut business case, the decision to launch ServerSky becomes merely a net-present-value computation, not a problem of political economy. If the Lofstrom Loop proves to be the most cost-effective way to scale ServerSky (and maintain it), it'll just happen.

Here's how I think prototyping ServerSky could become easier: Zac Manchester at Cornell announced a Kickstarter project last year called KickSat: a CubeSat that sprays chipsats (Sprites). He met his funding target quite handily -- in fact surpassed it by a fairly wide margin. I think KickSat will be launched within the next year or so, courtesy of NASA's ELaNa program.

http://brad.denby.me/starblog/2012/02/

Zac's current emphasis is on proof-of-concept: he wants to just get a launch, and show that ground communication with customized chipsats can be established. The demonstration effect might be quite dramatic. With luck, it could become a global R&D phenomenon roughly paralleling the "mainstreaming" of VLSI design in EECS departments, as inspired by Carver Mead and Lynn Conway, back in the late 70s and early 80s.

Zac's a mere grad student, of course. But the Cornell prof who was leading his group, Mason Peck (himself a pioneer of the chipsat concept), has since moved into NASA as Chief Technologist, replacing Bobby Braun. So I expect to see significant ferment on the chipsat front. Testing the ServerSky concept any time soon (soon enough to reduce terrestrial energy consumption, anyway) will probably depend on global, massively parallel concurrent engineering and testing of real hardware under real space conditions. I think the financial barrier to such an effort is about to drop to a point where even engineering students in the developing world can consider participation in projects that actually go to orbit. Out of such a milieu, the chance that some very useful innovation arises as an unintended consequence seems high enough to make the effort worthwhile, regardless of whether ServerSky itself hits a showstopper. The Mead-Conway design movement had many flaws, but the first RISC on a chip was one of its early successes; if I'm not mistaken, the ARM chip in your smartphone has a heritage tracing back to an ISI multi-project chip design at Carver Mead's home university, Caltech.

I've written on the subject of whether there's anything like a Moore's Law for space launch. I don't think there is one -- launch is governed more by something like Moore's Second Law (the cost of fab lines keeps growing fast), only with a much bigger problem of establishing the Killer App for commercialization.

http://www.thespacereview.com/article/180/1

But I think Moore's Law can still be a driver for space development. New products and services made possible by VLSI scaling (both on spacecraft and on Earth) could result in higher launch rates, which could improve the economies of scale in launcher production and launch operations, and maybe eventually trigger funding in radical space launch technologies -- maybe even Launch Loop, among other ideas. ServerSky is one scenario for "new class of commercial space value propositions". I consider it plausible, at least, if not yet probable.

ChipSats seem kind of impractical on the surface of things. They pay for themselves basically by being a novelty and tapping into enthusiasm. Probably not very scalable.

On the other hand if a relatively low-scale space based computing framework could be developed that can do lots of parallel processing, it could probably pay for itself. One idea I've been playing with to do so would be to use it to mine bitcoins. This is something where things like stability aren't all that critical. If you were to use a low-earth orbit calculated to last a few months, it might be enough to pay for the hardware and launch.

The bitcoin mining cost-to-dollar ratio may change over time, so ROI calculations would be approximate at best. However, rising mining difficulty tends to accompany a rising value of bitcoins, and devaluation of bitcoins tends to reduce mining, because miners have to cover power costs. Free power and free heat radiation should generally grant a competitive advantage, all else being equal.
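
To make the structure of that comparison explicit, here is a toy break-even model -- every number in it is a placeholder, not a forecast:

```python
# Toy break-even model for orbital bitcoin mining -- all figures below are
# hypothetical placeholders; only the structure of the comparison matters.
HASH_RATE = 1e12           # hashes/s the payload sustains (hypothetical)
BTC_PER_HASH = 1e-16       # network reward rate (hypothetical, falls over time)
BTC_PRICE = 100.0          # $/BTC (hypothetical)
LIFETIME_S = 90 * 86400    # ~3 months in a decaying low orbit (per the comment)
LAUNCH_PLUS_HW = 50_000.0  # $ for launch plus hardware (hypothetical)

revenue = HASH_RATE * BTC_PER_HASH * BTC_PRICE * LIFETIME_S
print(f"revenue ${revenue:,.0f} vs cost ${LAUNCH_PLUS_HW:,.0f}")
# The orbital miner pays no power bill, so unlike a ground miner its
# break-even doesn't degrade with electricity prices -- only with difficulty.
```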

I have reservations about Bitcoin, but mainly on the monetary policy level. Inflation sometimes has its uses in economic policy, and deflation can sometimes be a lot more disastrous than inflation. I think Bitcoin, by capping the total amount of money that can possibly circulate, would lend itself to liquidity traps: a deflationary spiral.

However, the idea of "computationally mining the sky" (not just using solar energy, but also using the cosmos as a heat sink) is positively brilliant. Perhaps the only question is: when will its time come?

To be clear: I wasn't proposing KickSat as a business model. That never even occurred to me. For one thing, immediate practicality seems like an awful lot to ask at this point. If the goal is massive computing power in orbit, KickSat isn't going to deliver practicality. (I've got a Sprite coming, and when it arrives, I'll be writing code for a slow 16-bit microprocessor for the first time in perhaps 25 years.)

The more important goal at this point is obvious: to get more reality-testing of the ServerSky idea. We can get more reality-testers if there's an increase in awareness of the idea -- which is why you're posting this notice to Less Wrong, yes? Thinsats seem like an absurd idea on the face of it. But the same could have been said of the planar transistor before it happened. As soon as you had one planar transistor, people started taking the whole idea a lot more seriously. That first planar transistor wasn't a commercially practical device. Probably the first few thousand fabricated at Bell Labs weren't practical. What drove commercialization efforts at that point wasn't profit, it was promise -- a promise made more credible by a physical realization.

Call it "hardware as propaganda", if you like, but most people don't really believe in what they haven't first seen. Less Wrong isn't "most people" -- it partakes of a strong speculative mindset. Please understand -- I like powerful speculation, personally. But it's hardly representative of what makes most people invest time and effort, much less money, in an idea. Talking about planar transistors didn't put the word "germanium" on people's lips. Making one did. It turned out silicon was really the ticket. But if a germanium planar transistor hadn't gotten people saying something, nothing would have happened.

If I had to pick a key phrase in what I posted above, it's "demonstration effect." At this point, it seems the best way to physically demonstrate anything remotely like ServerSky is with a Sprite fleet.

There is something that feels "wrong" to me in this proposal: it doesn't account for Moore's law. Sending objects to space is expensive (even with a launch loop or a space elevator it would still be expensive, if much less so than with traditional rockets), so you can't renew the "server sky" every few years. But with Moore's law, computers a few years ahead are much more powerful than the computers of now. Launch "server sky" now, and in 10 years we can make servers 32 times faster... but we can't change the ones in orbit.

The other problem is ping: with a distance of 12,789 km, you get at best a ping of 85 ms, assuming no other delay and that the nearest satellite can answer you directly. This will rule out many possible usages.

In a recent reply to the comments on Brin's post, Keith Lofstrom mentions using obsolete sats as ballast for much thinner sats that would be added to the arrays as the manufacturing process improves.

He acknowledges that ping times are going to be limited, and higher than you can theoretically get with a fat pipe, but still much better than what you get with GEO.

The m288 central orbit can be seen at 58 degrees north and south latitude, at a distance of 10500 km. The round trip ping time is 70 milliseconds. The ground ping time through optical fiber across the United States is faster in theory, but ground networks are slowed by switches and indirect routes. Ping times from fat-pipe servers in Dallas, Texas to mit.edu are 42 milliseconds, and to orst.edu are 49 milliseconds, so 70 msec is not way out of line. However, much of the routing will travel "around the cloud", and without local caching in the "near" links, some pings may need as much as 200 milliseconds to hop from the far side of the orbit. Still, this is better than the 250+ millisecond ping time through a geosynchronous satellite.
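
(The light-travel arithmetic in that passage checks out; a quick sketch using only the distances quoted:)

```python
# Light-delay check for the ping figures quoted above
C = 299_792_458         # speed of light in vacuum, m/s

slant_m288 = 10_500e3   # ground-to-m288 slant distance from the quote, m
print(f"m288 round trip: {2 * slant_m288 / C * 1000:.0f} ms")   # ~70 ms

geo_alt = 35_786e3      # geostationary altitude, m
print(f"GEO round trip:  {2 * geo_alt / C * 1000:.0f} ms")      # ~239 ms + routing overhead
```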

For lots of processor-heavy things (mining bitcoin, rendering animations, what have you) it isn't especially crucial. High frequency stock trading is probably out.

For lots of processor-heavy things (mining bitcoin, rendering animations, what have you) it isn't especially crucial.

The key thing about those isn't that they're processor heavy; it's that they're very parallelizable, and have minimal data dependencies between subtasks. For an example of something that isn't like this, calculating scrypt hashes is very processor-heavy, but is provably Hard to parallelize.
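
To see why, here is a toy version of scrypt's sequential core (illustrative only -- real scrypt uses Salsa20/8 and different parameters). Each step needs the previous step's output, so the work can't be fanned out across processors the way independent hash trials can:

```python
import hashlib

def toy_scrypt_core(seed: bytes, n: int = 1 << 10) -> bytes:
    """Toy version of scrypt's ROMix: a long chain of data-dependent steps."""
    # Phase 1: fill memory with a strictly sequential hash chain
    v = [seed]
    for _ in range(n - 1):
        v.append(hashlib.sha256(v[-1]).digest())

    # Phase 2: memory reads whose addresses depend on the running state,
    # so even the access pattern can't be precomputed or parallelized
    x = v[-1]
    for _ in range(n):
        j = int.from_bytes(x[:4], "big") % n
        x = hashlib.sha256(bytes(a ^ b for a, b in zip(x, v[j]))).digest()
    return x

print(toy_scrypt_core(b"block header").hex()[:16])
```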

I suspect that most interesting calculations will bottleneck on communication latency.

Why not build data centers around Earth so that there is always solar power available to some of them (and you perhaps get to use the waste heat for something useful like heating homes), and then make the data centers available everywhere on Earth via satellites or something?

The bottom line is that servers on Earth are not as scalable. Solar isn't as available on Earth, and trying to tap into what is available will probably lead to massive habitat destruction. Heating homes with the waste heat makes the heat harder to remove (you have to pump it, which produces more heat), and would require infrastructure that is costly to build (e.g. water pipes). Space is colder than liquid helium, though it does not directly conduct heat away (you have to rely on black-body radiation).
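
For scale, the black-body point can be quantified with the Stefan-Boltzmann law (a sketch: the 20 cm disk size is from the paper; the 300 K operating temperature and unit emissivity are my assumptions, and absorbed sunlight is ignored, so this is an upper bound):

```python
import math

# Radiative cooling capacity of one thinsat via the Stefan-Boltzmann law
# (20 cm diameter from the paper; 300 K and emissivity 1.0 are assumptions)
SIGMA = 5.670e-8                  # Stefan-Boltzmann constant, W/m^2/K^4
T_SAT, T_SPACE = 300.0, 2.7       # operating temp vs cosmic background, K
AREA = 2 * math.pi * 0.10**2      # both faces of a 20 cm diameter disk, m^2

p_radiated = SIGMA * AREA * (T_SAT**4 - T_SPACE**4)
print(f"radiated: {p_radiated:.0f} W")   # ~29 W, ample margin over ~4 W dissipated
```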

For the communications problem, the idea is to use three-dimensional phased arrays of thinsats -- basically a large block of satellites working in unison to generate a specific signal, behaving as if they were a single much larger antenna. This is tricky and requires precision timing and exact distance information. The array's physical configuration will need to be randomized (or perhaps arranged according to an optimized pattern) to prevent grating lobes, an interference problem common to regularly spaced phased arrays. The thinsats would link with GPS and each other by radio on multiple bands to achieve "micron-precision thinsat location and orientation within the array".

Sounds like a good application for that vortex radio thing that basically only works in freespace.

The biggest problem is solar flares and coronal mass ejections. Not your normal day-to-day solar wind and stuff, but honking great gouts of energy and/or coronal mass that are flung out.

Another is comms. How many people would use each server? Really? What's the bandwidth divided by users? From the ground going up, you either need a well-aimed dish or a honking lot of power (or a shitload of antenna topside).

Third, putting lots of these up isn't a wonderful idea, as they will become obsolete quickly and need to be replaced. You then have the choice to de-orbit them (wasteful) or leave them up there (a danger to navigation).

Fourth, fifth and sixth are security, security and security. How do you apply a security patch to something Up There? Yes, it can be done, but what happens if you Brick It? Back to more engineering and more redundancy, etc. Up goes the cost, up goes the size, down goes the payback. How do you prevent eavesdropping on your communications with it? And on its communication with you? Encryption? That's either more CPU cost, or more payload for special processors. How do you protect it from deliberate interference from various organisations that wish to compromise it? Etc., etc.

While the lure of "free" energy (or more accurately, low-cost, reliable energy) is compelling, there are a LOT of ways to get many of those benefits terrestrially. The notion of something like this "reducing poverty" is silly. While there is a lot that processing power can do to reduce poverty, access to raw computing resources ISN'T one of them. You'd be better off deploying something like wifi-to-GSM connections and building effective mesh network protocols. As for various kinds of research, there's plenty of unused power here on this planet. See this: https://flowcharts.llnl.gov/content/energy/energy_archive/energy_flow_2010/LLNLUSEnergy2010.png If you really want to solve a useful problem in this area, try turning that "rejected energy" into CPU cycles. You've got about 26 Quads of wasted energy JUST from electrical distribution.

That's a metric buttload of entropy that could be put to work solving problems if we could figure out how to get to it. CPU cycles that are close, easily upgradeable, easily defendable, easily recoverable when they die or get obsolesced.

(I have this pet notion that landfills are GOOD places to put stuff we can't use right now, because some day some smart bloke is going to figure out how to use nanotech of some kind to (diamond age style) sort the component bits right out and we'll KNOW where all the high density sources are because we'll have been dumping our old crud in there for a hundred years. Now if we can just keep the crap out of our ground water....)

Overall it can be done, and if you want to deliver compute resources to station owners or Aboriginal communities in the outback, or nomadic tribes in the desert regions of the world, it might be cost effective. But those folks would need a local computer and network to access it, so why not just give them a slightly beefier laptop and use some sort of distributed compute engine?

I have a tendency to agree with Mr. Gerard and Mr. Cunningham on this though -- while it's an interesting technical exercise and I (clearly) don't mind talking about it, it's not the sort of thing I come here for.

Oh, I thought it was fine for discussion as popular science with slight local relevance.

If someone with direct expertise on the effects of coronal mass ejections and/or solar flares could comment, that would be good. It sounds like they could cause blackouts every so often, if not outright damage. Note the use of gallium arsenide electronics to minimize radiation damage.

Comms question is discussed here and here. Needs more input from radio specialists.

Obsolescence discussed here, here, and here. Obsolete thinsats should be useful as ballast for future deployments and as radiation shields, as long as control can be maintained.

Bricking a single sat wouldn't be too costly. Bricking the whole fleet would be. So patches should be applied in a relatively piecemeal fashion. External attacks are a problem that needs more discussion, I think. Encryption (probably in hardware) does add to the cost, but is probably worth it.

On reducing poverty: My mental model is that anything that boosts the economy and makes business transactions happen more easily in a generalized fashion (i.e. one that is not dramatically favorable to particular monopolizing agents) is going to reduce poverty. It is a matter of increased employment and decreased costs.

While computer distribution and mesh net access have (I think) high potential for helping people in extreme poverty to do more business and education, there's something to be said for a super-powerful, routinely upgraded computer based in the sky where it can't easily be stolen or broken by local thugs. Also, the relevant utility calculation isn't only a matter of reducing extreme poverty. Unlike mesh networks and so forth this would directly benefit middle class people as well, e.g. millions could cancel their internet subscriptions, start hosting computing-intense personal projects for near-free, and stop upgrading their computers so frequently.

On topicality: I think the disconnect many are feeling between this topic and LessWrong is essentially a feature of the map, not the territory. Rationality is all about solving problems, including the problem of what problems to decide to work on. It is important to realize that when a problem seems too far-mode to consider and scrutinize rationally, that is essentially a feature of your skills and instincts, not the problem itself. (Perhaps I ought to write a top level post about that.)