The following is intended as 1) request for specific criticisms regarding the value of time investment on this project, and 2) pending favorable answer to this, a request for further involvement from qualified individuals. It is not intended as a random piece of interesting pop-sci, despite the subject matter, but as a volunteer opportunity.
Server Sky is an engineering proposal to place thousands (eventually millions) of micron-thin satellites into medium Earth orbit in the near term. It is being put forth by Keith Lofstrom, the inventor of the Launch Loop.
Abstract from the 2009 paper:
It is easier to move bits than atoms or energy. Server-sats are ultralight disks of silicon that convert sunlight into computation and communications. Powered by a large solar cell, propelled and steered by light pressure, networked and located by microwaves, and cooled by black-body radiation. Arrays of thousands of server-sats form highly redundant computation and database servers, as well as phased array antennas to reach thousands of transceivers on the ground.
First generation server-sats are 20 centimeters across ( about 8 inches ), 0.1 millimeters (100 microns) thick, and weigh 7 grams. They can be mass produced with off-the-shelf semiconductor technologies. Gallium arsenide radio chips provide intra-array, inter-array, and ground communication, as well as precise location information. Server-sats are launched stacked by the thousands in solid cylinders, shrouded and vibration-isolated inside a traditional satellite bus.
Some mildly negative evidence to start with: I have already had a satellite scientist tell me that this seems unlikely to work. Avoiding space debris and Kessler Syndrome, radio communications difficulties (especially uplink), and the need for precise synchronization are the obstacles he stressed as significant. He did not seem to have studied the proposal closely, but this at least tells us to be careful where to set our priors.
On the other hand, it appears Keith has given these problems a lot of thought already, and solutions can probably be worked out. The thinsats would have optical thrusters (small solar sails) and would thus be able to move themselves and each other around; defective ones could be collected for disposal without mounting an expensive retrieval mission, and the thrusters would also help them avoid collisions in the first place. Furthermore, the zone chosen (the m288 orbit) is relatively unused, so collisions with other satellites are unlikely. The satellites also have powerful radar capabilities, which should make space junk easier to detect and eliminate.
For the communications problem, the idea is to use three dimensional phased arrays of thinsats -- basically a bunch of satellites in a large block working in unison to generate a specific signal, behaving as if they were a much larger antenna. This is tricky and requires precision timing and exact distance information. The array's physical configuration will need to be randomized (or perhaps arranged according to an optimized pattern) in order to prevent grating lobes, a problem with interference patterns that is common with phased arrays. They would link with GPS and each other by radio on multiple bands to achieve "micron-precision thinsat location and orientation within the array".
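To illustrate why the array's layout would need to be randomized, here is a minimal sketch of the grating-lobe effect. All of the numbers (element count, spacing, wavelength) are assumed for illustration and are not from the Server Sky design: a sparse *regular* array repeats its main beam at sin(theta) = m * lambda / d, while randomizing the element positions breaks that periodicity and smears the grating lobes into low-level noise.

```python
import numpy as np

wavelength = 0.04                   # metres (assumed microwave link)
k = 2 * np.pi / wavelength
n = 64                              # elements (thinsats) in a line (assumed)
d = 10 * wavelength                 # spacing far beyond lambda/2, as in a dispersed array

regular = np.arange(n) * d
rng = np.random.default_rng(0)
randomized = regular + rng.uniform(-5, 5, n) * wavelength  # jitter each position

def array_factor(positions, angles):
    # Normalized far-field response of unit-amplitude elements, beam at broadside.
    steering = np.exp(1j * k * np.outer(np.sin(angles), positions))
    return np.abs(steering.sum(axis=1)) / len(positions)

angles = np.linspace(-0.15, 0.15, 2001)   # radians off boresight
af_reg = array_factor(regular, angles)
af_rnd = array_factor(randomized, angles)

# Away from the main lobe, the regular grid still hits ~1.0 (a full-strength
# grating lobe at sin(theta) = 0.1), while the randomized layout stays far below it.
off_lobe = np.abs(angles) > 0.005
```

The same logic is why the wiki talks about micron-precision position knowledge: the phase applied at each element has to reflect where that element actually is, or the beam degrades the same way.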
According to the wiki, the most likely technical show-stopper (which makes sense given the fact that m288 is outside of the inner Van Allen belt) is radiation damage. Proposed fixes include periodic annealing (heating the circuit with a heating element) to repair the damage, and the use of radiation-resistant materials for circuitry.
Has anyone else here researched this idea, or have relevant knowledge? It seems like a great potential source of computing power for AI research, mind uploads, and so forth, but also for all those mundane, highly lucrative near term demands like web hosting and distributed business infrastructures.
From an altruistic standpoint, this kind of system could reduce poverty and increase equitable distribution of computing resources. It could also make solving hard scientific problems like aging and cryopreservation easier, and pave the road to solar power satellites. As it scales, it should also create demand (as well as available funding and processing power) for Launch Loop construction, or some other similarly low-cost form of space travel.
The value of information as to whether this can work or not therefore appears to be extremely high, which I think makes it a crucial question for a rationalist project. If it can work, the value of taking productive action (leadership, getting it funded, working out the problems, etc.) should be correspondingly high as well.
Update: Keith Lofstrom has responded on the wiki to the questions raised by the satellite scientist.
Note: Not all aspects of the project have complete descriptions yet, but there are answers to a lot of questions in the wiki.
Here is a summary list of questions raised and answers so far:
- How does this account for Moore's Law? (kilobug)
In his reply to the comments on Brin's post, Keith Lofstrom mentions using obsolete sats as ballast for much thinner sats that would be added to the arrays as the manufacturing process improves. Obsolete sats would not stay in use for long.
- What about ping time limits? (kilobug)
Ping times will be limited (70 ms or so), worse than the theoretical best with a fat pipe (42 ms), but still much better than GEO (250+ ms). This is bad for high-frequency trading, but fine for (parallelizable) number crunching and most other practical purposes.
- What kind of power consumption? Doesn't it cost more to launch than you save? (Vanvier)
It takes roughly 2 months for a 3 gram thinsat to pay back its launch energy if it gets 4 watts, assuming 32% fuel manufacturing efficiency. Blackbody cooling is another benefit.
- Bits being flipped by cosmic radiation is a problem on Earth; how can it be solved in space? (Vanvier)
Flash memory is acknowledged to be the most radiation-sensitive component of the satellite. The solution would involve extensive error correction software and caching on multiple satellites.
- Periodic annealing tends to cause short circuits. Wouldn't this result in very short lifetimes? (Vanvier)
Circuits will be manufactured as two-dimensional planes, which don't short as easily. Another significant engineering challenge: thermal properties in the glass will need to be matched with the silicon and wires (for example, slotted wiring with silicon dioxide between the gaps) to prevent circuit damage. Per Vanvier, it may be less expensive to replace silicon with other materials for this purpose.
- What are the specific advantages of putting servers in space? (ZankerH)
Efficient power/cooling, increased communications, overall scalability, relative lack of environmental impact.
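The ping numbers above are easy to sanity-check from geometry alone. This sketch assumes an m288 altitude of roughly 6,411 km (the figure given on the Server Sky wiki) and the standard GEO altitude of 35,786 km:

```python
C = 299_792_458  # speed of light in vacuum, m/s

def min_rtt_ms(altitude_m: float) -> float:
    """Best-case round trip to a server directly overhead: signal goes up, reply comes down."""
    return 2 * altitude_m / C * 1e3

m288 = min_rtt_ms(6_411_000)    # ~42.8 ms -- matches the quoted "fat pipe" floor
geo  = min_rtt_ms(35_786_000)   # ~238.7 ms -- the familiar 250+ ms GEO latency
```

Slant range to a satellite lower on the horizon, plus switching and processing delays, is what pushes the practical m288 figure up toward the quoted ~70 ms.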
Yet to be answered:
- Is the amount of speculative tech too high? E.g. if future kinds of RAM are needed, costs may be higher. (Vanvier)
- Is it easier to replace silicon with something else than find ways to make the rest of the sat match thermal expansion of silicon? (Vanvier)
- Can we get more data on economics/business plan? (Vanvier)
- Solar sails have been known to stick together. Is this a problem for thinsats, which are shipped stuck together? (Vanvier)
- Do most interesting processes bottleneck on communication efficiency? (skelterpot)
- What decreases in cost might we see with increased manufacturing yield? (skelterpot)
Insightful comments:
- Launch energy vs energy collection (answer above is more specific, but this was a commendable quick-check). (tgb)
- ECC RAM is standard technology used in server computers. (JoachimShipper)
- Fixing bit errors outside the memory (e.g. in CPU) is harder, something like Tandem Computers could be used, with added expense. (JoachimShipper)
- Some processor-heavy computing tasks, like calculating scrypt hashes, are not very parallelizable. (skelterpot)
- Other approaches like redundant hardware and error-checking within the CPU are possible, but they drive up the die area used. (skelterpot)
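The Tandem-style redundancy mentioned above boils down to running a computation on several machines and voting on the result. A minimal illustration of the idea (here imagined as three satellites holding replicas, not the actual Server Sky design) is a bitwise majority vote, which outvotes any single flipped bit:

```python
def majority_vote(a: int, b: int, c: int) -> int:
    # A bit is 1 in the result iff it is 1 in at least two of the three replicas.
    return (a & b) | (a & c) | (b & c)

correct = 0b1011_0110
flipped = correct ^ 0b0000_1000   # one replica suffers a single-event upset
recovered = majority_vote(correct, correct, flipped)
```

This is the same logic that triple-modular-redundant hardware implements in silicon, which is why (per skelterpot) it drives up die area: everything exists in triplicate plus voters.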
The biggest problem is solar flares and coronal mass ejections. Not your normal day-to-day solar wind and such, but honking great gouts of energy and/or coronal mass that get flung out.
Another is comms. How many people would use each server, really? What's the bandwidth divided by users? From the ground going up you either need a well-aimed dish or a honking lot of power (or a shitload of antenna topside).
Third, putting lots of these up isn't a wonderful idea, as they will become obsolete quickly and need to be replaced. You then have the choice of de-orbiting them (wasteful) or leaving them up there (a danger to navigation).
Fourth, fifth and sixth are security, security and security. How do you apply a security patch to something Up There? Yes, it can be done, but what happens if you Brick It? Back to more engineering and more redundancy, etc. Up goes the cost, up goes the size, down goes the payback. How do you prevent eavesdropping on your communications with it, and on its communications with you? Encryption? That's either more CPU cost or more payload for special processors. How do you protect it from deliberate interference by the various organisations that would wish to compromise it? Etc., etc.
While the lure of "free" energy (or more accurately, low-cost, reliable energy) is compelling, there are a LOT of ways to get many of those benefits terrestrially. The notion of something like this "reducing poverty" is silly. While there is a lot that processing power can do to reduce poverty, access to raw computing resources ISN'T one of them. You'd be better off deploying something like wifi-to-GSM connections and building effective mesh network protocols. As for various kinds of research, there's plenty of unused power here on this planet. See this: https://flowcharts.llnl.gov/content/energy/energy_archive/energy_flow_2010/LLNLUSEnergy2010.png If you really want to solve a useful problem in this area, try turning that "rejected energy" into CPU cycles. You've got about 26 quads of wasted energy JUST from electrical distribution.
That's a metric buttload of entropy that could be put to work solving problems if we could figure out how to get to it. CPU cycles that are close, easily upgradeable, easily defendable, easily recoverable when they die or get obsolesced.
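For scale, here is the 26-quad figure converted to more familiar units. The only assumptions are the standard definitions (1 quad = 1.055e18 J) and that the LLNL chart's figure is per year:

```python
QUAD_J = 1.055e18          # joules per quad (standard definition)
SECONDS_PER_YEAR = 3.156e7

rejected_j = 26 * QUAD_J                             # ~2.74e19 J per year
average_power_gw = rejected_j / SECONDS_PER_YEAR / 1e9  # continuous equivalent
```

That works out to roughly 870 GW of continuous power, so even a small recovered fraction would dwarf any near-term orbital array.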
(I have this pet notion that landfills are GOOD places to put stuff we can't use right now, because some day some smart bloke is going to figure out how to use nanotech of some kind to (diamond age style) sort the component bits right out and we'll KNOW where all the high density sources are because we'll have been dumping our old crud in there for a hundred years. Now if we can just keep the crap out of our ground water....)
Overall it can be done, and if you want to deliver compute resources to station owners or Aboriginal communities in the outback, or nomadic tribes in the desert regions of the world, it might be cost effective. But those folks would need a local computer and network to access it, so why not just give them a slightly beefier laptop and use some sort of distributed compute engine?
I have a tendency to agree with Mr. Gerard and Mr. Cunningham on this though--while it's an interesting technical exercise and I (clearly) don't mind talking about it, it's not the sort of thing I come here for.
Oh, I thought it was fine for discussion as popular science with slight local relevance.